doi: 10.17586/2226-1494-2017-17-3-409-416


THEORETICAL ANALYSIS OF DYNAMIC SELECTION OF SWITCHING AUXILIARY OBJECTIVES ON XdivK PROBLEM

I. A. Petrova, A. S. Buzdalova, A. A. Shalyto


Article in Russian

For citation: Petrova I.A., Buzdalova A.S., Shalyto A.A. Theoretical analysis of dynamic selection of switching auxiliary objectives on XdivK problem. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2017, vol. 17, no. 3, pp. 409–416 (in Russian). doi: 10.17586/2226-1494-2017-17-3-409-416

Abstract

Subject of Research. The paper analyzes the reasons for the inefficiency of the EA+RL method on the XdivK optimization problem with switching auxiliary objectives, and proposes a modification of the EA+RL method. The EA+RL method increases the efficiency of an evolutionary algorithm by introducing auxiliary objectives. The XdivK problem is characterized by a large number of local optima. Switching auxiliary objectives help the algorithm escape from local optima at some stages of optimization while being obstructive at other stages. Method. To analyze the EA+RL method and the proposed modification theoretically, the corresponding optimization process was modeled by Markov chains. The number of fitness function evaluations needed to reach the optimum was estimated from the analysis of transition probabilities. Main Results. The EA+RL method and its proposed modification were theoretically analyzed on the XdivK problem with switching auxiliary objectives. It was proved that, unlike the EA+RL method, the proposed modification ignores obstructive objectives. Lower and upper bounds on the running time of the proposed modification were obtained. Practical Relevance. The proposed modification increases the efficiency of the EA+RL method, which has been successfully applied to NP-hard optimization problems such as the test case generation problem.
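The Markov-chain runtime estimation mentioned in the abstract can be illustrated with a minimal sketch. As an assumption for illustration only (the function names are hypothetical and the paper's exact chains for EA+RL on XdivK are not reproduced here), the example below shows the XdivK fitness function and applies first-step analysis of transition probabilities to the simpler case of randomized local search (RLS) on OneMax, where the expected number of fitness evaluations can be computed exactly:

```python
from fractions import Fraction

def xdivk(bits, k):
    """XdivK fitness: the number of one-bits integer-divided by k.
    Plateaus of width k create many local optima."""
    return sum(bits) // k

def rls_onemax_expected_runtime(n):
    """Expected number of fitness evaluations for RLS (flip one
    uniformly random bit, accept if fitness does not decrease) to
    optimize OneMax on n bits. The process is a birth-death Markov
    chain over the number of one-bits i: the chain advances with
    probability (n - i) / n and stays otherwise, so by first-step
    analysis the expected time to leave state i is n / (n - i)."""
    return sum(Fraction(n, n - i) for i in range(n))

def harmonic(n):
    """n-th harmonic number H_n as an exact fraction."""
    return sum(Fraction(1, j) for j in range(1, n + 1))

# Sanity check against the closed form: expected runtime is n * H_n.
print(rls_onemax_expected_runtime(4))                     # 25/3
print(rls_onemax_expected_runtime(4) == 4 * harmonic(4))  # True
```

Exact `Fraction` arithmetic keeps the transition-probability sums free of floating-point error, which matters when comparing against closed-form bounds.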


Keywords: evolutionary algorithm, reinforcement learning, Markov chains, runtime analysis

Acknowledgements. The work of I.A. Petrova and A.S. Buzdalova is financially supported by RFBR research project No. 16-31-00380 mol_a.

References
1. Skobtsov Yu.A., Fedorov E.E. Metaheuristics. Donetsk, Noulidzh Publ., 2013, 426 p. (In Russian)
2. Neumann F., Wegener I. Can single objective optimization profit from multiobjective optimization? In: Multiobjective Problem Solving from Nature: From Concepts to Applications, 2008, pp. 115–130. doi: 10.1007/978-3-540-72964-8_6
3. Neumann F., Wegener I. Minimum spanning trees made easier via multi-objective optimization. Natural Computing, 2006, vol. 5, no. 3, pp. 305–319. doi: 10.1007/s11047-006-9004-x
4. Knowles J.D., Watson R.A., Corne D. Reducing local optima in single-objective problems by multi-objectivization. Lecture Notes in Computer Science, 2001, vol. 1993, pp. 269–283.
5. Segura C., Coello C.A.C., Miranda G., Leon C. Using multi-objective evolutionary algorithms for single-objective optimization. 4OR, 2013, vol. 11, no. 3, pp. 201–228. doi: 10.1007/s10288-013-0248-x
6. Jensen M.T. Helper-objectives: using multi-objective evolutionary algorithms for single-objective optimisation: evolutionary computation combinatorial optimization. Journal of Mathematical Modelling and Algorithms, 2004, vol. 3, no. 4, pp. 323–347.
7. Lochtefeld D.F., Ciarallo F.W. Helper objective optimization strategies for the job-shop scheduling problem. Applied Soft Computing, 2011, vol. 11, no. 6, pp. 4161–4174. doi: 10.1016/j.asoc.2011.03.007
8. Buzdalova A., Buzdalov M. Increasing efficiency of evolutionary algorithms by choosing between auxiliary fitness functions with reinforcement learning. Proc. 11th Int. Conf. on Machine Learning and Applications. Boca Raton, USA, 2012, vol. 1, pp. 150–155. doi: 10.1109/ICMLA.2012.32
9. Buzdalova A.S., Buzdalov M.V. Efficiency increasing method of the evolutionary algorithms by reinforcement learning. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2012, no. 5, pp. 115–119. (In Russian)
10. Sutton R.S., Barto A.G. Reinforcement Learning: An Introduction. Cambridge, MIT Press, 1998, 322 p.
11. Gosavi A. Reinforcement learning: a tutorial survey and recent advances. INFORMS Journal on Computing, 2009, vol. 21, no. 2, pp. 178–192. doi: 10.1287/ijoc.1080.0305
12. Buzdalov M., Buzdalova A. Adaptive selection of helper-objectives for test case generation. Proc. IEEE Congress on Evolutionary Computation. Cancun, Mexico, 2013, vol. 1, pp. 2245–2250. doi: 10.1109/CEC.2013.6557836
13. Buzdalov M., Buzdalova A. OneMax helps optimizing XdivK: theoretical runtime analysis for RLS and EA+RL. Proc. 16th Genetic and Evolutionary Computation Conference. Vancouver, Canada, 2014, pp. 201–202.
14. Buzdalov M., Buzdalova A. Can OneMax help optimizing LeadingOnes using the EA+RL method? Proc. IEEE Congress on Evolutionary Computation. Sendai, Japan, 2015, pp. 1762–1768. doi: 10.1109/CEC.2015.7257100
15. Buzdalov M., Buzdalova A., Shalyto A. A first step towards the runtime analysis of evolutionary algorithm adjusted with reinforcement learning. Proc. 12th Int. Conf. on Machine Learning and Applications. Miami, USA, 2013, vol. 1, pp. 203–208. doi: 10.1109/ICMLA.2013.42



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.