ABSTRACT
Despite significant empirical and theoretical evidence that non-static parameter choices can be strongly beneficial in evolutionary computation, the question of how best to adjust parameter values plays only a marginal role in contemporary research on discrete black-box optimization. This has led to the unsatisfactory situation in which feedback-free parameter selection rules, such as the cooling schedule of Simulated Annealing, are predominant in state-of-the-art heuristics, while, at the same time, we understand very well that such time-dependent selection rules cannot perform as well as adjustment rules that do take into account the evolution of the optimization process. A number of adaptive and self-adaptive parameter control strategies have been proposed in the literature, but they have not (yet) made their way to a broader audience. A key obstacle seems to lie in their rather complex update rules.
The purpose of our work is to demonstrate that high-performing online parameter selection rules do not have to be very complicated. More precisely, we experiment with a multiplicative, comparison-based update rule to adjust the mutation rate of a (1+1) Evolutionary Algorithm. We show that this simple self-adjusting rule outperforms the best static unary unbiased black-box algorithm on LeadingOnes, achieving an almost optimal speedup of about 18%. We also observe promising performance on the OneMax benchmark problem.
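The multiplicative, comparison-based update described above can be sketched as follows: the (1+1) EA keeps a mutation rate p that is multiplied by a factor A > 1 whenever the offspring strictly improves on the parent, and by a factor 0 < b < 1 otherwise. The concrete values A = 1.2, b = 0.85, and the caps on p used below are illustrative assumptions for this sketch, not the tuned parameters from the paper.

```python
import random

def leading_ones(x):
    """LeadingOnes fitness: the number of leading 1-bits of the bit string."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def self_adjusting_one_plus_one_ea(n, fitness, A=1.2, b=0.85,
                                   max_evals=10**6, seed=None):
    """(1+1) EA with a multiplicative, comparison-based mutation-rate update.

    After each offspring evaluation, the mutation rate p is multiplied by
    A > 1 on a strict improvement and by 0 < b < 1 otherwise; p is kept
    within [1/n, 1/2]. All numeric choices here are illustrative.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    p = 1.0 / n  # start from the classical static rate 1/n
    evals = 1
    while fx < n and evals < max_evals:
        # standard bit mutation: flip each bit independently with probability p
        y = [1 - bit if rng.random() < p else bit for bit in x]
        fy = fitness(y)
        evals += 1
        if fy > fx:  # comparison-based: only "strictly better or not" is used
            x, fx = y, fy
            p = min(A * p, 0.5)       # reward success, cap the rate at 1/2
        else:
            p = max(b * p, 1.0 / n)   # penalize failure, floor the rate at 1/n
    return evals, fx

evals, best = self_adjusting_one_plus_one_ea(50, leading_ones, seed=1)
print(evals, best)
```

With the rate floored at 1/n, the algorithm never performs worse than roughly the static (1+1) EA on LeadingOnes, while successful steps temporarily raise the rate; the speedup reported in the abstract depends on the (tuned) update strengths, which this sketch does not reproduce.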
Index Terms
- Simple on-the-fly parameter selection mechanisms for two classical discrete black-box optimization benchmark problems