ABSTRACT
The multi-objective covariance matrix adaptation evolution strategy (MO-CMA-ES) is an evolutionary algorithm for continuous vector-valued optimization. It combines indicator-based selection using the contributing hypervolume with the efficient strategy parameter adaptation of the elitist covariance matrix adaptation evolution strategy (CMA-ES). Step sizes (i.e., mutation strengths) are adapted at the individual level using an improved implementation of the 1/5-th success rule. In the original MO-CMA-ES, a mutation is regarded as successful if the offspring ranks better than its parent in the elitist, rank-based selection procedure. In contrast, we propose to regard a mutation as successful if the offspring is selected into the next parental population. This criterion is easier to implement and reduces the computational complexity of the MO-CMA-ES, in particular of its steady-state variant. The new step size adaptation improves the performance of the MO-CMA-ES, as shown empirically on a large set of benchmark functions. The new update scheme in general leads to larger step sizes and thereby counteracts premature convergence. The experiments comprise the first evaluation of the MO-CMA-ES for problems with more than two objectives.
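The success-based step size update described above can be sketched as follows. This is a minimal illustration of a smoothed 1/5-th-success-rule update as used in (1+1)-selection schemes such as the MO-CMA-ES; the parameter names (`p_target`, `d`, `c_p`) follow common conventions in the literature, and the default values are illustrative assumptions rather than the paper's exact settings. The `success` flag encodes the proposed criterion: whether the offspring was selected into the next parental population.

```python
import math

def update_step_size(sigma, p_succ_bar, success,
                     p_target=2.0 / 11.0, d=1.5, c_p=1.0 / 12.0):
    """Smooth the binary success indicator and rescale the step size.

    sigma      -- current step size (mutation strength) of the individual
    p_succ_bar -- exponentially smoothed success rate of the individual
    success    -- 1 if the offspring was selected into the next parental
                  population (the proposed criterion), 0 otherwise
    """
    # Exponential smoothing of the success indicator.
    p_succ_bar = (1.0 - c_p) * p_succ_bar + c_p * success
    # Increase sigma when succeeding more often than the target rate,
    # decrease it otherwise (multiplicative, always positive).
    sigma *= math.exp((1.0 / d) * (p_succ_bar - p_target) / (1.0 - p_target))
    return sigma, p_succ_bar
```

A successful offspring pushes the smoothed rate above the target and enlarges the step size; repeated failures shrink it. Because larger step sizes are retained longer under the new criterion, this is consistent with the abstract's observation that the scheme counteracts premature convergence.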
Index Terms
- Improved step size adaptation for the MO-CMA-ES