Evolution Strategies (ES)

class pypop7.optimizers.es.es.ES(problem, options)[source]

Evolution Strategies (ES).

This is the abstract class for all ES classes. Please use any of the instantiated subclasses of ES to optimize the black-box problem at hand.

Note

ES are a well-established family of randomized population-based search algorithms, proposed by two German students, Ingo Rechenberg and Hans-Paul Schwefel (both recipients of the IEEE Evolutionary Computation Pioneer Award 2002). One key property of ES is the adaptability of their strategy parameters, which can significantly accelerate the (local) convergence rate in many cases. Recently, the theoretical foundation of the most representative (modern) version, CMA-ES, has been partly built on the Information-Geometric Optimization (IGO) framework via invariance principles (IGO was itself originally inspired by Natural Evolution Strategies).
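As a minimal, purely illustrative example of strategy-parameter adaptation (not part of this class), the classic (1+1)-ES with Rechenberg's 1/5th success rule adjusts the global step-size online: it is enlarged after a successful mutation and shrunk after an unsuccessful one, keeping the empirical success rate near 1/5. All names below are chosen for the sketch, under these assumptions:

    import numpy as np

    def one_plus_one_es(f, x0, sigma=0.3, max_evals=2000, seed=0):
        """Minimal (1+1)-ES with the 1/5th success rule (illustrative sketch only)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        for _ in range(max_evals):
            y = x + sigma*rng.standard_normal(x.size)  # isotropic Gaussian mutation
            fy = f(y)
            if fy <= fx:  # success: accept the offspring and enlarge the step-size
                x, fx = y, fy
                sigma *= np.exp(1.0/3.0)
            else:  # failure: keep the parent and shrink the step-size
                sigma *= np.exp(-1.0/12.0)
            # one success followed by four failures leaves sigma unchanged (~1/5 success rate)
        return x, fx

    # usage: minimize the 10-dimensional sphere function
    best_x, best_f = one_plus_one_es(lambda z: float(np.dot(z, z)), np.ones(10))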

“Their two most prominent design principles are unbiasedness and adaptive control of parameters of the sample distribution. Parameter control is not always directly inspired by biological evolution, but is an indispensable and central feature of ES. The result of selection and recombination is often deterministic. For recombination, using a single parental centroid has become the most popular approach, because such algorithms are simpler to formalize, easier to analyze, and even perform better in various circumstances as they allow for maximum genetic repair.”—[Hansen et al., 2015]
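To make the "single parental centroid" idea above concrete, the following minimal sketch performs one generation of a (mu/mu, lambda)-ES: lambda offspring are sampled around a single mean, the mu best are selected (comma selection discards the parent), and the new mean is their centroid. Function and variable names here are illustrative only, not the PyPop7 API:

    import numpy as np

    def mu_comma_lambda_step(f, mean, sigma, n_parents, n_individuals, rng):
        """One generation of a (mu/mu, lambda)-ES using a single parental centroid."""
        # sample lambda offspring from an isotropic Gaussian centered at the mean
        z = rng.standard_normal((n_individuals, mean.size))
        offspring = mean + sigma*z
        fitness = np.array([f(x) for x in offspring])
        # comma selection: keep only the mu best offspring (the old mean is discarded)
        best = np.argsort(fitness)[:n_parents]
        # intermediate (centroid) recombination: the new mean averages the selected offspring
        return offspring[best].mean(axis=0), fitness[best[0]]

    rng = np.random.default_rng(2022)
    mean, sigma = 3.0*np.ones(5), 0.5
    for _ in range(200):  # sigma is kept fixed here; a real ES would also adapt it
        mean, best_f = mu_comma_lambda_step(lambda x: float(np.dot(x, x)),
                                            mean, sigma, n_parents=3,
                                            n_individuals=12, rng=rng)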

In 2017, OpenAI designed a scalable ES version (called OpenAI-ES) that obtained competitive performance for deep neural network-based policy search in reinforcement learning.
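At its core, OpenAI-ES estimates a search gradient from fitness-weighted Gaussian perturbations of a single parameter vector (typically with antithetic/mirrored sampling) and then takes a stochastic-gradient step. The sketch below is a simplified single-process version of that idea (no fitness shaping or distributed evaluation); names are illustrative only:

    import numpy as np

    def openai_es_step(f, theta, sigma=0.1, lr=0.02, n_pairs=50, rng=None):
        """One simplified OpenAI-ES update with antithetic (mirrored) sampling."""
        rng = np.random.default_rng() if rng is None else rng
        eps = rng.standard_normal((n_pairs, theta.size))
        # evaluate mirrored perturbations theta +/- sigma*eps_i
        f_pos = np.array([f(theta + sigma*e) for e in eps])
        f_neg = np.array([f(theta - sigma*e) for e in eps])
        # Monte-Carlo estimate of the gradient of E[f]; step against it to minimize
        grad = (f_pos - f_neg) @ eps / (2.0*n_pairs*sigma)
        return theta - lr*grad

    theta = np.ones(20)
    for _ in range(300):
        theta = openai_es_step(lambda x: float(np.dot(x, x)), theta)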

For some interesting applications of ES, please refer to [SIMULIA > CST Studio Suite > Automatic Optimization (Dassault Systèmes)], [Yang et al., 2024, CVPR (CMU)], [Elfikky et al., 2024, LWC (UCSC + Qualcomm Inc.)], [Martin&Sandholm, 2024 (CMU + Strategy Robot, Inc. + Optimized Markets, Inc. + Strategic Machine, Inc.)], [Reali et al., 2024 (Microsoft Research, Gates Medical Research Institute, Hackensack Meridian Health, etc.)], [Science Robotics-2023], [Nature Medicine-2023], [TMRB-2023], [ACL-2023], [NeurIPS-Workshop-2023], [TVCG-2014], [AIAAJ-2014], [ICML-2009], [Nature-2000], just to name a few.

Parameters:
  • problem (dict) –

    problem arguments with the following common settings (keys):
    • ’fitness_function’ - objective function to be minimized (func),

    • ’ndim_problem’ - dimensionality of the problem (int),

    • ’upper_boundary’ - upper boundary of search range (array_like),

    • ’lower_boundary’ - lower boundary of search range (array_like).

  • options (dict) –

    optimizer options with the following common settings (keys):
    • ’max_function_evaluations’ - maximum number of function evaluations (int, default: np.inf),

    • ’max_runtime’ - maximal runtime allowed (float, default: np.inf),

    • ’seed_rng’ - seed for random number generation, which needs to be explicitly set (int);

    and with the following particular settings (keys):
    • ’n_individuals’ - number of offspring/descendants, aka offspring population size (int),

    • ’n_parents’ - number of parents/ancestors, aka parental population size (int),

    • ’mean’ - initial (starting) point (array_like),

      • if not given, it will be drawn uniformly at random from the search range bounded by problem[‘lower_boundary’] and problem[‘upper_boundary’].

    • ’sigma’ - initial global step-size, aka mutation strength (float).
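A minimal usage sketch combining the problem and options settings listed above with one instantiated subclass (CMAES is assumed here, following the library's other examples; any other ES subclass can be substituted, and the result key follows those examples):

    import numpy as np
    from pypop7.optimizers.es.cmaes import CMAES  # any instantiated ES subclass works

    def sphere(x):  # simple convex test function to be minimized
        return float(np.dot(x, x))

    ndim = 10
    problem = {'fitness_function': sphere,
               'ndim_problem': ndim,
               'lower_boundary': -5.0*np.ones((ndim,)),
               'upper_boundary': 5.0*np.ones((ndim,))}
    options = {'max_function_evaluations': 10000,  # common setting
               'seed_rng': 2022,                   # needs to be set explicitly
               'mean': 3.0*np.ones((ndim,)),       # particular setting: initial point
               'sigma': 0.3}                       # particular setting: initial step-size
    results = CMAES(problem, options).optimize()
    print(results['best_so_far_y'])  # best fitness found so far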

mean

initial (starting) point, aka mean of Gaussian search/sampling/mutation distribution.

Type:

array_like

n_individuals

number of offspring/descendants, aka offspring population size.

Type:

int

n_parents

number of parents/ancestors, aka parental population size.

Type:

int

sigma

global step-size, aka mutation strength (i.e., overall std of Gaussian search distribution).

Type:

float

References

https://homepages.fhv.at/hgb/downloads/ES-Is-Not-Gradient-Follower.pdf

Ollivier, Y., Arnold, L., Auger, A. and Hansen, N., 2017. Information-geometric optimization algorithms: A unifying picture via invariance principles. Journal of Machine Learning Research, 18(18), pp.1-65.

https://blog.otoro.net/2017/10/29/visual-evolution-strategies/

Hansen, N., Arnold, D.V. and Auger, A., 2015. Evolution strategies. In Springer Handbook of Computational Intelligence (pp. 871-898). Springer, Berlin, Heidelberg.

Bäck, T., Foussette, C. and Krause, P., 2013. Contemporary evolution strategies. Springer, Berlin, Heidelberg.

http://www.scholarpedia.org/article/Evolution_strategies

Beyer, H.G. and Schwefel, H.P., 2002. Evolution strategies–A comprehensive introduction. Natural Computing, 1(1), pp.3-52.

Rechenberg, I., 2000. Case studies in evolutionary experimentation and computation. Computer Methods in Applied Mechanics and Engineering, 186(2-4), pp.125-140.

Rechenberg, I., 1989. Evolution strategy: Nature’s way of optimization. In Optimization: Methods and Applications, Possibilities and Limitations (pp. 106-126). Springer, Berlin, Heidelberg.

Schwefel, H.P., 1988. Collective intelligence in evolving systems. In Ecodynamics (pp. 95-100). Springer, Berlin, Heidelberg.

Schwefel, H.P., 1984. Evolution strategies: A family of non-linear optimization techniques based on imitating some principles of organic evolution. Annals of Operations Research, 1(2), pp.165-167.

Rechenberg, I., 1984. The evolution strategy. A mathematical model of darwinian evolution. In Synergetics—from Microscopic to Macroscopic Order (pp. 122-132). Springer, Berlin, Heidelberg.
