PyPop7’s Online Documentation for Black-Box Optimization (BBO)
PyPop7 is a Pure-PYthon library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems. Its design goal is to provide a unified interface and elegant implementations for Black-Box Optimizers (BBO), particularly population-based optimizers, in order to facilitate research repeatability, benchmarking of BBO, and real-world applications.
Specifically, to alleviate the curse of dimensionality in BBO, the primary focus of PyPop7 is to cover State-Of-The-Art (SOTA) implementations for Large-Scale Optimization (LSO) as much as possible, though many medium/small-scale versions and variants are also included (some mainly for theoretical purposes, some for benchmarking purposes, and some for application purposes on medium/low dimensions).
Note
This open-source Python library for continuous BBO is still under active maintenance. In the future, we plan to add new BBO and SOTA versions of existing BBO families, in order to keep this library as fresh as possible. Any suggestions, extensions, improvements, usage reports, and tests for this open-source Python library are highly welcome!
Quick Start
In practice, three simple steps are enough to utilize the potential of PyPop7 for black-box optimization (BBO):
Install PyPop7 via pip (pip install pypop7). See this online documentation for details about multiple installation methods.
Define/code your own objective/cost function (to be minimized) for the optimization problem at hand:
>>> import numpy as np  # for numerical computation, which is also the computing engine of pypop7
>>> def rosenbrock(x):  # notorious test function in the optimization community
...     return 100.0*np.sum(np.square(x[1:] - np.square(x[:-1]))) + np.sum(np.square(x[:-1] - 1.0))
>>> ndim_problem = 1000  # problem dimension
>>> problem = {'fitness_function': rosenbrock,  # cost function to be minimized
...            'ndim_problem': ndim_problem,  # problem dimension
...            'lower_boundary': -5.0*np.ones((ndim_problem,)),  # lower search boundary
...            'upper_boundary': 5.0*np.ones((ndim_problem,))}  # upper search boundary
See this online documentation for details about the problem definition. Note that any maximization problem can easily be transformed into a minimization problem by simply negating its objective.
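The negation trick above can be illustrated with a short, self-contained sketch (the objective here is a hypothetical toy example, not part of PyPop7):

```python
import numpy as np

def maximize_me(x):
    # toy maximization objective (hypothetical): its maximum is at x = 0
    return -np.sum(np.square(x))

def cost(x):
    # minimization version suitable as a 'fitness_function': negate the objective
    return -maximize_me(x)

print(cost(np.zeros(3)), cost(np.ones(3)))  # prints: 0.0 3.0
```

Minimizing cost is then equivalent to maximizing the original objective, so the wrapped function can be passed directly as the 'fitness_function' entry of the problem dictionary.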
Run one or more black-box optimizers (BBO) from pypop7 on the above optimization problem:
>>> from pypop7.optimizers.es.lmmaes import LMMAES  # choose any optimizer you prefer in this library
>>> options = {'fitness_threshold': 1e-10,  # terminate when the best-so-far fitness is lower than 1e-10
...            'max_runtime': 3600,  # terminate when the actual runtime exceeds 1 hour (i.e. 3600 seconds)
...            'seed_rng': 0,  # seed for random number generation (must be set for repeatability)
...            'x': 4.0*np.ones((ndim_problem,)),  # initial mean of search/mutation distribution
...            'sigma': 3.0,  # initial global step-size of search distribution (to be fine-tuned)
...            'verbose': 500}
>>> lmmaes = LMMAES(problem, options)  # initialize the optimizer (a unified interface for all optimizers)
>>> results = lmmaes.optimize()  # run its (time-consuming) search process
>>> # print the best-so-far fitness and used function evaluations returned by the used black-box optimizer
>>> print(results['best_so_far_y'], results['n_function_evaluations'])
9.948e-11 2973386
See this online documentation for details about the optimizer settings. Please refer to the following contents for all BBO currently available in this open-source Python library.
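To make the unified problem/options interface concrete without running a heavy optimizer, here is a minimal pure-NumPy random-search sketch that mirrors the same calling convention (the PureRandomSearch class and its option names are hypothetical illustrations, not PyPop7 code):

```python
import numpy as np

def rosenbrock(x):  # same test function as in the quick-start example
    return 100.0*np.sum(np.square(x[1:] - np.square(x[:-1]))) + np.sum(np.square(x[:-1] - 1.0))

class PureRandomSearch:
    """Toy optimizer following the problem/options dictionary convention (illustrative only)."""
    def __init__(self, problem, options):
        self.fitness_function = problem['fitness_function']
        self.lower = problem['lower_boundary']
        self.upper = problem['upper_boundary']
        self.max_evals = options.get('max_function_evaluations', 1000)
        self.rng = np.random.default_rng(options.get('seed_rng', 0))  # seeded for repeatability

    def optimize(self):
        best_x, best_y = None, np.inf
        for _ in range(self.max_evals):
            x = self.rng.uniform(self.lower, self.upper)  # sample uniformly inside the box
            y = self.fitness_function(x)
            if y < best_y:  # keep the best-so-far solution
                best_x, best_y = x, y
        return {'best_so_far_x': best_x, 'best_so_far_y': best_y,
                'n_function_evaluations': self.max_evals}

problem = {'fitness_function': rosenbrock, 'ndim_problem': 2,
           'lower_boundary': -5.0*np.ones((2,)), 'upper_boundary': 5.0*np.ones((2,))}
results = PureRandomSearch(problem, {'seed_rng': 0}).optimize()
print(results['best_so_far_y'], results['n_function_evaluations'])
```

Any real PyPop7 optimizer is driven the same way: build a problem dictionary, build an options dictionary, construct the optimizer with both, and call optimize() to obtain a results dictionary.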
- Installation of PyPop7
- Design Philosophy of PyPop7
- User Guide
- Online Tutorials
- Evolution Strategies (ES)
- Limited Memory Covariance Matrix Adaptation (LMCMA)
- Mixture Model-based Evolution Strategy (MMES)
- Fast Covariance Matrix Adaptation Evolution Strategy (FCMAES)
- Diagonal Decoding Covariance Matrix Adaptation (DDCMA)
- Limited Memory Matrix Adaptation Evolution Strategy (LMMAES)
- Rank-M Evolution Strategy (RMES)
- Rank-One Evolution Strategy (R1ES)
- Projection-based Covariance Matrix Adaptation (VKDCMA)
- Linear Covariance Matrix Adaptation (VDCMA)
- Limited Memory Covariance Matrix Adaptation Evolution Strategy (LMCMAES)
- Fast Matrix Adaptation Evolution Strategy (FMAES)
- Matrix Adaptation Evolution Strategy (MAES)
- Cholesky-CMA-ES 2016 (CCMAES2016)
- (1+1)-Active-CMA-ES 2015 (OPOA2015)
- (1+1)-Active-CMA-ES 2010 (OPOA2010)
- Cholesky-CMA-ES 2009 (CCMAES2009)
- (1+1)-Cholesky-CMA-ES 2009 (OPOC2009)
- Separable Covariance Matrix Adaptation Evolution Strategy (SEPCMAES)
- (1+1)-Cholesky-CMA-ES 2006 (OPOC2006)
- Covariance Matrix Adaptation Evolution Strategy (CMAES)
- Self-Adaptation Matrix Adaptation Evolution Strategy (SAMAES)
- Self-Adaptation Evolution Strategy (SAES)
- Cumulative Step-size Adaptation Evolution Strategy (CSAES)
- Derandomized Self-Adaptation Evolution Strategy (DSAES)
- Schwefel’s Self-Adaptation Evolution Strategy (SSAES)
- Rechenberg’s (1+1)-Evolution Strategy (RES)
- Natural Evolution Strategies (NES)
- Estimation of Distribution Algorithms (EDA)
- Cross-Entropy Method (CEM)
- Differential Evolution (DE)
- Particle Swarm Optimizer (PSO)
- Cooperative Coevolving Particle Swarm Optimizer (CCPSO2)
- Incremental Particle Swarm Optimizer (IPSO)
- Comprehensive Learning Particle Swarm Optimizer (CLPSO)
- Cooperative Particle Swarm Optimizer (CPSO)
- Standard Particle Swarm Optimizer with a local topology (SPSOL)
- Standard Particle Swarm Optimizer with a global topology (SPSO)
- Cooperative Coevolution (CC)
- Simulated Annealing (SA)
- Genetic Algorithms (GA)
- Evolutionary Programming (EP)
- Direct Search (DS)
- Random Search (RS)
- Bayesian Optimization (BO)
- Black-Box Optimization (BBO)
- Development Guide
- Software Summary
- Applications
- Sponsor
- Changelog