Welcome to PyPop7’s Documentation!
PyPop7 is a Pure-PYthon library of POPulation-based OPtimization for single-objective, real-parameter, black-box problems. Its main goal is to provide a unified interface and elegant implementations for Black-Box Optimizers (BBO), particularly population-based optimizers, in order to facilitate repeatable research, benchmarking of BBO, and real-world applications.
More specifically, to alleviate the notorious curse of dimensionality in BBO, the primary focus of PyPop7 is on State-Of-The-Art (SOTA) implementations for Large-Scale Optimization (LSO), though many medium/small-scale versions and variants are also included (mainly for theoretical or benchmarking purposes).

Note
This open-source Python library for continuous black-box optimization has been under active development since 2021.
Quick Start
In practice, three simple steps are enough to utilize the optimization power of PyPop7:
Use pip to automatically install pypop7:
$ pip install pypop7
Define your own objective (cost) function (to be minimized) for the optimization problem at hand:
>>> import numpy as np  # for numerical computation, which is also the computing engine of pypop7
>>> def rosenbrock(x):  # notorious test (benchmarking) function in the optimization community
...     return 100*np.sum(np.power(x[1:] - np.power(x[:-1], 2), 2)) + np.sum(np.power(x[:-1] - 1, 2))
>>> ndim_problem = 1000  # problem dimension
>>> problem = {'fitness_function': rosenbrock,  # cost function to be minimized
...            'ndim_problem': ndim_problem,  # problem dimension
...            'lower_boundary': -5*np.ones((ndim_problem,)),  # lower search boundary
...            'upper_boundary': 5*np.ones((ndim_problem,))}  # upper search boundary
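Before launching a long (and possibly hour-scale) optimization run, it can help to sanity-check the objective function at its known global optimum: for this Rosenbrock variant the minimum value 0 is attained at the all-ones vector, and every other point has strictly positive cost. A minimal check using only NumPy (the names `x_opt` and `x_start` below are illustrative, not part of the PyPop7 API):

```python
import numpy as np

def rosenbrock(x):  # same benchmark function as defined above
    return 100*np.sum(np.power(x[1:] - np.power(x[:-1], 2), 2)) + np.sum(np.power(x[:-1] - 1, 2))

ndim_problem = 1000
x_opt = np.ones((ndim_problem,))      # known global optimum of this Rosenbrock variant
x_start = 4*np.ones((ndim_problem,))  # a starting point far from the optimum (as used below)

print(rosenbrock(x_opt))           # 0.0 at the global optimum
print(rosenbrock(x_start) > 0.0)   # True: any other point has strictly positive cost
```

If the first value is not exactly 0.0, the objective has likely been mistyped, and any subsequent benchmarking result would be misleading.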
Run one or more black-box optimizers from pypop7 on the given optimization problem:
>>> from pypop7.optimizers.es.lmmaes import LMMAES  # choose any optimizer you prefer in this library
>>> options = {'fitness_threshold': 1e-10,  # terminate when the best-so-far fitness is lower than 1e-10
...            'max_runtime': 3600,  # terminate when the actual runtime exceeds 1 hour (3600 seconds)
...            'seed_rng': 0,  # seed for random number generation (must be set for repeatability)
...            'x': 4*np.ones((ndim_problem,)),  # initial mean of the search distribution
...            'sigma': 0.3,  # initial global step-size of the search distribution
...            'verbose': 500}
>>> lmmaes = LMMAES(problem, options)  # initialize the optimizer (a unified interface for all optimizers)
>>> results = lmmaes.optimize()  # run its (time-consuming) search process
>>> # print the best-so-far fitness and the number of function evaluations used
>>> print(results['best_so_far_y'], results['n_function_evaluations'])
9.8774e-11 3928055
Contents:
- Installation of PyPop7
- Design Philosophy
- User Guide
- Tutorials
- Evolution Strategies (ES)
- Limited Memory Covariance Matrix Adaptation (LMCMA)
- Mixture Model-based Evolution Strategy (MMES)
- Fast Covariance Matrix Adaptation Evolution Strategy (FCMAES)
- Diagonal Decoding Covariance Matrix Adaptation (DDCMA)
- Limited Memory Matrix Adaptation Evolution Strategy (LMMAES)
- Rank-M Evolution Strategy (RMES)
- Rank-One Evolution Strategy (R1ES)
- Projection-based Covariance Matrix Adaptation (VKDCMA)
- Linear Covariance Matrix Adaptation (VDCMA)
- Limited Memory Covariance Matrix Adaptation Evolution Strategy (LMCMAES)
- Fast Matrix Adaptation Evolution Strategy (FMAES)
- Matrix Adaptation Evolution Strategy (MAES)
- Cholesky-CMA-ES 2016 (CCMAES2016)
- (1+1)-Active-CMA-ES 2015 (OPOA2015)
- (1+1)-Active-CMA-ES 2010 (OPOA2010)
- Cholesky-CMA-ES 2009 (CCMAES2009)
- (1+1)-Cholesky-CMA-ES 2009 (OPOC2009)
- Separable Covariance Matrix Adaptation Evolution Strategy (SEPCMAES)
- (1+1)-Cholesky-CMA-ES 2006 (OPOC2006)
- Covariance Matrix Adaptation Evolution Strategy (CMAES)
- Self-Adaptation Matrix Adaptation Evolution Strategy (SAMAES)
- Self-Adaptation Evolution Strategy (SAES)
- Cumulative Step-size Adaptation Evolution Strategy (CSAES)
- Derandomized Self-Adaptation Evolution Strategy (DSAES)
- Schwefel’s Self-Adaptation Evolution Strategy (SSAES)
- Rechenberg’s (1+1)-Evolution Strategy (RES)
- Natural Evolution Strategies (NES)
- Estimation of Distribution Algorithms (EDA)
- Cross-Entropy Method (CEM)
- Differential Evolution (DE)
- Particle Swarm Optimizer (PSO)
- Cooperative Coevolving Particle Swarm Optimizer (CCPSO2)
- Incremental Particle Swarm Optimizer (IPSO)
- Comprehensive Learning Particle Swarm Optimizer (CLPSO)
- Cooperative Particle Swarm Optimizer (CPSO)
- Standard Particle Swarm Optimizer with a Local topology (SPSOL)
- Standard Particle Swarm Optimizer with a global topology (SPSO)
- Cooperative Coevolution (CC)
- Simulated Annealing (SA)
- Genetic Algorithms (GA)
- Evolutionary Programming (EP)
- Direct Search (DS)
- Random Search (RS)
- Bayesian Optimization (BO)
- Black-Box Optimization (BBO)
- Development Guide
- Software Summary
- Applications
- Sponsor