Util Functions for BBO

This open-source Python module provides a set of common utility functions for BBO, as presented below. Their main purpose is to simplify the typical development and experimentation workflow of BBO.

Plot 2-D Fitness Landscape

pypop7.benchmarks.utils.generate_xyz(func, x, y, num=200)[source]

Generate the data needed before plotting a 2-D contour of the fitness landscape.

Parameters:
  • func (func) – benchmarking function.

  • x (list) – x-axis range.

  • y (list) – y-axis range.

  • num (int) – number of samples in each of x- and y-axis range (200 by default).

Returns:

An (x, y, z) tuple, where x and y are the sampled coordinates along the x- and y-axis and z holds the corresponding function values.

Return type:

tuple

Examples

>>> from pypop7.benchmarks import base_functions
>>> from pypop7.benchmarks.utils import generate_xyz
>>> x_, y_, z_ = generate_xyz(base_functions.sphere, [0.0, 1.0], [0.0, 1.0], num=2)
>>> print(x_.shape, y_.shape, z_.shape)

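For intuition, the grid that generate_xyz returns can be rebuilt directly with NumPy. The sketch below is a hypothetical re-implementation (make_xyz and its inner loops are our own illustration, not PyPop7's actual code), assuming the function samples num points per axis via numpy.meshgrid:

```python
import numpy as np

def sphere(xy):  # simple benchmark function: f(x) = sum of squares
    return np.sum(np.power(xy, 2))

def make_xyz(func, x_range, y_range, num=200):
    # hypothetical sketch of what generate_xyz presumably does:
    # sample a num-by-num grid and evaluate func at every grid point
    x = np.linspace(x_range[0], x_range[1], num)
    y = np.linspace(y_range[0], y_range[1], num)
    xx, yy = np.meshgrid(x, y)
    zz = np.empty_like(xx)
    for i in range(num):
        for j in range(num):
            zz[i, j] = func(np.array([xx[i, j], yy[i, j]]))
    return xx, yy, zz

x_, y_, z_ = make_xyz(sphere, [0.0, 1.0], [0.0, 1.0], num=2)
print(x_.shape, y_.shape, z_.shape)  # (2, 2) (2, 2) (2, 2)
```

The three returned arrays share the same (num, num) shape, which is exactly what contour- and surface-plotting routines expect.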
pypop7.benchmarks.utils.plot_contour(func, x, y, levels=None, num=200, is_save=False)[source]

Plot a 2-D contour of the fitness landscape.

Parameters:
  • func (func) – benchmarking function.

  • x (list) – x-axis range.

  • y (list) – y-axis range.

  • levels (int or list) – number of contour lines or a list of contour levels.

  • num (int) – number of samples in each of x- and y-axis range (200 by default).

  • is_save (bool) – whether or not to save the generated figure in the local folder (False by default).

Return type:

An online figure.

Examples

>>> from pypop7.benchmarks.utils import plot_contour
>>> from pypop7.benchmarks.rotated_functions import generate_rotation_matrix
>>> from pypop7.benchmarks.rotated_functions import ellipsoid
>>> # plot ill-conditioning and non-separability
>>> generate_rotation_matrix(ellipsoid, 2, 72)
>>> contour_levels = [0, 5e5, 8e6, 4e7, 8e7, 1.15e8, 1.42e8, 1.62e8, 1.78e8, 1.85e8, 2e8]
>>> plot_contour(ellipsoid, [-10.0, 10.0], [-10.0, 10.0], contour_levels)

The online figure generated by the above example is shown below:

_images/contour_ellipsoid.png

Plot 3-D Fitness Landscape

pypop7.benchmarks.utils.plot_surface(func, x, y, num=200, is_save=False)[source]

Plot a 3-D surface of the fitness landscape.

Parameters:
  • func (func) – benchmarking function.

  • x (list) – x-axis range.

  • y (list) – y-axis range.

  • num (int) – number of samples in each of x- and y-axis range (200 by default).

  • is_save (bool) – whether or not to save the generated figure in the local folder (False by default).

Return type:

An online figure.

Examples

>>> from pypop7.benchmarks.utils import plot_surface
>>> from pypop7.benchmarks.rotated_functions import ellipsoid
>>> from pypop7.benchmarks.rotated_functions import generate_rotation_matrix
>>> # plot ill-conditioning and non-separability
>>> generate_rotation_matrix(ellipsoid, 2, 72)
>>> plot_surface(ellipsoid, [-10.0, 10.0], [-10.0, 10.0], 7)

The online figure generated by the above example is shown below:

_images/surface_ellipsoid.png

Save Optimization Results via Object Serialization

For serialization of complex objects, we use Python's standard pickle library. Please refer to the Python wiki for an introduction.

pypop7.benchmarks.utils.save_optimization(results, algo, func, dim, exp, folder='pypop7_benchmarks_lso')[source]

Save optimization results (in pickle form) via object serialization.

Note

By default, the local file name to be saved is given in the following form: Algo-{}_Func-{}_Dim-{}_Exp-{}.pickle in the local folder pypop7_benchmarks_lso.

Parameters:
  • results (dict) – optimization results returned by any optimizer.

  • algo (str) – name of algorithm to be used.

  • func (str) – name of the fitness function to be minimized.

  • dim (str or int) – dimensionality of the fitness function to be minimized.

  • exp (str or int) – index of each independent experiment to be run.

  • folder (str) – local folder under the current working directory, obtained via the pwd command (pypop7_benchmarks_lso by default).

Return type:

A local file stored in the current working directory (obtained via the pwd command).

Examples

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.prs import PRS
>>> from pypop7.benchmarks.utils import save_optimization
>>> ndim = 2  # number of dimensions
>>> problem = {'fitness_function': rosenbrock,  # to define problem arguments
...            'ndim_problem': ndim,
...            'lower_boundary': -5.0 * numpy.ones((ndim,)),
...            'upper_boundary': 5.0 * numpy.ones((ndim,))}
>>> options = {'max_function_evaluations': 5000,  # to set optimizer options
...            'seed_rng': 2022}
>>> prs = PRS(problem, options)  # to initialize the black-box optimizer class
>>> res = prs.optimize()  # to run its optimization/evolution process
>>> save_optimization(res, PRS.__name__, rosenbrock.__name__, ndim, 1)

Please check the following local file in the current working directory (obtained via the pwd command):

  • pypop7_benchmarks_lso/Algo-PRS_Func-rosenbrock_Dim-2_Exp-1.pickle
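Since the saved results are ordinary pickle files, they can also be written and read with nothing but Python's standard library. The sketch below imitates the default naming scheme with a stand-in results dict (the values are made up for illustration) instead of a real optimizer run:

```python
import os
import pickle
import tempfile

# stand-in for the dict returned by an optimizer (hypothetical values)
results = {'best_so_far_y': 0.123, 'n_function_evaluations': 5000}

with tempfile.TemporaryDirectory() as folder:
    # same default naming scheme: Algo-{}_Func-{}_Dim-{}_Exp-{}.pickle
    file_name = 'Algo-PRS_Func-rosenbrock_Dim-2_Exp-1.pickle'
    path = os.path.join(folder, file_name)
    with open(path, 'wb') as f:  # serialize the results dict
        pickle.dump(results, f)
    with open(path, 'rb') as f:  # deserialize it again
        loaded = pickle.load(f)

print(loaded == results)  # True
```

This round-trip is all that save_optimization/read_optimization do beyond assembling the file name and folder for you.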

pypop7.benchmarks.utils.read_optimization(folder, algo, func, dim, exp)[source]

Read optimization results (in pickle form) after object serialization.

Note

By default, the local file name to be saved is given in the following form: Algo-{}_Func-{}_Dim-{}_Exp-{}.pickle in the local folder.

Parameters:
  • folder (str) – local folder under the current working directory (obtained via the pwd command).

  • algo (str) – name of algorithm to be used.

  • func (str) – name of the fitness function to be minimized.

  • dim (str or int) – dimensionality of the fitness function to be minimized.

  • exp (str or int) – index of each independent experiment to be run.

Returns:

results – optimization results returned by any optimizer.

Return type:

dict

Examples

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.prs import PRS
>>> from pypop7.benchmarks.utils import save_optimization, read_optimization
>>> ndim = 2  # number of dimensions
>>> problem = {'fitness_function': rosenbrock,  # to define problem arguments
...            'ndim_problem': ndim,
...            'lower_boundary': -5.0 * numpy.ones((ndim,)),
...            'upper_boundary': 5.0 * numpy.ones((ndim,))}
>>> options = {'max_function_evaluations': 5000,  # to set optimizer options
...            'seed_rng': 2022}
>>> prs = PRS(problem, options)  # to initialize the black-box optimizer class
>>> res = prs.optimize()  # to run its optimization/evolution process
>>> save_optimization(res, PRS.__name__, rosenbrock.__name__, ndim, 1)
>>> res = read_optimization('pypop7_benchmarks_lso', PRS.__name__, rosenbrock.__name__, ndim, 1)
>>> print(res)

Check Optimization Results

pypop7.benchmarks.utils.check_optimization(problem, options, results)[source]

Check optimization results according to problem arguments and optimizer options.

Parameters:
  • problem (dict) – problem arguments.

  • options (dict) – optimizer options.

  • results (dict) – optimization results generated by any black-box optimizer.

Return type:

A detailed checking report.

Examples

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.utils import check_optimization
>>> pro = {'lower_boundary': [-5.0, -7.0, -3.0],
...        'upper_boundary': [5.0, 7.0, 3.0]}
>>> opt = {'max_function_evaluations': 7777777}
>>> res = {'n_function_evaluations': 7777777,
...        'best_so_far_x': numpy.zeros((3,))}
>>> check_optimization(pro, opt, res)

Plot Convergence Curve via Matplotlib

Here we use Matplotlib to plot a convergence-curve figure for a single black-box optimizer. Please refer to its official website for an introduction.

pypop7.benchmarks.utils.plot_convergence_curve(algo, func, dim, exp=1, results=None, folder='pypop7_benchmarks_lso')[source]

Plot the convergence curve of final optimization results obtained by one optimizer.

Note

By default, the local file name to be saved is given in the following form: Algo-{}_Func-{}_Dim-{}_Exp-{}.pickle in the local folder pypop7_benchmarks_lso.

Parameters:
  • algo (str) – name of algorithm to be used.

  • func (str) – name of the fitness function to be minimized.

  • dim (str or int) – dimensionality of the fitness function to be minimized.

  • exp (str or int) – index of experiments to be run.

  • results (dict) – optimization results returned by any optimizer.

  • folder (str) – local folder under the working space (pypop7_benchmarks_lso by default).

Examples

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.pso.spso import SPSO
>>> from pypop7.benchmarks.utils import plot_convergence_curve
>>> problem = {'fitness_function': rosenbrock,  # to define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5.0*numpy.ones((2,)),
...            'upper_boundary': 5.0*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # to set optimizer options
...            'saving_fitness': 1,
...            'seed_rng': 2022}
>>> spso = SPSO(problem, options)  # to initialize the black-box optimizer class
>>> res = spso.optimize()  # to run the optimization process
>>> plot_convergence_curve('SPSO', rosenbrock.__name__, 2, results=res)

The online figure generated by the above example is presented below:

_images/rosenbrock.png

Compare Multiple Black-Box Optimizers

Here we use Matplotlib to plot a convergence curve figure for multiple black-box optimizers.

pypop7.benchmarks.utils.plot_convergence_curves(algos, func, dim, exp=1, results=None, folder='pypop7_benchmarks_lso')[source]

Plot convergence curves of final optimization results obtained by multiple optimizers.

Note

By default, the local file name to be saved is given in the following form: Algo-{}_Func-{}_Dim-{}_Exp-{}.pickle in the local folder pypop7_benchmarks_lso.

Parameters:
  • algos (list of class) – a list of optimizer classes to be used.

  • func (str) – name of the fitness function to be minimized.

  • dim (str or int) – dimensionality of the fitness function to be minimized.

  • exp (str or int) – index of experiments to be run.

  • results (list of dict) – a list of optimization results, one per optimizer.

  • folder (str) – local folder under the working space (pypop7_benchmarks_lso by default).

Examples

>>> import numpy  # engine for numerical computing
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.prs import PRS
>>> from pypop7.optimizers.pso.spso import SPSO
>>> from pypop7.optimizers.de.cde import CDE
>>> from pypop7.optimizers.eda.umda import UMDA
>>> from pypop7.optimizers.es.cmaes import CMAES
>>> from pypop7.benchmarks.utils import plot_convergence_curves
>>> problem = {'fitness_function': rosenbrock,  # to define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5.0*numpy.ones((2,)),
...            'upper_boundary': 5.0*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # to set optimizer options
...            'saving_fitness': 1,
...            'sigma': 3.0,
...            'seed_rng': 2022}
>>> res = []
>>> for Optimizer in [PRS, SPSO, CDE, UMDA, CMAES]:
...     optimizer = Optimizer(problem, options)  # to initialize the black-box optimizer class
...     res.append(optimizer.optimize())  # to run the optimization process
>>> plot_convergence_curves([PRS, SPSO, CDE, UMDA, CMAES], rosenbrock.__name__, 2, results=res)

The online figure generated by the above example is presented below:

_images/rosenbrock-optimizers.png

Accelerate Computation via Numba

For some computationally expensive operations, we use Numba to accelerate computation where possible:

pypop7.benchmarks.utils.cholesky_update(rm, z, downdate)[source]

Rank-one update (or downdate) of a Cholesky factorization.

Parameters:
  • rm ((N, N) ndarray) – 2-D input array whose triangular part holds the Cholesky factor to be updated.

  • z ((N,) ndarray) – 1D update/downdate vector.

  • downdate (bool) – False indicates an update while True indicates a downdate (False by default).

Returns:

D – updated Cholesky factor.

Return type:

(N, N) ndarray
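The underlying operation can be sketched in pure NumPy as the textbook rank-one update. This is a simplified reference version of the general technique (our own rank_one_update helper, not PyPop7's Numba-accelerated implementation), covering only the update case (downdate=False):

```python
import numpy as np

def rank_one_update(L, z):
    """Return L' such that L' @ L'.T == L @ L.T + z @ z.T (L lower-triangular)."""
    L, z = L.copy(), z.copy()
    n = z.shape[0]
    for k in range(n):
        r = np.hypot(L[k, k], z[k])           # new diagonal entry
        c, s = r / L[k, k], z[k] / L[k, k]    # Givens-like rotation coefficients
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * z[k+1:]) / c  # rotate the column
            z[k+1:] = c * z[k+1:] - s * L[k+1:, k]       # carry the residual
    return L

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4.0 * np.eye(4)                 # symmetric positive-definite matrix
L = np.linalg.cholesky(A)                     # lower-triangular Cholesky factor
z = rng.standard_normal(4)                    # rank-one update vector
L2 = rank_one_update(L, z)
print(np.allclose(L2 @ L2.T, A + np.outer(z, z)))  # True
```

Updating the factor in O(N^2) this way avoids the O(N^3) cost of refactorizing A + z z^T from scratch, which is why it is worth accelerating.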

For example, for the rank-one update, a runtime comparison with vs. without Numba is presented below:

_images/runtime.png
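To illustrate the usage pattern we mean by Numba acceleration (our own toy weighted_sum function, not PyPop7's code), an explicit Python loop can be JIT-compiled with numba.njit; the fallback below keeps the sketch runnable even when Numba is not installed:

```python
import numpy as np

try:  # Numba is an optional accelerator; fall back to plain Python without it
    from numba import njit
except ImportError:
    def njit(func=None, **kwargs):  # no-op stand-in for numba.njit
        if func is None:
            return lambda f: f
        return func

@njit
def weighted_sum(x, w):
    # explicit loop: slow in pure Python, fast once JIT-compiled by Numba
    s = 0.0
    for i in range(x.shape[0]):
        s += w[i] * x[i]
    return s

x = np.arange(4, dtype=np.float64)  # [0., 1., 2., 3.]
w = np.ones(4)
print(weighted_sum(x, w))  # 6.0
```

The first call pays a one-off compilation cost; subsequent calls run at near-native speed, which is what the runtime comparison above reflects for cholesky_update.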