Nelder-Mead (NM)

class pypop7.optimizers.ds.nm.NM(problem, options)[source]

Nelder-Mead simplex method (NM).

Note

NM, published in 1965, is perhaps the best-known and most-cited Direct (Pattern) Search method to date. As pointed out by Wright (elected a member of the National Academy of Engineering in 1997), “In addition to concerns about the lack of theory, mainstream optimization researchers were not impressed by the Nelder-Mead method’s practical performance, which can be appallingly poor.” Nevertheless, NM is still widely used today to optimize relatively low-dimensional objective functions. For large-scale black-box optimization, it is highly recommended to first try other, more advanced methods.

Also known as the downhill simplex method or the polytope algorithm.

Parameters:
  • problem (dict) –

    problem arguments with the following common settings (keys):
    • ’fitness_function’ - objective function to be minimized (func),

    • ’ndim_problem’ - number of dimensions (int),

    • ’upper_boundary’ - upper boundary of search range (array_like),

    • ’lower_boundary’ - lower boundary of search range (array_like).

  • options (dict) –

    optimizer options with the following common settings (keys):
    • ’max_function_evaluations’ - maximum number of function evaluations (int, default: np.inf),

    • ’max_runtime’ - maximum allowed runtime (float, default: np.inf),

    • ’seed_rng’ - seed for random number generation, which needs to be explicitly set (int);

    and with the following particular settings (keys):
    • ’sigma’ - initial global step-size (float, default: 1.0),

    • ’x’ - initial (starting) point (array_like),

      • if not given, a random sample is drawn from the uniform distribution over the search range bounded by problem[‘lower_boundary’] and problem[‘upper_boundary’].

    • ’alpha’ - reflection factor (float, default: 1.0),

    • ’beta’ - contraction factor (float, default: 0.5),

    • ’gamma’ - expansion factor (float, default: 2.0),

    • ’shrinkage’ - shrinkage factor (float, default: 0.5); see the sketch after this list for how these four factors enter one NM iteration.

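To make the roles of ’alpha’, ’beta’, ’gamma’, ’shrinkage’, ’x’, and ’sigma’ concrete, below is a minimal textbook-style sketch of how an initial simplex may be built and how one Nelder-Mead iteration uses the four factors (reflection, expansion, contraction, shrinkage). It only illustrates the standard scheme (see Singer and Nelder, 2009), not PyPop7’s internal implementation; in particular, the construction of the initial simplex from ’x’ and ’sigma’ shown here is just one common choice.

import numpy as np

def nm_iteration(simplex, f, alpha=1.0, beta=0.5, gamma=2.0, shrinkage=0.5):
    # simplex: array of shape (n + 1, n), one vertex per row; f: function to minimize
    fitness = np.array([f(v) for v in simplex])
    order = np.argsort(fitness)
    simplex, fitness = simplex[order], fitness[order]  # best vertex first, worst last
    centroid = np.mean(simplex[:-1], axis=0)  # centroid of all vertices except the worst
    reflected = centroid + alpha*(centroid - simplex[-1])  # 'alpha': reflection
    f_reflected = f(reflected)
    if f_reflected < fitness[0]:  # better than the current best -> try 'gamma': expansion
        expanded = centroid + gamma*(reflected - centroid)
        simplex[-1] = expanded if f(expanded) < f_reflected else reflected
    elif f_reflected < fitness[-2]:  # better than the second worst -> accept reflection
        simplex[-1] = reflected
    else:  # 'beta': contract the worst vertex toward the centroid
        contracted = centroid + beta*(simplex[-1] - centroid)
        if f(contracted) < fitness[-1]:
            simplex[-1] = contracted
        else:  # 'shrinkage': shrink all other vertices toward the best one
            simplex[1:] = simplex[0] + shrinkage*(simplex[1:] - simplex[0])
    return simplex

# one common way to build the initial simplex around 'x' with step-size 'sigma'
# (for illustration only; PyPop7 may construct it differently)
x, sigma = 3*np.ones(2), 0.1
simplex = np.vstack([x, x + sigma*np.eye(x.size)])
for _ in range(200):
    simplex = nm_iteration(simplex, lambda v: np.sum((v - 1.0)**2))  # toy quadratic
print(simplex[0])  # should approach the minimizer [1., 1.]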
Examples

Use the optimizer to minimize the well-known Rosenbrock test function:

>>> import numpy
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.ds.nm import NM
>>> problem = {'fitness_function': rosenbrock,  # define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5*numpy.ones((2,)),
...            'upper_boundary': 5*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # set optimizer options
...            'seed_rng': 2022,
...            'x': 3*numpy.ones((2,)),
...            'sigma': 0.1,
...            'verbose': 500}
>>> nm = NM(problem, options)  # initialize the optimizer class
>>> results = nm.optimize()  # run the optimization process
>>> # return the number of function evaluations and best-so-far fitness
>>> print(f"NM: {results['n_function_evaluations']}, {results['best_so_far_y']}")
NM: 5000, 1.3337953711044745e-13

To check the correctness of this implementation, refer to the code-based repeatability report for more details.
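As an additional, informal sanity check (separate from the repeatability report above), one can cross-check against SciPy’s Nelder-Mead implementation on the same 2-dimensional Rosenbrock function; exact numbers will differ because the two implementations use different initial simplices and stopping rules:

>>> import numpy
>>> from scipy.optimize import minimize  # SciPy's reference Nelder-Mead implementation
>>> from pypop7.benchmarks.base_functions import rosenbrock
>>> res = minimize(rosenbrock, x0=3*numpy.ones((2,)), method='Nelder-Mead',
...                options={'maxfev': 5000})
>>> # res.fun should likewise be close to 0 (the global minimum of the Rosenbrock function)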

alpha

reflection factor.

Type: float

beta

contraction factor.

Type: float

gamma

expansion factor.

Type: float

shrinkage

shrinkage factor.

Type: float

sigma

initial global step-size.

Type: float

x

initial (starting) point.

Type: array_like

References

Singer, S. and Nelder, J., 2009. Nelder-Mead algorithm. Scholarpedia, 4(7), p.2928. http://var.scholarpedia.org/article/Nelder-Mead_algorithm

Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P., 2007. Numerical recipes: The art of scientific computing. Cambridge University Press. http://numerical.recipes/

Senn, S. and Nelder, J., 2003. A conversation with John Nelder. Statistical Science, pp.118-131. https://www.jstor.org/stable/3182874

Wright, M.H., 1996. Direct search methods: Once scorned, now respectable. Pitman Research Notes in Mathematics Series, pp.191-208. https://nyuscholars.nyu.edu/en/publications/direct-search-methods-once-scorned-now-respectable

Dean, W.K., Heald, K.J. and Deming, S.N., 1975. Simplex optimization of reaction yields. Science, 189(4205), pp.805-806. https://www.science.org/doi/10.1126/science.189.4205.805

Nelder, J.A. and Mead, R., 1965. A simplex method for function minimization. The Computer Journal, 7(4), pp.308-313. https://academic.oup.com/comjnl/article-abstract/7/4/308/354237