Random Hill Climber (RHC)
- class pypop7.optimizers.rs.rhc.RHC(problem, options)
Random (stochastic) Hill Climber (RHC).
Note
Currently RHC supports only normally distributed random sampling during optimization. It often suffers from slow convergence on large-scale black-box optimization (LSBBO) problems, owing to the relatively limited exploration ability of its individual-based sampling strategy. Therefore, it is highly recommended to first attempt more advanced (e.g., population-based) methods for LSBBO.
AKA “stochastic local search (steepest ascent or greedy search)” [Murphy, 2022].
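Conceptually, the optimizer repeats a simple loop: perturb the best-so-far point with isotropic Gaussian noise of scale sigma and keep the candidate only if it improves the fitness. Below is a minimal, self-contained sketch of that idea, written here only for illustration (the function rhc_sketch and its signature are not part of the PyPop7 API); the actual optimizer additionally tracks function evaluations, runtime, and results as configured through the options below.

import numpy as np

def rhc_sketch(fitness_function, x0, sigma, max_function_evaluations, seed=None):
    """Minimal random hill climber: Gaussian perturbation plus greedy acceptance."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_y = fitness_function(best_x)  # one evaluation spent on the starting point
    for _ in range(max_function_evaluations - 1):
        candidate = best_x + sigma*rng.standard_normal(best_x.size)
        y = fitness_function(candidate)
        if y < best_y:  # minimization: keep the candidate only if it improves
            best_x, best_y = candidate, y
    return best_x, best_y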
- Parameters:
problem (dict) –
- problem arguments with the following common settings (keys):
’fitness_function’ - objective function to be minimized (func),
’ndim_problem’ - number of dimensions (int),
’upper_boundary’ - upper boundary of search range (array_like),
’lower_boundary’ - lower boundary of search range (array_like).
options (dict) –
- optimizer options with the following common settings (keys):
’max_function_evaluations’ - maximum number of function evaluations (int, default: np.Inf),
’max_runtime’ - maximal runtime allowed (float, default: np.Inf),
’seed_rng’ - seed for random number generation, which needs to be set explicitly (int);
- and with the following particular settings (keys):
’sigma’ - initial global step-size (float),
’x’ - initial (starting) point (array_like),
if not given, a random sample will be drawn from the uniform distribution bounded by problem[‘lower_boundary’] and problem[‘upper_boundary’] when init_distribution is 1; otherwise, standard normal random sampling is used (see the sketch after this list).
’init_distribution’ - random sampling distribution for starting-point initialization (int, default: 1); used only when x is not set explicitly:
1: uniformly distributed random sampling for starting-point initialization,
0: standard normally distributed random sampling for starting-point initialization.
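To make the interplay between ’x’ and ’init_distribution’ concrete, here is a small NumPy sketch of the starting-point selection described above; the helper sample_start_point and its signature are illustrative assumptions, not part of the PyPop7 API.

import numpy as np

def sample_start_point(problem, options, rng):
    """Pick the starting point as described above (for illustration only)."""
    if options.get('x') is not None:  # an explicit starting point was given
        return np.asarray(options['x'], dtype=float)
    if options.get('init_distribution', 1) == 1:  # uniform within the search box
        return rng.uniform(problem['lower_boundary'], problem['upper_boundary'])
    return rng.standard_normal(problem['ndim_problem'])  # standard normal sampling

rng = np.random.default_rng(2022)
x0 = sample_start_point({'ndim_problem': 2,
                         'lower_boundary': -5*np.ones((2,)),
                         'upper_boundary': 5*np.ones((2,))},
                        {'sigma': 0.1},  # no 'x': uniform initialization is used
                        rng)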
Examples
Use the optimizer to minimize the well-known Rosenbrock test function:
>>> import numpy
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.rhc import RHC
>>> problem = {'fitness_function': rosenbrock,  # define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5*numpy.ones((2,)),
...            'upper_boundary': 5*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # set optimizer options
...            'seed_rng': 2022,
...            'x': 3*numpy.ones((2,)),
...            'sigma': 0.1}
>>> rhc = RHC(problem, options)  # initialize the optimizer class
>>> results = rhc.optimize()  # run the optimization process
>>> # return the number of used function evaluations and found best-so-far fitness
>>> print(f"RHC: {results['n_function_evaluations']}, {results['best_so_far_y']}")
RHC: 5000, 7.13722829962456e-05
To verify the correctness of this implementation, refer to the code-based repeatability report for more details.
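The starting point can also be left to the optimizer: omitting ’x’ (with the default init_distribution=1) draws the initial point uniformly within the box constraints. A variant of the example above under that assumption (the resulting best-so-far fitness will differ from the value printed above):

>>> # reuse the 'problem' dict from the example above; omitting 'x' (with the
>>> # default init_distribution=1) draws the starting point uniformly within the bounds
>>> options = {'max_function_evaluations': 5000,
...            'seed_rng': 2022,
...            'sigma': 0.1}
>>> rhc = RHC(problem, options)
>>> results = rhc.optimize()  # the best-so-far fitness will differ from the run above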
- init_distribution
random sampling distribution for starting-point initialization.
- Type:
int
- sigma
global step-size (fixed during optimization).
- Type:
float
- x
initial (starting) point.
- Type:
array_like
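These particular settings are stored as plain attributes of the optimizer instance, so they can be inspected after construction. For instance, continuing the Examples section above (an illustrative continuation, not part of the original example):

>>> step_size = rhc.sigma  # 0.1, the fixed global step-size from the options above
>>> init_dist = rhc.init_distribution  # 1, the documented default (not set in options)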
References
Murphy, K.P., 2022. Probabilistic machine learning: Advanced topics. MIT Press. https://probml.github.io/pml-book/book2.html (see Chapter 6.7: Derivative-free optimization)
Russell, S. and Norvig, P., 2021. Artificial intelligence: A modern approach (Global Edition). Pearson Education. http://aima.cs.berkeley.edu/ (see Chapter 4: Search in Complex Environments)
Hoos, H.H. and Stützle, T., 2004. Stochastic local search: Foundations and applications. Elsevier. https://www.elsevier.com/books/stochastic-local-search/hoos/978-1-55860-872-6
https://github.com/pybrain/pybrain/blob/master/pybrain/optimization/hillclimber.py