Gaussian Smoothing (GS)
- class pypop7.optimizers.rs.gs.GS(problem, options)
Gaussian Smoothing (GS).
Note
In 2017, Nesterov and Spokoiny published state-of-the-art theoretical results on the convergence rate of GS for a class of convex functions in the gradient-free setting (see Foundations of Computational Mathematics).
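For orientation, Gaussian smoothing works with a smoothed surrogate of the objective f and estimates its gradient from function values only. A paraphrased (not verbatim) form of the smoothed objective and of a two-point gradient-free estimator in the spirit of Nesterov and Spokoiny is:

f_{\mu}(x) = \mathbb{E}_{u \sim \mathcal{N}(0, I)}\big[f(x + \mu u)\big], \qquad
g_{\mu}(x) = \frac{f(x + \mu u) - f(x)}{\mu}\, u, \quad u \sim \mathcal{N}(0, I),

where \mu > 0 plays the role of the finite-difference factor (compare the option 'c' below) and g_{\mu}(x) is an unbiased estimator of \nabla f_{\mu}(x).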
- Parameters:
problem (dict) –
- problem arguments with the following common settings (keys):
'fitness_function' - objective function to be minimized (func),
'ndim_problem' - number of dimensions (int),
'upper_boundary' - upper boundary of search range (array_like),
'lower_boundary' - lower boundary of search range (array_like).
options (dict) –
- optimizer options with the following common settings (keys):
'max_function_evaluations' - maximum number of function evaluations (int, default: np.Inf),
'max_runtime' - maximal runtime allowed (float, default: np.Inf),
'seed_rng' - seed for random number generation, which needs to be explicitly set (int);
- and with the following particular settings (keys), illustrated by the sketch after this list:
'n_individuals' - number of individuals/samples (int, default: 100),
'lr' - learning rate (float, default: 0.001),
'c' - factor of the finite-difference gradient estimate (float, default: 0.1),
'x' - initial (starting) point (array_like),
if not given, it will be drawn randomly from a uniform distribution over the search range bounded by problem['lower_boundary'] and problem['upper_boundary'].
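As a rough illustration of how these settings could fit together, the sketch below (an assumption-level paraphrase, not the actual PyPop7 implementation, whose sampling and update details may differ) averages 'n_individuals' finite-difference estimates taken along Gaussian directions scaled by 'c' and then applies a gradient-descent step with learning rate 'lr':

import numpy as np

def gs_step(f, x, n_individuals=100, c=0.1, lr=0.001, rng=None):
    # One hypothetical Gaussian-smoothing update: average `n_individuals`
    # forward finite-difference gradient estimates along Gaussian directions,
    # then take one gradient-descent step with learning rate `lr`.
    rng = np.random.default_rng() if rng is None else rng
    f_x, grad = f(x), np.zeros_like(x, dtype=float)
    for _ in range(n_individuals):
        u = rng.standard_normal(x.size)       # Gaussian search direction
        grad += (f(x + c * u) - f_x) / c * u  # two-point finite-difference estimate
    return x - lr * grad / n_individuals      # averaged (estimated) gradient step

Under this reading, a larger 'c' widens the smoothing radius at the cost of a more biased estimate, while a larger 'n_individuals' spends more function evaluations to reduce its variance.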
Examples
Use the optimizer to minimize the well-known test function Rosenbrock:
>>> import numpy
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.gs import GS
>>> problem = {'fitness_function': rosenbrock,  # define problem arguments
...            'ndim_problem': 100,
...            'lower_boundary': -2*numpy.ones((100,)),
...            'upper_boundary': 2*numpy.ones((100,))}
>>> options = {'max_function_evaluations': 10000*101,  # set optimizer options
...            'seed_rng': 2022,
...            'n_individuals': 10,
...            'c': 0.1,
...            'lr': 0.000001}
>>> gs = GS(problem, options)  # initialize the optimizer class
>>> results = gs.optimize()  # run the optimization process
>>> # return the number of used function evaluations and found best-so-far fitness
>>> print(f"GS: {results['n_function_evaluations']}, {results['best_so_far_y']}")
GS: 1010000, 99.99696650242736
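If the best-so-far solution vector itself is needed, it is typically also available in the returned dictionary (assuming the standard PyPop7 result keys):

>>> best_x = results['best_so_far_x']  # best-so-far solution found during the run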
To check the correctness of this implementation, refer to the code-based repeatability report for more details.
- c
factor of the finite-difference gradient estimate.
- Type:
float
- lr
learning rate of the (estimated) gradient update.
- Type:
float
- n_individuals
number of individuals/samples.
- Type:
int
- x
initial (starting) point.
- Type:
array_like
References
Gao, K. and Sener, O., 2022, June. Generalizing Gaussian Smoothing for Random Search. In International Conference on Machine Learning (pp. 7077-7101). PMLR. https://proceedings.mlr.press/v162/gao22f.html https://icml.cc/media/icml-2022/Slides/16434.pdf
Nesterov, Y. and Spokoiny, V., 2017. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2), pp.527-566. https://link.springer.com/article/10.1007/s10208-015-9296-2