Simple Random Search (SRS)
- class pypop7.optimizers.rs.srs.SRS(problem, options)
Simple Random Search (SRS).
Note
SRS is an adaptive random search method, originally designed by Rosenstein and Barto for direct policy search in reinforcement learning. Since it relies on a simple individual-based random sampling strategy, it suffers from limited exploration ability on large-scale black-box optimization (LSBBO) problems. Therefore, it is highly recommended to first attempt more advanced (e.g., population-based) methods for LSBBO.
Here we include it mainly for benchmarking purposes.
- Parameters:
problem (dict) –
- problem arguments with the following common settings (keys):
'fitness_function' - objective function to be minimized (func),
'ndim_problem' - number of dimensions (int),
'upper_boundary' - upper boundary of search range (array_like),
'lower_boundary' - lower boundary of search range (array_like).
options (dict) –
- optimizer options with the following common settings (keys):
'max_function_evaluations' - maximum of function evaluations (int, default: np.inf),
'max_runtime' - maximal runtime allowed (float, default: np.inf),
'seed_rng' - seed for random number generation, which must be explicitly set (int);
- and with the following particular settings (keys), whose roles are illustrated in the sketch after this list:
'sigma' - initial global step-size (float),
'x' - initial (starting) point (array_like),
if not given, it will be drawn uniformly at random from the search range bounded by problem['lower_boundary'] and problem['upper_boundary'].
'alpha' - factor of global step-size (float, default: 0.3),
'beta' - adjustment probability for exploration-exploitation trade-off (float, default: 0.0),
'gamma' - factor of search decay (float, default: 0.99),
'min_sigma' - minimum of global step-size (float, default: 0.01).
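To make the roles of these hyperparameters concrete, below is a minimal sketch of an individual-based adaptive random search loop written against the names above. It illustrates one plausible update scheme, not the PyPop7 implementation: the exact uses of alpha and beta here are assumptions, while the perturb-and-keep-improvements loop and the gamma/min_sigma step-size decay follow the parameter descriptions.

import numpy as np

def adaptive_random_search(fitness, x0, sigma, lower, upper, max_evals=5000,
                           alpha=0.3, beta=0.0, gamma=0.99, min_sigma=0.01, seed=2022):
    # Illustrative individual-based adaptive random search (NOT PyPop7's SRS code).
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)          # current search point
    best_x, best_y = x.copy(), fitness(x)  # best-so-far solution and fitness
    for _ in range(max_evals - 1):
        # perturb the current point with Gaussian noise scaled by sigma
        candidate = np.clip(x + sigma * rng.standard_normal(x.size), lower, upper)
        y = fitness(candidate)
        if y < best_y:  # keep strict improvements only
            best_x, best_y = candidate, y
        # assumed role of alpha: pull the search point toward the best-so-far
        x = x + alpha * (best_x - x)
        # assumed role of beta: with small probability, accept the candidate
        # anyway for extra exploration (the default beta=0.0 disables this)
        if rng.random() < beta:
            x = candidate
        # gamma decays the global step-size; min_sigma keeps a hard floor on it
        sigma = max(gamma * sigma, min_sigma)
    return best_x, best_y

Note how min_sigma prevents gamma's geometric decay from shrinking the step-size to zero, which would otherwise freeze the search.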
Examples
Use the optimizer to minimize the well-known Rosenbrock test function:
>>> import numpy
>>> from pypop7.benchmarks.base_functions import rosenbrock  # function to be minimized
>>> from pypop7.optimizers.rs.srs import SRS
>>> problem = {'fitness_function': rosenbrock,  # define problem arguments
...            'ndim_problem': 2,
...            'lower_boundary': -5*numpy.ones((2,)),
...            'upper_boundary': 5*numpy.ones((2,))}
>>> options = {'max_function_evaluations': 5000,  # set optimizer options
...            'seed_rng': 2022,
...            'x': 3*numpy.ones((2,)),
...            'sigma': 0.1}
>>> srs = SRS(problem, options)  # initialize the optimizer class
>>> results = srs.optimize()  # run the optimization process
>>> # return the number of used function evaluations and found best-so-far fitness
>>> print(f"SRS: {results['n_function_evaluations']}, {results['best_so_far_y']}")
SRS: 5000, 0.0017821578376762473
For correctness checking of the implementation, a code-based repeatability report cannot be provided, since the simulation environment used in the original paper is unavailable. Instead, we used a comparison-based strategy to validate its correctness as far as possible (though some risk of error remains); a hedged sketch of such a comparison is given below.
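As an illustration of that comparison-based strategy, one can rerun the same problem under several random seeds and compare SRS against a simpler baseline. The snippet below is only a sketch: it assumes the pure random search optimizer PRS (pypop7.optimizers.rs.prs) from the same rs subpackage as the baseline and reuses the problem dict from the example above; the printed booleans depend on the seeds.

>>> from pypop7.optimizers.rs.prs import PRS  # assumed baseline from the same rs subpackage
>>> for seed in (0, 1, 2):  # repeat the comparison under several seeds
...     srs_y = SRS(problem, {'max_function_evaluations': 5000,
...                           'seed_rng': seed, 'sigma': 0.1}).optimize()['best_so_far_y']
...     prs_y = PRS(problem, {'max_function_evaluations': 5000,
...                           'seed_rng': seed}).optimize()['best_so_far_y']
...     print(srs_y <= prs_y)  # SRS is expected to match or beat PRS on most seeds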
- alpha
factor of global step-size.
- Type:
float
- beta
adjustment probability for exploration-exploitation trade-off.
- Type:
float
- gamma
factor of search decay.
- Type:
float
- min_sigma
minimum of global step-size.
- Type:
float
- sigma
final global step-size (updated during optimization).
- Type:
float
- x
initial (starting) point.
- Type:
array_like
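These hyperparameters and the adapted step-size are stored as attributes on the optimizer instance, so they can be inspected directly. A small usage sketch, continuing the example above and assuming the attributes behave exactly as documented:

>>> srs = SRS(problem, options)  # same setup as in the example above
>>> results = srs.optimize()
>>> # hyperparameters are kept as instance attributes (defaults documented above)
>>> print(srs.alpha, srs.beta, srs.gamma, srs.min_sigma)
0.3 0.0 0.99 0.01
>>> print(srs.sigma >= srs.min_sigma)  # the final step-size respects its floor
True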
References
Rosenstein, M.T. and Grupen, R.A., 2002, May. Velocity-dependent dynamic manipulability. In Proceedings of IEEE International Conference on Robotics and Automation (pp. 2424-2429). IEEE. https://ieeexplore.ieee.org/abstract/document/1013595
Rosenstein, M.T. and Barto, A.G., 2001, August. Robot weightlifting by direct policy search. In International Joint Conference on Artificial Intelligence (pp. 839-846). https://dl.acm.org/doi/abs/10.5555/1642194.1642206