Design Philosophy of PyPop7

Given the large number of black-box optimizers (BBO), which still keeps increasing almost every month, we need some (possibly) widely acceptable criteria to select among them, as presented below in detail:

Respect for Beauty (Elegance)

From the problem-solving perspective, we empirically prefer to choose the best optimizer for the black-box optimization problem at hand. For a new problem, however, the best optimizer is often unknown in advance (without prior knowledge). As a rule of thumb, we need to compare an (often small) set of available/well-known optimizers and finally choose the best one according to some predefined performance criteria. From the academic research perspective, however, we prefer so-called beautiful optimizers, while always keeping the No Free Lunch Theorems in mind. Typically, the beauty of an optimizer comes from the following attractive features: model novelty (e.g., useful logical concepts and design frameworks), competitive performance on at least one class of problems, theoretical insights (e.g., guarantees of global convergence and rates of convergence on some problem classes), clarity/simplicity for understanding and implementation, and well-recognized repeatability/reproducibility.
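As a rough sketch of this rule of thumb (not a definitive recipe), the snippet below runs two optimizers from PyPop7 on the same problem and keeps the one with the better final fitness; the import paths, option keys, and the best_so_far_y result key follow the library's quick-start documentation, while the evaluation budget and the chosen criterion are arbitrary illustrative assumptions:

    # Sketch: compare a small set of optimizers on the same problem and keep the
    # best one under a predefined criterion (here, the best fitness found within
    # a fixed evaluation budget). Import paths and option/result keys follow
    # PyPop7's quick-start documentation; adjust them to your installed version.
    import numpy as np
    from pypop7.benchmarks.base_functions import rosenbrock  # test function to be minimized
    from pypop7.optimizers.es.cmaes import CMAES    # Covariance Matrix Adaptation ES
    from pypop7.optimizers.es.lmmaes import LMMAES  # Limited-Memory Matrix Adaptation ES

    ndim = 30
    problem = {'fitness_function': rosenbrock,
               'ndim_problem': ndim,
               'lower_boundary': -5.0*np.ones((ndim,)),
               'upper_boundary': 5.0*np.ones((ndim,))}
    options = {'max_function_evaluations': 30000,  # shared evaluation budget
               'seed_rng': 2022,                   # same seed for a fair(er) comparison
               'sigma': 3.0}                       # initial global step-size

    results = {name: opt(problem, options).optimize()
               for name, opt in {'CMAES': CMAES, 'LMMAES': LMMAES}.items()}
    best = min(results, key=lambda name: results[name]['best_so_far_y'])
    print(best, results[best]['best_so_far_y'])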

If you find a BBO missing from this library that meets the above standard, you are welcome to open an issue or pull request. We will consider including it in the PyPop7 library as soon as possible. Note that any superficial imitation of well-established optimizers (i.e., old wine in a new bottle) will NOT be considered here. Sometimes, very complex optimizers obtain top ranks in competitions consisting only of artificially constructed benchmark functions; however, such optimizers may be over-fitted to these artifacts. In our opinion, a good optimizer should be elegant and generalizable. To avoid possible repeatability and overfitting issues, we will not include any very complex optimizer in this library unless persuasive/successful applications have been reported for it.

Note

“If there is a single dominant theme in this …, it is that practical methods of numerical computation can be simultaneously efficient, clever, and –important– clear.”—Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P., 2007. Numerical recipes: The art of scientific computing. Cambridge University Press.

Respect for Diversity

Given the universality of black-box optimization in science and engineering, different research communities have designed different optimizers. The types and number of optimizers continue to increase, as more powerful optimizers are always desirable for new and more challenging applications. On the one hand, some of these methods may share similarities to varying degrees. On the other hand, they may also show significant differences (w.r.t. motivations, objectives, implementations, communities, and practitioners). Therefore, we hope to cover this diversity across research communities such as artificial intelligence/machine learning (particularly evolutionary computation, swarm intelligence, and zeroth-order optimization), mathematical optimization/programming (particularly derivative-free/global optimization), operations research/management science (metaheuristics), automatic control (random search), electronic engineering, physics, chemistry, open-source software, and many others.

Note

“The theory of evolution by natural selection explains the adaptedness and diversity of the world solely materialistically.”—Mayr, E., 2009. Scientific American.

To cover recent advances in population-based BBO as widely as possible, we have actively maintained a companion project to collect related papers from top-tier journals and conferences for more than 3 years. We hope that this open companion project can serve as an increasingly reliable literature reference for our library.

Respect for Originality

For each black-box optimizer included in PyPop7, we aim to give its original/representative reference (sometimes also including good implementations/improvements of it). If you find an important reference missing here, please do NOT hesitate to contact us (we will be happy to add it). Furthermore, if you identify a mistake regarding originality, we apologize in advance for our (possible) mistake and will correct it promptly within this open-source project.

Note

“It is both enjoyable and educational to hear the ideas directly from the creators.”—Hennessy, J.L. and Patterson, D.A., 2019. Computer architecture: A quantitative approach (Sixth Edition). Elsevier.

Respect for Repeatability

For randomized search, which is adopted by most population-based optimizers, properly controlling randomness is crucial for repeating numerical experiments. Here we follow the official Random Sampling suggestions from NumPy. In other words, you should explicitly set the random seed for each optimizer. For more discussions about repeatability/benchmarking from the AI/ML, evolutionary computation (EC), swarm intelligence (SI), and metaheuristics communities, please refer to the papers collected in our companion project.
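As a minimal sketch (assuming the seed_rng option key and import paths from the library's quick-start documentation), an explicit seed can be passed through the options dictionary so that repeated runs produce identical results:

    # Sketch: explicitly seeding a PyPop7 optimizer for repeatable experiments.
    # The `seed_rng` option and import paths follow the library's quick-start
    # documentation; adjust them if your installed version differs.
    import numpy as np
    from pypop7.benchmarks.base_functions import rosenbrock  # test function to be minimized
    from pypop7.optimizers.es.lmmaes import LMMAES  # Limited-Memory Matrix Adaptation ES

    ndim = 100
    problem = {'fitness_function': rosenbrock,
               'ndim_problem': ndim,
               'lower_boundary': -5.0*np.ones((ndim,)),
               'upper_boundary': 5.0*np.ones((ndim,))}
    options = {'max_function_evaluations': 10000,
               'seed_rng': 2022,  # explicit random seed -> repeatable runs
               'sigma': 3.0}      # initial global step-size

    results = LMMAES(problem, options).optimize()
    print(results['best_so_far_y'])  # identical across runs with the same seed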

Finally, we look forward to more interesting discussions about the beauty of BBO. For any new/missing BBO, we provide a unified API to make adding it easy, provided it satisfies the above design philosophy. See the development guide for more details.