ES Is More Than Just a Traditional Finite-Difference Approximator
An evolution strategy (ES) variant based on a simplification of a natural
evolution strategy recently attracted attention because it performs
surprisingly well in challenging deep reinforcement learning domains. It
searches for neural network parameters by generating perturbations to the
current set of parameters, checking their performance, and moving in the
aggregate direction of higher reward. Because it resembles a traditional
finite-difference approximation of the reward gradient, it can naturally be
confused with one. However, this ES optimizes a different objective than the
reward alone: it optimizes the average reward of the entire population,
thereby seeking parameters that are robust to perturbation. This difference can
channel ES into areas of the search space distinct from those reached by
gradient descent, and consequently toward networks with distinct properties. This
unique robustness-seeking property, and its consequences for optimization, are
demonstrated in several domains. They include humanoid locomotion, where
networks from policy gradient-based reinforcement learning are significantly
less robust to parameter perturbation than ES-based policies solving the same
task. While the implications of such robustness and robustness-seeking remain
open to further study, this work's main contribution is to highlight such
differences and their potential importance.
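The distinction above can be made concrete in a few lines. The sketch below uses a toy quadratic reward and hyperparameters of my own choosing, not the paper's setup: the ES estimator averages (rank-normalized, as is standard ES practice) rewards over a Gaussian population, so it follows the gradient of the *smoothed* reward, whereas the central finite-difference estimate targets the gradient of the reward itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(theta):
    # Toy objective; a sharp peak would show ES's preference for
    # broad, perturbation-robust optima even more clearly.
    return -np.sum(theta ** 2)

def es_gradient(theta, sigma=0.1, n_pop=200):
    """Estimate the gradient of E_eps[R(theta + sigma*eps)], the smoothed reward."""
    eps = rng.standard_normal((n_pop, theta.size))
    rewards = np.array([reward(theta + sigma * e) for e in eps])
    # Reward normalization rescales the magnitude; the direction is what matters.
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    return (eps.T @ rewards) / (n_pop * sigma)

def fd_gradient(theta, h=1e-5):
    """Central finite-difference gradient of R itself."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = h
        g[i] = (reward(theta + e) - reward(theta - e)) / (2 * h)
    return g

theta = np.array([1.0, -2.0])
print(es_gradient(theta))  # noisy estimate of the smoothed-reward gradient direction
print(fd_gradient(theta))  # ≈ [-2., 4.], the exact gradient of this quadratic
```

For this smooth toy function both point the same way; the paper's point is that on rugged reward landscapes the smoothed objective can pull the search toward different, more robust regions.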
Evolutionary Design of Numerical Methods: Generating Finite Difference and Integration Schemes by Differential Evolution
Classical and new numerical schemes are generated using evolutionary
computing. Differential Evolution is used to find the coefficients of finite
difference approximations of function derivatives, and of single and multi-step
integration methods. The coefficients are reverse-engineered from training
samples of a target function and its derivative. The Runge-Kutta
schemes are trained using the order condition equations. An appealing feature
of the evolutionary method is the low number of model parameters. The
population size, termination criterion and number of training points are
determined in a sensitivity analysis. Computational results show good agreement
between evolved and analytical coefficients. In particular, a new fifth-order
Runge-Kutta scheme is computed which adheres to the order conditions with a sum
of absolute errors of order 10^-14. Execution of the evolved schemes proved the
intended orders of accuracy. The outcome of this study is valuable for future
developments in the design of complex numerical methods that are out of reach
by conventional means.
Comment: 19 pages, 7 figures, 10 tables, 4 appendices
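As a small illustration of this reverse-engineering idea (the stencil, training function, and use of SciPy's Differential Evolution are my choices, not necessarily the paper's setup), DE can recover the classical three-point central-difference coefficients from samples of a function and its known derivative:

```python
import numpy as np
from scipy.optimize import differential_evolution

h = 0.1
offsets = np.array([-1.0, 0.0, 1.0])   # stencil points: x - h, x, x + h
x_train = np.linspace(-1.0, 1.0, 20)   # training abscissae
f = lambda x: x ** 2                   # training function
df = lambda x: 2 * x                   # its known derivative

def loss(c):
    """Mean squared error of the candidate stencil against the true derivative."""
    approx = sum(ci * f(x_train + k * h) for ci, k in zip(c, offsets)) / h
    return np.mean((approx - df(x_train)) ** 2)

result = differential_evolution(loss, bounds=[(-2, 2)] * 3, seed=1)
print(np.round(result.x, 4))  # ≈ [-0.5, 0.0, 0.5]: the central-difference stencil
```

A quadratic training function makes the solution unique (the central difference is exact for quadratics), so the evolved coefficients match the analytical ones, mirroring the agreement the abstract reports.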
Evolutionary Multiobjective Optimization Driven by Generative Adversarial Networks (GANs)
Recently, a growing number of works have proposed driving evolutionary
algorithms with machine learning models. The performance of such model-based
evolutionary algorithms usually depends heavily on the training quality of the
adopted models. Since model training usually requires a certain amount of data
(i.e., the candidate solutions generated by the algorithms), the performance
deteriorates rapidly as the problem scale increases, due
to the curse of dimensionality. To address this issue, we propose a
multi-objective evolutionary algorithm driven by generative adversarial
networks (GANs). At each generation of the proposed algorithm, the parent
solutions are first classified into real and fake samples to train the GANs;
the offspring solutions are then sampled from the trained GANs. Thanks to the
powerful generative ability of the GANs, our proposed algorithm is capable of
generating promising offspring solutions in high-dimensional decision space
with limited training data. The proposed algorithm is tested on 10 benchmark
problems with up to 200 decision variables. Experimental results on these test
problems demonstrate the effectiveness of the proposed algorithm.
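One generation of this loop can be sketched as follows. Because a full GAN would obscure the structure, a Gaussian model fitted to the "real" (non-dominated) parents stands in for the trained generator here; the toy bi-objective problem, population sizes, and crude sum-based selection rule are all illustrative choices of mine, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, pop_size = 30, 40

def objectives(x):
    """Toy bi-objective problem: minimize both f1 and f2."""
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def nondominated_split(pop):
    """Label the non-dominated solutions 'real' and the dominated ones 'fake'."""
    F = np.array([objectives(p) for p in pop])
    dominated = np.array([
        any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i]) for j in range(len(pop)))
        for i in range(len(pop))
    ])
    return pop[~dominated], pop[dominated]

pop = rng.uniform(-1.0, 2.0, size=(pop_size, dim))
for gen in range(30):
    real, _fake = nondominated_split(pop)
    # "Train" the generative model on the promising (real) samples;
    # a GAN generator would be fitted here instead.
    mu, sigma = real.mean(axis=0), real.std(axis=0) + 1e-6
    # Sample offspring from the learned distribution.
    offspring = rng.normal(mu, sigma, size=(pop_size, dim))
    # Environmental selection (simplified): keep the best half by objective sum.
    merged = np.vstack([pop, offspring])
    scores = np.array([objectives(p).sum() for p in merged])
    pop = merged[np.argsort(scores)[:pop_size]]

print(objectives(pop[0]))  # both objectives shrink as the population converges
```

The point the abstract makes carries over: offspring come from a distribution learned on few promising samples rather than from pairwise variation operators, which is what helps in high-dimensional decision spaces.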
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution of NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it identifies interesting challenges for future research to keep pace with the present information-processing era.
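As a minimal illustration of the metaheuristic alternative to backpropagation surveyed above (the network size, task, and ES variant are my choices, not the survey's), a simple elitist evolution strategy can train a tiny FNN's weights by treating the loss as a black box, with no gradients required:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def forward(w, x):
    """2-2-1 feedforward net with tanh hidden layer; w packs all 9 parameters."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# (1+lambda) elitist evolution strategy on the weight vector,
# with a slowly decaying mutation step size.
best = rng.standard_normal(9)
best_loss = mse(best)
sigma = 0.5
for _ in range(2000):
    children = best + sigma * rng.standard_normal((20, 9))
    losses = np.array([mse(c) for c in children])
    i = losses.argmin()
    if losses[i] < best_loss:
        best, best_loss = children[i], losses[i]
    sigma *= 0.999

print(best_loss)  # training error falls well below the 0.25 of a constant predictor
```

This is the weight-optimization viewpoint in its simplest form; the survey's other perspectives (evolving architectures, activation nodes, or learning parameters) would put those quantities into the search vector instead.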