Generalizing and Unifying Gray-box Combinatorial Optimization Operators
Gray-box optimization leverages the information available about the mathematical structure of an optimization problem to design efficient search operators. Efficient hill climbers and crossover operators have been proposed in the domain of pseudo-Boolean optimization and also for some permutation problems. However, there is no general rule for how to design these efficient operators in different representation domains. This paper proposes a general framework that encompasses all known gray-box operators for combinatorial optimization problems. The framework is general enough to shed light on the design of new efficient operators for new problems and representation domains. We also unify the proofs of efficiency for gray-box hill climbers and crossovers, and show that the mathematical property explaining the speed-up of gray-box crossover operators also explains the efficient identification of improving moves in gray-box hill climbers. We illustrate the power of the new framework by proposing an efficient hill climber and crossover for two related permutation problems: the Linear Ordering Problem and the Single Machine Total Weighted Tardiness Problem. This research is partially funded by project PID2020-116727RB-I00 (HUmove), funded by MCIN/AEI/10.13039/501100011033; the TAILOR ICT-48 Network (No 952215), funded by the EU Horizon 2020 research and innovation programme; Junta de Andalucia, Spain, under contract QUAL21 010UMA; and the University of Malaga (PAR 4/2023). This work is also partially funded by a National Science Foundation (NSF) grant to D. Whitley, Award Number 1908866.
Neuroevolution in Games: State of the Art and Open Challenges
This paper surveys research on applying neuroevolution (NE) to games. In
neuroevolution, artificial neural networks are trained through evolutionary
algorithms, taking inspiration from the way biological brains evolved. We
analyse the application of NE in games along five different axes, which are the
role NE is chosen to play in a game, the different types of neural networks
used, the way these networks are evolved, how the fitness is determined and
what type of input the network receives. The article also highlights important
open research challenges in the field. (Comment: added more references, corrected typos, and added an overview table, Table 1.)
Accelerating Evolution Through Gene Masking and Distributed Search
In building practical applications of evolutionary computation (EC), two
optimizations are essential. First, the parameters of the search method need to
be tuned to the domain in order to balance exploration and exploitation
effectively. Second, the search method needs to be distributed to take
advantage of parallel computing resources. This paper presents BLADE (BLAnket
Distributed Evolution) as an approach to achieving both goals simultaneously.
BLADE uses blankets (i.e., masks on the genetic representation) to tune the
evolutionary operators during the search, and implements the search through
hub-and-spoke distribution. In the paper, (1) the blanket method is formalized
for the (1 + 1)EA case as a Markov chain process. Its effectiveness is then
demonstrated by analyzing dominant and subdominant eigenvalues of stochastic
matrices, suggesting a generalizable theory; (2) the fitness-level theory is
used to analyze the distribution method; and (3) these insights are verified
experimentally on three benchmark problems, showing that both blankets and
distribution lead to accelerated evolution. Moreover, a surprising synergy
emerges between them: When combined with distribution, the blanket approach
achieves more than an n-fold speedup with n clients in some cases. The work
thus highlights the importance and potential of optimizing evolutionary
computation in practical applications.
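The blanket idea can be sketched in miniature (a hypothetical illustration: the mask here is fixed, whereas BLADE adapts its blankets during the search, and `masked_one_plus_one_ea` is an invented name): a (1+1) EA on OneMax in which a binary mask gates which genes mutation may touch.

```python
import random

def one_max(x):
    """OneMax: count the 1-bits; maximized by the all-ones string."""
    return sum(x)

def masked_one_plus_one_ea(n=20, steps=2000, seed=0):
    """(1+1) EA with a 'blanket' mask gating per-gene mutation.
    Illustrative sketch only: the mask stays fixed here."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    mask = [1] * n   # 1 = gene open to mutation; 0 = gene frozen by the blanket
    for _ in range(steps):
        child = [b ^ (1 if mask[i] and rng.random() < 1.0 / n else 0)
                 for i, b in enumerate(x)]
        if one_max(child) >= one_max(x):   # accept ties, standard (1+1) EA
            x = child
        if one_max(x) == n:
            break
    return x
```

Freezing part of the genome (setting mask entries to 0) concentrates the mutation budget on the open genes, which is the exploration/exploitation lever the blanket method tunes during the search.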
Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Biological plastic neural networks are systems of extraordinary computational
capabilities shaped by evolution, development, and lifetime learning. The
interplay of these elements leads to the emergence of adaptive behavior and
intelligence. Inspired by such intricate natural phenomena, Evolved Plastic
Artificial Neural Networks (EPANNs) use simulated evolution in silico to breed
plastic neural networks with a large variety of dynamics, architectures, and
plasticity rules: these artificial systems are composed of inputs, outputs, and
plastic components that change in response to experiences in an environment.
These systems may autonomously discover novel adaptive algorithms, and lead to
hypotheses on the emergence of biological adaptation. EPANNs have seen
considerable progress over the last two decades. Current scientific and
technological advances in artificial neural networks are now setting the
conditions for radically new approaches and results. In particular, the
limitations of hand-designed networks could be overcome by more flexible and
innovative solutions. This paper brings together a variety of inspiring ideas
that define the field of EPANNs. The main methods and results are reviewed.
Finally, new opportunities and developments are presented.
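The core EPANN idea of evolving plasticity rather than weights can be pictured with a toy, hypothetical sketch (not a method from the paper; all names are invented): a single plastic weight adapts during its "lifetime" under an error-modulated Hebbian-style rule, while the parameter evolution would tune is the learning rate.

```python
def lifetime_error(eta, inputs, targets):
    """Run one plastic weight through a 'lifetime' and return the summed
    squared error. eta stands in for an evolved plasticity parameter;
    the weight itself is NOT evolved, only adapted within the lifetime."""
    w, err = 0.0, 0.0
    for x, t in zip(inputs, targets):
        y = w * x                    # neuron output
        err += (t - y) ** 2
        w += eta * x * (t - y)       # error-modulated Hebbian-style update
    return err
```

A plastic weight (eta > 0) tracks the target better than a frozen one (eta = 0); an EPANN-style search would explore such plasticity parameters and rules, letting adaptation itself be the evolved trait.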
Discovering gated recurrent neural network architectures
Reinforcement Learning agent networks with memory are a key component in solving POMDP tasks.
Gated recurrent networks such as those composed of Long Short-Term
Memory (LSTM) nodes have recently been used to improve
state of the art in many supervised sequential processing tasks such as speech
recognition and machine translation. However, scaling them to deep
memory tasks in the reinforcement learning domain is challenging because of sparse and deceptive
reward functions. To address this challenge, a new secondary optimization objective is first introduced
that maximizes the information (Info-max) stored in
the LSTM network. Results indicate that when combined with neuroevolution, Info-max can discover powerful
LSTM-based memory solutions that outperform traditional
RNNs. Next, for supervised learning tasks, neuroevolution techniques are employed
to design new LSTM architectures. Such architectural variations include
discovering new pathways between the recurrent layers as well as designing new gated
recurrent nodes. This dissertation proposes evolution of a tree-based
encoding of the gated memory nodes, and shows that it makes
it possible to explore new variations more effectively than other
methods. The method discovers nodes with multiple recurrent paths
and multiple memory cells, which lead to significant improvement
in the standard language modeling benchmark task. The dissertation also
shows how the search process can be sped up by training an
LSTM network to estimate performance of candidate structures, and
by encouraging exploration of novel solutions. Thus, evolutionary
design of complex neural network structures promises to improve
performance of deep learning architectures beyond human ability
to do so.
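The tree-based encoding can be pictured with a toy sketch (hypothetical names and a simplified, weightless cell; the dissertation's encoding is richer): a gated node's update rule is an expression tree over the node's inputs, and structural mutation rewrites subtrees.

```python
import math
import random

# Hypothetical tree encoding of a gated recurrent node's update rule.
# Leaves name the node's scalar inputs; internal nodes are elementwise ops.
OPS = {
    'add': lambda a, b: a + b,
    'mul': lambda a, b: a * b,
    'sigmoid': lambda a: 1.0 / (1.0 + math.exp(-a)),
    'tanh': math.tanh,
}

def evaluate(tree, env):
    """Recursively evaluate an expression tree against input values."""
    if isinstance(tree, str):        # leaf: 'x' (input), 'h' (hidden), 'c' (cell)
        return float(env[tree])
    op, *args = tree
    return OPS[op](*(evaluate(a, env) for a in args))

# A simplified, weightless LSTM-like cell update:
#   c' = sigmoid(x + h) * c + sigmoid(x + h) * tanh(x + h)
lstm_like = ('add',
             ('mul', ('sigmoid', ('add', 'x', 'h')), 'c'),
             ('mul', ('sigmoid', ('add', 'x', 'h')),
                     ('tanh', ('add', 'x', 'h'))))

def mutate(tree, rng):
    """One structural mutation: replace a random subtree with a fresh leaf."""
    if isinstance(tree, str) or rng.random() < 0.3:
        return rng.choice(['x', 'h', 'c'])
    op, *args = tree
    new_args = list(args)
    i = rng.randrange(len(new_args))
    new_args[i] = mutate(new_args[i], rng)
    return (op, *new_args)
```

Because every tree built this way is a valid update rule, the encoding lets evolution explore new gating pathways and memory-cell variants without ever producing an unevaluable candidate.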
Evolutionary neural architecture search for deep learning
Deep neural networks (DNNs) have produced state-of-the-art results in many benchmarks and problem domains.
However, the success of DNNs depends on the proper configuration of its architecture and hyperparameters.
DNNs are often not used to their full potential because it is difficult to determine what architectures and hyperparameters should be used.
While several approaches have been proposed, computational complexity of searching large design spaces makes them impractical for large modern DNNs.
This dissertation introduces an efficient evolutionary algorithm (EA) for simultaneous optimization of DNN architecture and hyperparameters.
It builds upon extensive past research of evolutionary optimization of neural network structure.
Various improvements to the core algorithm are introduced, including:
(1) discovering DNN architectures of arbitrary complexity;
(2) generating the modular, repetitive structures commonly seen in state-of-the-art DNNs;
(3) extending to the multitask learning and multiobjective optimization domains;
(4) maximizing performance and reducing wasted computation through asynchronous evaluations.
Experimental results in image classification, image captioning, and multialphabet character recognition show that the approach is able to evolve networks that are competitive with or even exceed hand-designed networks.
Thus, the method enables an automated and streamlined process to optimize DNN architectures for a given problem and can be widely applied to solve harder tasks.
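The asynchronous-evaluation idea can be sketched in miniature (all names, the search space, and the stand-in fitness below are invented for illustration; real candidates would be trained networks): a steady-state loop submits a new candidate the moment any worker finishes, instead of synchronizing on whole generations.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical, tiny hyperparameter space and a made-up stand-in for
# "train a DNN and return its validation score".
SPACE = {'layers': [1, 2, 3, 4], 'width': [8, 16, 32, 64], 'lr': [0.1, 0.01, 0.001]}

def fitness(cfg):
    """Fake score favouring a mid-sized network (best at 3 layers, width 32)."""
    return -abs(cfg['layers'] - 3) - abs(cfg['width'] - 32) / 32 - abs(cfg['lr'] - 0.01)

def sample(rng):
    return {k: rng.choice(v) for k, v in SPACE.items()}

def mutate(cfg, rng):
    child = dict(cfg)
    k = rng.choice(list(SPACE))
    child[k] = rng.choice(SPACE[k])
    return child

def async_evolve(budget=60, workers=4, seed=0):
    """Steady-state EA: refill each worker slot as soon as ANY evaluation
    finishes, so no compute idles waiting for a generation barrier."""
    rng = random.Random(seed)
    best, best_f = None, float('-inf')
    submitted = 0
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pending = set()
        while submitted < workers:                 # fill the workers once
            cfg = sample(rng)
            pending.add(pool.submit(lambda c=cfg: (c, fitness(c))))
            submitted += 1
        while pending:
            done = next(as_completed(pending))     # wait for any one result
            pending.remove(done)
            cfg, f = done.result()
            if f > best_f:
                best, best_f = cfg, f
            if submitted < budget:                 # immediately refill the slot
                child = mutate(best, rng)
                pending.add(pool.submit(lambda c=child: (c, fitness(c))))
                submitted += 1
    return best, best_f
```

With evaluation times as uneven as DNN training runs, this steady-state scheme avoids the idle time a generation-synchronized EA would spend waiting for its slowest candidate.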
Learning Strategies for Evolved Co-operating Multi-Agent Teams in Pursuit Domain
This study investigates how genetic programming (GP) can be effectively used in a
multi-agent system to allow agents to learn to communicate. Using the predator-prey
scenario and a co-operative learning strategy, communication protocols are compared
as multiple predator agents learn the meaning of commands in order to achieve their
common goal of first finding, and then tracking prey. This work is divided into three
parts. The first part uses a simple GP language in the Pursuit Domain Development
Kit (PDP) to investigate several communication protocols, and compares the predators'
ability to find and track prey when the prey moves both linearly and randomly.
The second part, again in the PDP environment, enhances the GP language and fitness
measure in search of a better solution for when the prey moves randomly. The
third part uses the Ms. Pac-Man Development Toolkit to test how the enhanced GP
language performs in a game environment. The outcome of each part of this study
reveals emergent behaviours in different forms of message sending patterns. The results
from Part 1 reveal a general synchronization behaviour emerging from simple
message passing among agents. Additionally, the results show a learned behaviour
in the best result which resembles the behaviour of guards and reinforcements found
in popular stealth video games. The outcomes from Part 2 reveal an emergent message
sending pattern such that one agent is designated as the "sending" agent and
the remaining agents are designated as "receiving" agents. Evolved agents in the Ms.
Pac-Man simulator show an emergent sending pattern in which there is one agent that
sends messages when it is in view of the prey. In addition, it is shown that evolved
agents in both Part 2 and Part 3 are able to learn a language. For example, "sending"
agents are able to make decisions about when and what type of command to send
and "receiving" agents are able to associate the intended meaning with commands.
Evolutionary Computation 2020
Intelligent optimization is based on the mechanisms of computational intelligence: refining a suitable feature model, designing an effective optimization algorithm, and then obtaining an optimal or satisfactory solution to a complex problem. Intelligent algorithms are key tools for ensuring global optimization quality, fast optimization efficiency, and robust optimization performance. Intelligent optimization algorithms have been studied by many researchers, leading to improvements in the performance of algorithms such as the evolutionary algorithm, the whale optimization algorithm, the differential evolution algorithm, and particle swarm optimization. Studies in this arena have also resulted in breakthroughs in solving complex problems, including the green shop scheduling problem, the severely nonlinear problem of one-dimensional geodesic electromagnetic inversion, error and bug finding in software, the 0-1 knapsack problem, the traveling salesman problem, and the logistics distribution center siting problem. The editors are confident that this book can open a new avenue for further improvement and discoveries in the area of intelligent algorithms. The book is a valuable resource for researchers interested in understanding the principles and design of intelligent algorithms.