Lexicase selection in Learning Classifier Systems
The lexicase parent selection method selects parents by considering
performance on individual data points in random order, instead of using a
fitness function based on aggregate accuracy over the data. While the method has
demonstrated promise in genetic programming and more recently in genetic
algorithms, its applications in other forms of evolutionary machine learning
have not been explored. In this paper, we investigate the use of lexicase
parent selection in Learning Classifier Systems (LCS) and study its effect on
classification problems in a supervised setting. We further introduce a new
variant of lexicase selection, called batch-lexicase selection, which allows
for the tuning of selection pressure. We compare the two lexicase selection
methods with tournament and fitness proportionate selection methods on binary
classification problems. We show that batch-lexicase selection results in the
creation of more generic rules, which is favorable for generalization on future
data. We further show that batch-lexicase selection results in better
generalization in situations of partial or missing data.
Comment: Genetic and Evolutionary Computation Conference, 201
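The core selection procedure described above can be sketched compactly. This is a minimal, illustrative implementation of standard lexicase selection (not the batch variant, and not the authors' code); the function and parameter names are our own.

```python
import random

def lexicase_select(population, error_fn, cases):
    """Select one parent via lexicase selection.

    population: list of candidate individuals
    error_fn(ind, case): error of `ind` on a single training case (0 = perfect)
    cases: list of training cases
    All names here are illustrative, not taken from the paper.
    """
    candidates = list(population)
    # consider the training cases in a fresh random order for each selection
    for case in random.sample(cases, len(cases)):
        best = min(error_fn(ind, case) for ind in candidates)
        # keep only the individuals that are elite on this case
        candidates = [ind for ind in candidates if error_fn(ind, case) == best]
        if len(candidates) == 1:
            return candidates[0]
    # still tied after all cases: break the tie randomly
    return random.choice(candidates)
```

Batch-lexicase, as introduced in the paper, instead filters on error aggregated over batches of cases, which is what exposes a tunable selection pressure.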
A Survey of Genetic Improvement Search Spaces
Genetic Improvement (GI) uses automated search to improve existing software. Most GI work has focused on empirical studies that successfully apply GI to improve software's running time, fix bugs, add new features, etc. There has been little research into why GI has been so successful. For example, genetic programming has been the most commonly applied search algorithm in GI. Is genetic programming the best choice for GI? Initial attempts to answer this question have explored GI's mutation search space. This paper summarises the work published on this question to date.
CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems?
The covariance matrix adaptation evolution strategy (CMA-ES) is one of the
most successful methods for solving black-box continuous optimization problems.
One practically useful aspect of the CMA-ES is that it can be used without
hyperparameter tuning. However, the hyperparameter settings still have a
considerable impact, especially for difficult tasks such as solving multimodal
or noisy problems. In this study, we investigate whether the CMA-ES with
default population size can solve multimodal and noisy problems. To perform
this investigation, we develop a novel learning rate adaptation mechanism for
the CMA-ES, which adapts the learning rate so as to maintain a constant
signal-to-noise ratio. We investigate the behavior of the CMA-ES with the
proposed learning rate adaptation mechanism through numerical experiments, and
compare the results with those obtained for the CMA-ES with a fixed learning
rate. The results demonstrate that, when the proposed learning rate adaptation
is used, the CMA-ES with default population size works well on multimodal
and/or noisy problems, without the need for extremely expensive learning rate
tuning.
Comment: Nominated for the best paper of GECCO'23 ENUM Track. We have
corrected the error of Eq. (7).
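The idea of adapting a learning rate to hold an estimated signal-to-noise ratio near a target can be sketched generically. This is not the paper's actual update rule (their Eq. (7) is not reproduced here): the moving-average estimator, constants, and names below are all our own assumptions.

```python
import numpy as np

def adapt_learning_rate(eta, delta, ema_mean, ema_sq, beta=0.9,
                        target_snr=1.0, damping=0.1):
    """One step of a generic SNR-based learning rate rule (illustrative only).

    delta: the latest raw parameter update (before scaling by eta)
    ema_mean, ema_sq: exponential moving averages of delta and ||delta||^2
    """
    # update exponential moving averages of the raw update and its squared norm
    ema_mean = beta * ema_mean + (1 - beta) * delta
    ema_sq = beta * ema_sq + (1 - beta) * float(delta @ delta)
    signal = float(ema_mean @ ema_mean)   # squared norm of the mean update
    noise = max(ema_sq - signal, 1e-12)   # variance-like remainder
    snr = min(signal / noise, 100.0)      # cap to keep exp() finite
    # grow eta when the estimated SNR exceeds the target, shrink it otherwise
    eta = float(np.clip(eta * np.exp(damping * (snr - target_snr) / target_snr),
                        1e-4, 1.0))
    return eta, ema_mean, ema_sq
```

Consistent updates (high SNR) push the rate up toward its cap; conflicting, noisy updates (low SNR) push it down, which is the qualitative behavior the abstract describes for multimodal and noisy problems.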
Identifying Vulnerabilities of Industrial Control Systems using Evolutionary Multiobjective Optimisation
In this paper we propose a novel methodology to assist in identifying
vulnerabilities in a real-world complex heterogeneous industrial control
system (ICS) using two evolutionary multiobjective optimisation (EMO)
algorithms, NSGA-II and SPEA2. Our approach is evaluated on a well known
benchmark chemical plant simulator, the Tennessee Eastman (TE) process model.
We identified vulnerabilities in individual components of the TE model and then
made use of these to generate combinatorial attacks to damage the safety of the
system, and to cause economic loss. Results were compared against random
attacks, and the performance of the EMO algorithms was evaluated using
hypervolume, spread and inverted generational distance (IGD) metrics. A defence
against these attacks in the form of a novel intrusion detection system was
developed, using a number of machine learning algorithms. The designed approach
was further tested against the developed detection methods. Results demonstrate
that EMO algorithms are a promising tool in the identification of the most
vulnerable components of ICS, and weaknesses of any existing detection systems
in place to protect the system. The proposed approach can be used by control
and security engineers to design security aware control, and test the
effectiveness of security mechanisms, both during design, and later during
system operation.
Comment: 25 pages
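Of the metrics mentioned above, hypervolume is the most commonly reported; for two objectives it reduces to a simple sweep. The helper below is a minimal 2-D sketch for a minimisation problem, with names of our own choosing (the paper does not give code).

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D front for a minimisation problem.

    front: list of (f1, f2) objective pairs, all dominating `ref`
    ref:   reference point (r1, r2)
    Illustrative helper; not taken from the paper.
    """
    hv, prev_f2 = 0.0, ref[1]
    # sweep in increasing f1; each non-dominated point adds one rectangle
    for f1, f2 in sorted(front):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A larger hypervolume means the attack front dominates more of the objective space (e.g. safety damage vs. economic loss) relative to the reference point.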
Evolutionary Construction of Convolutional Neural Networks
Neuro-Evolution is a field of study that has recently gained significantly
increased traction in the deep learning community. It combines deep neural
networks and evolutionary algorithms to improve and/or automate the
construction of neural networks. Recent Neuro-Evolution approaches have shown
promising results, rivaling hand-crafted neural networks in terms of accuracy.
In this work, a two-step approach is introduced where a convolutional autoencoder is created
that efficiently compresses the input data in the first step, and a
convolutional neural network is created to classify the compressed data in the
second step. The creation of networks in both steps is guided by an
evolutionary process, where new networks are constantly being generated by
mutating members of a collection of existing networks. Additionally, a method
is introduced that considers the trade-off between compression and information
loss of different convolutional autoencoders. This is used to select the
optimal convolutional autoencoder from among those evolved to compress the data
for the second step. The complete framework is implemented, tested on the
popular CIFAR-10 data set, and the results are discussed. Finally, a number of
possible directions for future work with this particular framework in mind are
considered, including opportunities to improve its efficiency and its
application in particular areas.
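The evolutionary process described above (new networks constantly generated by mutating members of a collection) can be sketched as a minimal mutation-only loop. This is a generic illustration under our own naming, not the authors' implementation; in their setting a genome would encode a network architecture and `fitness_fn` would involve training and evaluating it.

```python
import random

def evolve(init_genomes, fitness_fn, mutate_fn, generations=20, pool_size=10):
    """Minimal mutation-only evolutionary loop (illustrative sketch).

    A pool of genomes is kept; each generation a random member is mutated,
    the child is evaluated, and the pool is truncated to the fittest
    `pool_size` members. Higher fitness is better.
    """
    pool = [(fitness_fn(g), g) for g in init_genomes]
    for _ in range(generations):
        _, parent = random.choice(pool)        # pick a member to mutate
        child = mutate_fn(parent)
        pool.append((fitness_fn(child), child))
        pool.sort(key=lambda t: t[0], reverse=True)
        pool = pool[:pool_size]                # keep only the fittest
    return pool[0][1]                          # best genome found
```

Because the best member is never discarded, best-so-far fitness is monotone non-decreasing over generations.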
How Fast Can We Play Tetris Greedily With Rectangular Pieces?
Consider a variant of Tetris played on a board of width $w$ and infinite
height, where the pieces are axis-aligned rectangles of arbitrary integer
dimensions, the pieces can only be moved before letting them drop, and a row
does not disappear once it is full. Suppose we want to follow a greedy
strategy: let each rectangle fall where it will end up the lowest given the
current state of the board. To do so, we want a data structure which can always
suggest a greedy move. In other words, we want a data structure which maintains
a set of rectangles, supports queries which return where to drop the
rectangle, and updates which insert a rectangle dropped at a certain position
and return the height of the highest point in the updated set of rectangles. We
show via a reduction from the Multiphase problem [Pătraşcu, 2010] that on
a board of width $w$, if the OMv conjecture [Henzinger et al., 2015]
is true, then both operations cannot be supported in time $O(w^{1/2-\epsilon})$
simultaneously, for any $\epsilon > 0$. The reduction also implies polynomial
bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand,
we show that there is a data structure supporting both operations in
$\tilde{O}(\sqrt{w})$ time on boards of width $w$, matching the lower bound up
to a polylogarithmic factor.
Comment: Correction of typos and other minor corrections
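The two operations the abstract asks a data structure to support are easy to state with a naive skyline representation. The sketch below is an O(w)-per-operation baseline of our own construction, meant only to make the query/update interface concrete; the paper's data structure is far more efficient.

```python
class GreedyTetrisBoard:
    """Naive skyline structure for the greedy Tetris problem (O(w) per op).

    heights[i] is the current height of column i. A dropped rectangle of
    width a rests on the maximum height of the a columns it covers; the
    greedy query scans every offset for the lowest such resting height.
    Illustrative baseline, not the paper's data structure.
    """
    def __init__(self, w):
        self.w = w
        self.heights = [0] * w

    def query(self, a):
        """Leftmost position where a width-a rectangle would rest lowest."""
        best_pos, best_h = 0, max(self.heights[:a])
        for x in range(1, self.w - a + 1):
            h = max(self.heights[x:x + a])
            if h < best_h:
                best_pos, best_h = x, h
        return best_pos, best_h

    def drop(self, x, a, b):
        """Drop an a-wide, b-tall rectangle at column x; return max height."""
        top = max(self.heights[x:x + a]) + b
        for i in range(x, x + a):
            self.heights[i] = top
        return max(self.heights)
```

Beating this per-operation scan, subject to the conditional lower bound, is exactly what the paper's result is about.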