Multi-objective gene-pool optimal mixing evolutionary algorithms
In this paper, by constructing the Multi-objective Gene-pool Optimal Mixing Evolutionary Algorithm (MO-GOMEA), we pinpoint key features for scalable multi-objective optimizers. First, an elitist archive is beneficial for keeping track of non-dominated solutions. Second, clustering can be crucial if different parts of the Pareto-optimal front need to be handled separately. Third, an efficient linkage-learning procedure with a lean linkage model is required to capture the underlying dependencies among decision variables. Finally, the optimizer must effectively exploit the learned linkage relations when generating new offspring solutions, steering the search toward promising regions of the search space.
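The elitist archive mentioned first can be made concrete with a short sketch: a new solution enters the archive only if no archived solution Pareto-dominates it, and any archived solutions it dominates are evicted. This is a generic minimal illustration (objectives minimized, names invented here), not MO-GOMEA's actual archive implementation.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Return the archive after attempting to insert `candidate`."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive  # candidate is dominated: archive unchanged
    # keep only members not dominated by the candidate, then append it
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

archive = []
for point in [(3, 3), (1, 4), (2, 2), (5, 1), (2, 5)]:
    archive = update_archive(archive, point)
print(archive)  # only mutually non-dominated points remain
```

After processing all five points, the archive holds the non-dominated set; (3, 3) was evicted by (2, 2), and (2, 5) never entered because (1, 4) dominates it.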
Use of Particle Multi-Swarm Optimization for Handling Tracking Problems
In prior work, several multiple particle swarm optimizers with sensors, namely MPSOS, MPSOIWS, MCPSOS, and HPSOS, were proposed for handling tracking problems. To handle these problems more efficiently, in this chapter we add an information sharing (IS) strategy to these existing methods and propose four new search methods: multiple particle swarm optimizers with sensors and information sharing (MPSOSIS), multiple particle swarm optimizers with inertia weight with sensors and information sharing (MPSOIWSIS), multiple canonical particle swarm optimizers with sensors and information sharing (MCPSOSIS), and hybrid particle swarm optimizers with sensors and information sharing (HPSOSIS). With the added information-sharing strategy, the search ability and performance of these methods are improved, making it possible to track a moving target promptly. On this basis, the search framework of particle multi-swarm optimization (PMSO) is established. To investigate the search ability and characteristics of the proposed methods, several computer experiments are carried out on a set of benchmark tracking problems: constant-speed type I, variable-speed type II, and variable-speed type III. By analyzing the experimental results, we demonstrate the outstanding search performance and tracking ability of the proposed search methods.
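The core information-sharing idea can be sketched as multiple sub-swarms that each keep their own best position but are also attracted to the best position found by any sub-swarm. The toy below is a generic illustration on a static sphere function; the constants, update form, and structure are invented here and are not the MPSOSIS/HPSOSIS formulations from the chapter.

```python
import random

random.seed(0)

def sphere(x):
    """Toy objective: distance-squared to the origin (the 'target')."""
    return sum(v * v for v in x)

DIM, SWARMS, SIZE, STEPS = 2, 3, 5, 60
W, C1, C2, C3 = 0.6, 1.2, 1.2, 0.8  # inertia, cognitive, social, shared (IS) pulls

swarms = [[{"x": [random.uniform(-5, 5) for _ in range(DIM)], "v": [0.0] * DIM}
           for _ in range(SIZE)] for _ in range(SWARMS)]
for s in swarms:
    for p in s:
        p["best"] = list(p["x"])
shared_best = min((p["best"] for s in swarms for p in s), key=sphere)
start_val = sphere(shared_best)

for _ in range(STEPS):
    for s in swarms:
        swarm_best = min((p["best"] for p in s), key=sphere)
        for p in s:
            for d in range(DIM):
                # the C3 term is the information-sharing pull across sub-swarms
                p["v"][d] = (W * p["v"][d]
                             + C1 * random.random() * (p["best"][d] - p["x"][d])
                             + C2 * random.random() * (swarm_best[d] - p["x"][d])
                             + C3 * random.random() * (shared_best[d] - p["x"][d]))
                p["x"][d] += p["v"][d]
            if sphere(p["x"]) < sphere(p["best"]):
                p["best"] = list(p["x"])
    shared_best = min((p["best"] for s in swarms for p in s), key=sphere)

print(sphere(shared_best))  # best value found across all sub-swarms
```

Because personal bests only ever improve, the shared best is monotonically non-increasing; the shared term is what lets one sub-swarm's discovery redirect the others, which is the property exploited for tracking.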
Robustness for Free: Quality-Diversity Driven Discovery of Agile Soft Robotic Gaits
Soft robotics aims to develop robots able to adapt their behavior across a
wide range of unstructured and unknown environments. A critical challenge of
soft robotic control is that nonlinear dynamics often result in complex
behaviors hard to model and predict. Typically behaviors for mobile soft robots
are discovered through empirical trial and error and hand-tuning. More
recently, optimization algorithms such as Genetic Algorithms (GA) have been
used to discover gaits, but these behaviors are often optimized for a single
environment or terrain, and can be brittle to unplanned changes to terrain. In
this paper we demonstrate how Quality Diversity Algorithms, which search for a
range of high-performing behaviors, can produce repertoires of gaits that are
robust to changing terrains. This robustness significantly outperforms that of
gaits produced by a single-objective optimization algorithm.
Comment: 6 pages, submitted to IEEE RoboSoft
Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort
Many production-grade algorithms benefit from combining an asymptotically
efficient algorithm for solving big problem instances, by splitting them into
smaller ones, and an asymptotically inefficient algorithm with a very small
implementation constant for solving small subproblems. A well-known example is
stable sorting, where mergesort is often combined with insertion sort to
achieve a constant but noticeable speed-up.
We apply this idea to non-dominated sorting. Namely, we combine the
divide-and-conquer algorithm, which has the best currently known asymptotic
runtime, with the Best Order Sort algorithm, which has a quadratic worst-case
runtime but demonstrates the best practical performance among quadratic
algorithms.
Empirical evaluation shows that the hybrid's running time is typically not
worse than of both original algorithms, while for large numbers of points it
outperforms them by at least 20%. For smaller numbers of objectives, the
speedup can be as large as four times.
Comment: A two-page abstract of this paper will appear in the proceedings
companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO
2017)
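The mergesort-plus-insertion-sort example named above shows the hybridization pattern concretely: below a size threshold, recursion hands off to the algorithm with the smaller constant factor. The same structure, with non-dominated sorting algorithms substituted for the sorting routines, is what the paper applies; the threshold value here is illustrative.

```python
import random

THRESHOLD = 16  # tuning parameter: largest subproblem handled by insertion sort

def insertion_sort(a):
    """Quadratic, but very low constant factor on small inputs."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    """Mergesort that falls back to insertion sort on small subproblems."""
    if len(a) <= THRESHOLD:
        return insertion_sort(a)
    mid = len(a) // 2
    left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
    merged, i, j = [], 0, 0  # standard merge step
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

random.seed(2)
data = [random.randint(0, 999) for _ in range(100)]
print(hybrid_sort(data) == sorted(data))
```

The asymptotics come from the divide-and-conquer shell, while small subproblems, which dominate the call count, run through the cheap quadratic routine; tuning THRESHOLD trades one against the other.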
Discovering rules for rule-based machine learning with the help of novelty search
Automated prediction systems based on machine learning (ML) are employed in practical applications with increasing frequency, and stakeholders demand explanations of their decisions. ML algorithms that learn accurate sets of rules, such as learning classifier systems (LCSs), produce transparent and human-readable models by design. However, whether such models can be used effectively, both for predictions and analyses, strongly depends on the optimal placement and selection of rules (in ML this task is known as model selection). In this article, we broaden a previous analysis of a variety of techniques for efficiently placing good rules within the search space based on their local prediction errors as well as their generality. This investigation is done within a specific pre-existing LCS, named SupRB, where the placement of rules and the selection of good subsets of rules are strictly separated, in contrast to other LCSs where these tasks sometimes blend. We compare two baselines, random search and an evolution strategy (ES), with six novelty search variants: three novelty-/fitness-weighing variants and, for each of those, two differing approaches to the usage of the archiving mechanism. We find that random search is not sufficient and that sensible criteria, i.e., error and generality, are indeed needed. However, we cannot confirm that the harder-to-explain novelty search variants provide better results than the ES, which allows a good balance between low error and low complexity in the resulting models.
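The novelty score at the heart of novelty search is commonly defined as a candidate's mean distance to its k nearest neighbors in the archive; weighing it against fitness then reduces to a single scalar. The sketch below shows that generic formulation with invented names; it is not SupRB's actual scoring code.

```python
import math

def novelty(candidate, archive, k=3):
    """Mean Euclidean distance from `candidate` to its k nearest archive members."""
    dists = sorted(math.dist(candidate, other) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def score(candidate, fit, archive, weight=0.5):
    """Blend novelty and fitness; weight=1 is pure novelty, weight=0 pure fitness."""
    return weight * novelty(candidate, archive) + (1 - weight) * fit

archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(novelty((0.0, 0.0), archive))  # near three archived points -> low novelty
print(novelty((9.0, 9.0), archive))  # far from everything -> high novelty
```

Sweeping `weight` yields exactly the kind of novelty-/fitness-weighing variants the article compares, and the choice of what enters `archive` (all evaluated candidates vs. a filtered subset) corresponds to the differing archiving approaches.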
Predicting Good Configurations for GitHub and Stack Overflow Topic Models
Software repositories contain large amounts of textual data, ranging from
source code comments and issue descriptions to questions, answers, and comments
on Stack Overflow. To make sense of this textual data, topic modelling is
frequently used as a text-mining tool for the discovery of hidden semantic
structures in text bodies. Latent Dirichlet allocation (LDA) is a commonly used
topic model that aims to explain the structure of a corpus by grouping texts.
LDA requires multiple parameters to work well, and there are only rough and
sometimes conflicting guidelines available on how these parameters should be
set. In this paper, we contribute (i) a broad study of parameters to arrive at
good local optima for GitHub and Stack Overflow text corpora, (ii) an
a-posteriori characterisation of text corpora related to eight programming
languages, and (iii) an analysis of corpus feature importance via per-corpus
LDA configuration. We find that (1) popular rules of thumb for topic modelling
parameter configuration are not applicable to the corpora used in our
experiments, (2) corpora sampled from GitHub and Stack Overflow have different
characteristics and require different configurations to achieve good model fit,
and (3) we can predict good configurations for unseen corpora reliably. These
findings support researchers and practitioners in efficiently determining
suitable configurations for topic modelling when analysing textual data
contained in software repositories.
Comment: to appear as full paper at MSR 2019, the 16th International
Conference on Mining Software Repositories
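The per-corpus configuration search described above amounts to sweeping LDA hyperparameters (number of topics, Dirichlet priors) and keeping the best-scoring setting. The harness below shows that shape; `fit_score` is a stub standing in for training an LDA model and measuring fit (e.g. perplexity or coherence), and the grid values are illustrative, not those used in the paper.

```python
from itertools import product

def fit_score(corpus, k, alpha, beta):
    # Stub: in practice, fit LDA(corpus, num_topics=k, alpha, beta) and
    # return a model-fit measure. This fake score peaks at a mid-size k
    # (half the vocabulary) so the search has a well-defined optimum.
    return -abs(k - len(set(corpus)) // 2) - abs(alpha - 0.1) - abs(beta - 0.01)

def best_config(corpus, ks, alphas, betas):
    """Exhaustive sweep over the hyperparameter grid; returns the best (k, alpha, beta)."""
    return max(product(ks, alphas, betas),
               key=lambda cfg: fit_score(corpus, *cfg))

corpus = ["topic%d" % (i % 40) for i in range(400)]  # toy corpus: 40 distinct tokens
cfg = best_config(corpus,
                  ks=[5, 10, 20, 40],
                  alphas=[0.01, 0.1, 1.0],
                  betas=[0.01, 0.1])
print(cfg)
```

The paper's contribution (iii) corresponds to replacing the exhaustive sweep with a predictor that maps corpus features directly to a good configuration, skipping most of the grid.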
Analytic Design Techniques for MPT Antenna Arrays
Solar Power Satellites (SPS) represent one of the most interesting technological opportunities to provide large-scale, environmentally clean and renewable energy to the Earth [1]-[3]. A fundamental and critical component of SPSs is the Microwave Power Transmission (MPT) system, which is responsible for delivering the collected solar power to the ground rectenna [2]. Towards this end, the MPT array must exhibit a narrow main beam width (BW), a high beam efficiency (BE), and a low peak sidelobe level (SLL). Moreover, reduced realization costs and weights are also necessary [3]. To reach these contrasting goals, several design techniques have been investigated, including random methods [4] and hybrid deterministic-random approaches [2], [3]. On the contrary, well-established design tools based on stochastic optimizers [5], [6] are difficult to employ, due to their high computational costs when dealing with large arrays such as those of interest in SPS [3].
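To make the beam-quality figures of merit concrete, the sketch below computes the array factor of a uniform linear array and estimates its peak sidelobe level relative to the main-beam peak. This is a textbook toy (half-wavelength spacing, uniform weights, crude main-lobe exclusion), not an SPS-scale design tool, and all names are invented here.

```python
import cmath
import math

N = 16   # number of elements
d = 0.5  # element spacing in wavelengths

def array_factor(theta):
    """|AF| at angle theta (radians from broadside), uniform unit weights."""
    psi = 2 * math.pi * d * math.sin(theta)
    return abs(sum(cmath.exp(1j * n * psi) for n in range(N)))

angles = [(-math.pi / 2) + i * math.pi / 2000 for i in range(2001)]
pattern = [array_factor(t) for t in angles]
peak = max(pattern)  # main-beam peak (equals N at broadside)

# crude sidelobe estimate: largest pattern value outside the main lobe,
# taking |theta| >= 2/N rad as an approximate first-null cutoff
sidelobe = max(v for t, v in zip(angles, pattern) if abs(t) >= 2 / N)
sll_db = 20 * math.log10(sidelobe / peak)
print(round(sll_db, 1))  # uniform weighting gives roughly -13 dB peak SLL
```

The roughly -13 dB sidelobe floor of uniform weighting is exactly why SPS-class MPT arrays need tapered or optimized excitations, and why the high evaluation cost of stochastic optimizers on very large arrays becomes the bottleneck noted above.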