Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial effort. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example for such an update mechanism is the one-fifth success
rule for step-size adaptation in evolution strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We consider the (1+(λ,λ)) GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule then the expected optimization time on
OneMax is linear. This is better than what any static
population size can achieve and is asymptotically optimal also among
all adaptive parameter choices.
Comment: This is the full version of a paper that is to appear at GECCO 201
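The self-adjustment described above can be sketched in a few lines. The following is a minimal toy of a (1+(λ,λ))-style GA on OneMax in which the one-fifth success rule steers the population size; the update factor F, the cap at n, and all names are illustrative assumptions, not the paper's exact setup.

```python
import random

def onemax(x):
    # Fitness: number of one-bits in the string.
    return sum(x)

def one_plus_ll_ga(n, F=1.5, seed=0):
    """Toy (1+(lambda,lambda)) GA on OneMax with the one-fifth
    success rule controlling the population size lam (a sketch,
    assuming standard mutation and crossover phases)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    lam = 1.0
    evals = 0
    while fx < n:
        k = max(1, round(lam))
        p = lam / n
        # Mutation phase: sample ell ~ Bin(n, lam/n), flip ell bits
        # in each of k offspring, keep the best mutant.
        ell = sum(rng.random() < p for _ in range(n))
        best_mut, best_mut_f = None, -1
        for _ in range(k):
            y = x[:]
            for i in (rng.sample(range(n), ell) if ell else []):
                y[i] = 1 - y[i]
            fy = onemax(y); evals += 1
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: take each bit from the best mutant with
        # probability 1/lam, otherwise from the parent.
        c = 1.0 / lam
        best_cross, best_cross_f = x, fx
        for _ in range(k):
            y = [b if rng.random() < c else a for a, b in zip(x, best_mut)]
            fy = onemax(y); evals += 1
            if fy > best_cross_f:
                best_cross, best_cross_f = y, fy
        # One-fifth success rule: shrink lam on success, grow on failure.
        if best_cross_f > fx:
            lam = max(1.0, lam / F)
        else:
            lam = min(float(n), lam * F ** 0.25)
        if best_cross_f >= fx:
            x, fx = best_cross, best_cross_f
    return evals
```

On this toy, the abstract's result corresponds to the total number of fitness evaluations growing only linearly in n.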
A self-learning particle swarm optimizer for global optimization problems
Copyright @ 2011 IEEE. All Rights Reserved. This article was made available through the Brunel Open Access Publishing Fund.
Particle swarm optimization (PSO) has been shown to be an effective tool for solving global optimization problems. So far, most PSO algorithms use a single learning pattern for all particles, which means that all particles in a swarm follow the same strategy. This monotonic learning pattern may leave a particular particle unable to deal with different complex situations. This paper presents a novel algorithm, called the self-learning particle swarm optimizer (SLPSO), for global optimization problems. In SLPSO, each particle has a set of four strategies to cope with different situations in the search space. The cooperation of the four strategies is implemented by an adaptive learning framework at the individual level, which enables a particle to choose the optimal strategy according to its own local fitness landscape. An experimental study on a set of 45 test functions and two real-world problems shows that SLPSO outperforms several other peer algorithms.
This work was supported by the Engineering and Physical Sciences Research Council of U.K. under Grants EP/E060722/1 and EP/E060722/2.
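The idea of per-particle strategy selection can be illustrated with a small sketch: each particle keeps a probability distribution over four update strategies and reinforces whichever one improves its own fitness. The strategy definitions, the reinforcement amount, and the test function below are simplified assumptions in the spirit of SLPSO, not the published algorithm.

```python
import random

def sphere(x):
    # Simple unimodal test function to minimize.
    return sum(v * v for v in x)

def slpso_sketch(dim=5, swarm=10, iters=200, seed=1):
    """Sketch of adaptive strategy selection at the individual level:
    every particle owns its own selection probabilities over four
    velocity-update strategies (all parameter values illustrative)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    probs = [[0.25] * 4 for _ in range(swarm)]  # one distribution per particle

    def step(i, s):
        w, c = 0.7, 1.5
        for d in range(dim):
            r1, r2 = rng.random(), rng.random()
            if s == 0:    # converge toward the global best
                vel[i][d] = w * vel[i][d] + c * r1 * (gbest[d] - pos[i][d])
            elif s == 1:  # exploit the particle's own personal best
                vel[i][d] = w * vel[i][d] + c * r1 * (pbest[i][d] - pos[i][d])
            elif s == 2:  # explore: learn from a random peer's best
                j = rng.randrange(swarm)
                vel[i][d] = w * vel[i][d] + c * r1 * (pbest[j][d] - pos[i][d])
            else:         # jump out: small random perturbation
                vel[i][d] = r2 - 0.5
            pos[i][d] += vel[i][d]

    for _ in range(iters):
        for i in range(swarm):
            # Roulette-wheel choice of a strategy for this particle.
            r, s, acc = rng.random(), 3, 0.0
            for k in range(4):
                acc += probs[i][k]
                if r < acc:
                    s = k
                    break
            step(i, s)
            f = sphere(pos[i])
            if f < pbest_f[i]:
                # Reward the chosen strategy, then renormalize.
                probs[i][s] += 0.05
                total = sum(probs[i])
                probs[i] = [p / total for p in probs[i]]
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest_f
```

Because each particle adapts its own distribution, particles in different regions of the landscape can settle on different strategy mixes, which is the point the abstract makes about individual-level adaptation.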
Runtime Analysis of the Genetic Algorithm on Random Satisfiable 3-CNF Formulas
The (1+(λ,λ)) genetic algorithm, first proposed at GECCO 2013,
showed surprisingly good performance on some optimization problems. The
theoretical analysis so far was restricted to the OneMax test function, where
this GA profited from the perfect fitness-distance correlation. In this work,
we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in
the planted solution model having at least logarithmic average degree, which
are known to have a weaker fitness distance correlation.
We prove that this GA with a fixed, not too large population size again obtains
runtimes better than Θ(n log n), which is a lower bound for most
evolutionary algorithms on pseudo-Boolean problems with a unique optimum.
However, the self-adjusting version of the GA risks reaching population sizes
at which the intermediate selection of the GA, due to the weaker
fitness-distance correlation, is not able to distinguish a profitable offspring
from others. We show that this problem can be overcome by equipping the
self-adjusting GA with an upper limit for the population size. Apart from
sparse instances, this limit can be chosen in a way that the asymptotic
performance does not worsen compared to the idealistic OneMax case. Overall,
this work shows that the (1+(λ,λ)) GA can provably have a good
performance on combinatorial search and optimization problems also in the
presence of a weaker fitness-distance correlation.
Comment: An extended abstract of this report will appear in the proceedings of
the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017).
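The safeguard proposed above, an upper limit on the self-adjusted population size, amounts to one extra clamp in the one-fifth-rule update. A minimal sketch, where the factor F, the 1/4 growth exponent, and the parameter names are illustrative choices rather than the paper's exact constants:

```python
def update_lambda(lam, success, F=1.5, cap=None):
    """One-fifth success rule for the population size lam, with an
    optional upper limit (cap) as suggested for instances with weaker
    fitness-distance correlation. All constants are illustrative."""
    if success:
        lam = lam / F          # shrink after a successful iteration
    else:
        lam = lam * F ** 0.25  # grow slowly after a failure
    lam = max(1.0, lam)        # population size never drops below 1
    if cap is not None:
        lam = min(lam, cap)    # the safeguard: never exceed the cap
    return lam
```

With the cap in place, a long run of failures can no longer drive the population to sizes where intermediate selection stops distinguishing profitable offspring.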
Credit Assignment in Adaptive Evolutionary Algorithms
In this paper, a new method for assigning credit to search
operators is presented. Starting with the principle of optimizing
search bias, search operators are selected based on their ability to
create solutions that are historically linked to future generations.
Using a novel framework for defining performance
measurements, distributing credit for performance, and the
statistical interpretation of this credit, a new adaptive method is
developed and shown to outperform a variety of adaptive and
non-adaptive competitors.
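The general mechanism, selecting operators in proportion to the credit they have earned, can be sketched with a simple probability-matching controller. The decay rate, probability floor, and class interface below are illustrative assumptions, not the paper's specific framework.

```python
import random

class OperatorCredit:
    """Sketch of credit assignment for search operators: each
    operator's selection probability tracks its recent average
    reward, with a floor so no operator starves (all constants
    are illustrative)."""
    def __init__(self, n_ops, p_min=0.05, decay=0.8, seed=0):
        self.rng = random.Random(seed)
        self.quality = [1.0] * n_ops  # running reward estimates
        self.p_min = p_min
        self.decay = decay

    def select(self):
        total = sum(self.quality)
        n = len(self.quality)
        # Mix reward-proportional probabilities with a minimum floor.
        probs = [self.p_min + (1 - n * self.p_min) * q / total
                 for q in self.quality]
        r, acc = self.rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r < acc:
                return i
        return n - 1

    def credit(self, op, reward):
        # Exponential recency-weighted average of observed rewards.
        self.quality[op] = (self.decay * self.quality[op]
                            + (1 - self.decay) * reward)
```

After repeated use, operators that keep producing improvements accumulate higher quality estimates and are chosen more often, which is the behavior the abstract's adaptive method formalizes statistically.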
Penalized Likelihood Methods for Estimation of Sparse High Dimensional Directed Acyclic Graphs
Directed acyclic graphs (DAGs) are commonly used to represent causal
relationships among random variables in graphical models. Applications of these
models arise in the study of physical, as well as biological systems, where
directed edges between nodes represent the influence of components of the
system on each other. The general problem of estimating DAGs from observed data
is computationally NP-hard. Moreover, two directed graphs may be observationally
equivalent. When the nodes exhibit a natural ordering, the problem of
estimating directed graphs reduces to the problem of estimating the structure
of the network. In this paper, we propose a penalized likelihood approach that
directly estimates the adjacency matrix of DAGs. Both lasso and adaptive lasso
penalties are considered and an efficient algorithm is proposed for estimation
of high dimensional DAGs. We study variable selection consistency of the two
penalties when the number of variables grows to infinity with the sample size.
We show that although lasso can only consistently estimate the true network
under stringent assumptions, adaptive lasso achieves this task under mild
regularity conditions. The performance of the proposed methods is compared to
alternative methods in simulated, as well as real, data examples.
Comment: 19 pages, 8 figures