Self-adaptation of Mutation Rates in Non-elitist Populations
The runtime of evolutionary algorithms (EAs) depends critically on their
parameter settings, which are often problem-specific. Automated tuning
schemes have been developed to alleviate the high cost of manual parameter
tuning. Experimental results indicate that self-adaptation, where
parameter settings are encoded in the genomes of individuals, can be effective
in continuous optimisation. However, results in discrete optimisation have been
less conclusive. Furthermore, a rigorous runtime analysis that explains how
self-adaptation can lead to asymptotic speedups has been missing. This paper
provides the first such analysis for discrete, population-based EAs. We apply
level-based analysis to show how a self-adaptive EA is capable of fine-tuning
its mutation rate, leading to exponential speedups over EAs using fixed
mutation rates.
Comment: To appear in the Proceedings of the 14th International Conference on Parallel Problem Solving from Nature (PPSN)
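As a rough illustration of the mechanism this abstract describes, here is a minimal Python sketch of a non-elitist EA that encodes each individual's mutation rate in its own genome, shown on OneMax. The multiplicative rate-update rule, the parameter values, and the truncation selection used here are illustrative assumptions, not the exact scheme analysed in the paper.

```python
import random

def onemax(bits):
    return sum(bits)

def self_adaptive_ea(n=50, pop_size=20, generations=200, A=1.2, seed=0):
    rng = random.Random(seed)
    # An individual is a pair (bitstring, mutation_rate); the rate is part
    # of the genome and is inherited and perturbed along with the bits.
    pop = [([rng.randint(0, 1) for _ in range(n)], 1.0 / n)
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(2 * pop_size):
            parent_bits, parent_rate = pop[rng.randrange(pop_size)]
            # Self-adapt: perturb the inherited rate before mutating the bits.
            rate = parent_rate * (A if rng.random() < 0.5 else 1.0 / A)
            rate = min(0.5, max(1.0 / n ** 2, rate))
            child = [1 - b if rng.random() < rate else b for b in parent_bits]
            offspring.append((child, rate))
        # Non-elitist: parents are discarded; truncation selection keeps
        # only the best pop_size offspring.
        offspring.sort(key=lambda ind: onemax(ind[0]), reverse=True)
        pop = offspring[:pop_size]
    return max(onemax(bits) for bits, _ in pop)
```

The point of the encoding is that individuals carrying a well-suited mutation rate tend to produce fitter offspring, so good rates spread through the population without any external tuning schedule.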
Improved Runtime Bounds for the Univariate Marginal Distribution Algorithm via Anti-Concentration
Unlike traditional evolutionary algorithms which produce offspring via
genetic operators, Estimation of Distribution Algorithms (EDAs) sample
solutions from probabilistic models which are learned from selected
individuals. It is hoped that EDAs may improve optimisation performance on
epistatic fitness landscapes by learning variable interactions. However, hardly
any rigorous results are available to support claims about the performance of
EDAs, even for fitness functions without epistasis. The expected runtime of the
Univariate Marginal Distribution Algorithm (UMDA) on OneMax was recently
bounded from above by Dang and Lehre (GECCO 2015). Later, Krejca and Witt
(FOGA 2017) proved a lower bound via an involved drift analysis. We prove an
improved upper bound, given some restrictions on the population size; for
suitable population sizes this implies a tight bound matching the runtime of
classical EAs. Our analysis uses the level-based theorem and
anti-concentration properties of the Poisson-Binomial distribution. We expect
that these generic methods will facilitate further analysis of EDAs.
Comment: 19 pages, 1 figure
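The sampling-selection-update loop of the UMDA described above can be sketched in a few lines of Python on OneMax. The population sizes, generation count, and margin values here are illustrative choices, not those required by the paper's analysis.

```python
import random

def umda_onemax(n=30, lam=100, mu=50, generations=60, seed=1):
    rng = random.Random(seed)
    p = [0.5] * n  # one independent marginal probability per bit position
    for _ in range(generations):
        # Sample lam individuals from the current product distribution.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness = number of 1-bits
        selected = pop[:mu]
        # Set each marginal to the frequency of 1s among the selected
        # individuals, clipped to the usual borders 1/n and 1 - 1/n
        # so that no bit value can fixate permanently.
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(1.0 - 1.0 / n, max(1.0 / n, freq))
    return p
```

Because each marginal is learned independently, the model captures no variable interactions; this is exactly why OneMax, which has no epistasis, is the natural first benchmark for rigorous UMDA analysis.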
Investigating possible causal relations among physical, chemical and biological variables across regions in the Gulf of Maine
We examine potential causal relations between ecosystem variables in four regions of the Gulf of Maine under two major assumptions: (i) a causal cyclic variable will precede, or lead, its effect variable; e.g., a peak (trough) in the causal variable will come before a peak (trough) in the effect variable. (ii) If physical variables determine regional ecosystem properties, then independent clusters of observations of physical, biological and interaction variables from the same stations will show similar patterns. We use the leading–lagging-strength method to establish leading strength and potential causality, and we use principal component analysis to establish whether regions differ in their ecological characteristics. We found that several relationships for physical and chemical variables were significant and consistent with ‘‘common knowledge’’ of causal relations. In contrast, relationships that included biological variables differed among regions. In spite of these differences, we found that the physical and chemical characteristics of near-shore and pelagic regions of the Gulf of Maine translate into unique biological assemblages and unique physical–biological interactions.
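The peak-precedence idea behind assumption (i) can be illustrated with a generic lagged-correlation scan: the lag at which one series best predicts another indicates which series leads. This is a simplified stand-in for the leading–lagging-strength method named in the abstract, whose exact procedure is not given here.

```python
import math

def lagged_corr(x, y, k):
    """Pearson correlation between x[t] and y[t + k]."""
    a, b = (x[:len(x) - k] if k else x), y[k:]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

def best_lead(x, y, max_lag=12):
    """Lag k maximising corr(x[t], y[t+k]); k > 0 suggests x leads y."""
    return max(range(max_lag + 1), key=lambda k: lagged_corr(x, y, k))
```

For example, if `y` is a delayed copy of a cyclic series `x`, the scan recovers the delay, mirroring how a peak in a putative causal variable precedes the corresponding peak in its effect variable.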
A Parameterized Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms
Bi-level optimisation problems have gained increasing interest in the field
of combinatorial optimisation in recent years. With this paper, we start the
runtime analysis of evolutionary algorithms for bi-level optimisation problems.
We examine two NP-hard problems, the generalised minimum spanning tree problem
(GMST), and the generalised travelling salesman problem (GTSP) in the context
of parameterised complexity.
For the generalised minimum spanning tree problem, we analyse the two
approaches presented by Hu and Raidl (2012), which differ in their chosen
representation of possible solutions, with respect to the number of clusters.
Our results show that a (1+1) EA working with the spanning nodes
representation is not a fixed-parameter evolutionary algorithm for the
problem, whereas the global structure representation enables the problem to
be solved in fixed-parameter time. We present hard instances for each
approach and show that the two approaches are highly complementary by proving
that they solve each other's hard instances very efficiently.
For the generalised travelling salesman problem, we analyse the problem with
respect to the number of clusters in the problem instance. Our results show
that a (1+1) EA working with the global structure representation is a
fixed-parameter evolutionary algorithm for the problem.
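For readers unfamiliar with the (1+1) EA that both analyses build on, here is the generic skeleton on a bitstring fitness function (OneMax as a placeholder). The bi-level analyses plug the problem-specific representations (spanning nodes, global structure) into this same loop; that encoding detail is not modelled in this sketch.

```python
import random

def one_plus_one_ea(fitness, n, max_evals=5000, seed=2):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_evals):
        # Standard bit mutation: flip each bit independently with prob 1/n.
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        fy = fitness(y)
        if fy >= fx:  # keep offspring that are at least as good
            x, fx = y, fy
    return x, fx
```

In parameterised runtime analysis, one asks whether this loop reaches the optimum in expected time f(k)·poly(n) for parameter k (here, the number of clusters); the answer turns out to depend on which representation the bits encode.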
Parallel black-box complexity with tail bounds
We propose a new black-box complexity model for search algorithms evaluating λ search points in parallel. The parallel unary unbiased black-box complexity gives lower bounds on the number of function evaluations every parallel unary unbiased black-box algorithm needs to optimise a given problem. It captures the inertia caused by offspring populations in evolutionary algorithms and the total computational effort in parallel metaheuristics. We present complexity results for LeadingOnes and OneMax. Our main result is a general performance limit: we prove that on every function every λ-parallel unary unbiased algorithm needs at least a certain number of evaluations (a function of problem size and λ) to find any desired target set of up to exponential size, with an overwhelming probability. This yields lower bounds for the typical optimisation time on unimodal and multimodal problems, for the time to find any local optimum, and for the time to even get close to any optimum. The power and versatility of this approach are shown for a wide range of illustrative problems from combinatorial optimisation. Our performance limits can guide parameter choice and algorithm design; we demonstrate the latter by presenting an optimal λ-parallel algorithm for OneMax that uses parallelism most effectively.
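A (1+λ) EA is a simple concrete member of the algorithm class the model covers: each generation creates λ offspring by unary unbiased mutation and the model charges λ evaluations regardless of which offspring wins. The sketch below is a generic illustration on OneMax, not the optimal λ-parallel algorithm presented in the paper.

```python
import random

def one_plus_lambda_ea(n=30, lam=8, max_generations=2000, seed=3):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 0
    for gen in range(max_generations):
        # Evaluate lam offspring "in parallel": the complexity model charges
        # lam evaluations per generation regardless of which one is best.
        offspring = [[1 - b if rng.random() < 1.0 / n else b for b in x]
                     for _ in range(lam)]
        evals += lam
        best = max(offspring, key=sum)
        if sum(best) >= sum(x):
            x = best
        if sum(x) == n:  # OneMax optimum reached
            return gen + 1, evals
    return max_generations, evals
```

The tension the lower bounds formalise is visible here: larger λ reduces the number of generations (wall-clock time with perfect parallelism) but can waste evaluations, since only one of the λ offspring is kept.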
Monetary policy and stability during six periods in US economic history: 1959–2008: a novel, nonlinear monetary policy rule
We investigate the monetary policy of the Federal Reserve Board during six periods in US economic
history 1959–2008. In particular, we examine the Fed’s response to changes in three guiding variables:
inflation, π, unemployment, U, and industrial production, y, during periods with low and high economic
stability. We identify separate responses for the Fed’s change in interest rate depending upon (i) the current
rate, FF, and the guiding variables’ level below or above their average values and (ii) recent movements in
inflation and unemployment. The change in rate, ΔFF, can then be calculated.
We identify policies that both increased and decreased economic stability.
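The structure of such a regime-dependent rule can be sketched as follows. This is a hypothetical illustration of the idea described above, with made-up coefficients; it is not the rule estimated in the paper.

```python
def rate_change(ff, inflation, unemployment,
                ff_bar, pi_bar, u_bar,
                d_inflation, d_unemployment):
    """Illustrative nonlinear policy rule: the response depends on whether
    the current rate and the guiding variables sit above or below their
    period averages (the regime), plus recent movements in inflation and
    unemployment. All coefficients are placeholder assumptions."""
    response = 0.0
    # Regime-dependent response: tighten when inflation is above its
    # average while the rate is below its average ...
    if inflation > pi_bar and ff < ff_bar:
        response += 0.5 * (inflation - pi_bar)
    # ... and ease when unemployment is above its average.
    if unemployment > u_bar:
        response -= 0.5 * (unemployment - u_bar)
    # React to recent movements in inflation and unemployment as well.
    response += 0.25 * d_inflation - 0.25 * d_unemployment
    return response
```

Unlike a linear Taylor-type rule, the threshold conditions make the response switch sign and magnitude across regimes, which is the nonlinearity the abstract refers to.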