Significance-based Estimation-of-Distribution Algorithms
Estimation-of-distribution algorithms (EDAs) are randomized search heuristics
that maintain a probabilistic model of the solution space. This model is
updated from iteration to iteration, based on the quality of the solutions
sampled according to the model. As previous works show, this short-term
perspective can lead to erratic updates of the model, in particular, to
bit-frequencies approaching a random boundary value. Such frequencies take long
to be moved back to the middle range, leading to significant performance
losses.
In order to overcome this problem, we propose a new EDA based on the classic
compact genetic algorithm (cGA) that takes into account a longer history of
samples and updates its model only with respect to information which it
classifies as statistically significant. We prove that this significance-based
compact genetic algorithm (sig-cGA) optimizes the commonly regarded benchmark
functions OneMax, LeadingOnes, and BinVal all in time O(n log n), a result
shown for no other EDA or evolutionary algorithm so far.
For the recently proposed scGA -- an EDA that tries to prevent erratic model
updates by imposing a bias to the uniformly distributed model -- we prove that
it optimizes OneMax only in a time exponential in its hypothetical population
size. Similarly, we show that the convex search algorithm cannot optimize
OneMax in polynomial time.
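As a concrete illustration of the short-term model update discussed above, here is a minimal Python sketch of the classic cGA on OneMax. The population size K, the border values 1/n and 1 - 1/n, and all names are standard textbook conventions used for illustration, not code from the paper.

```python
import random

def cga_step(freqs, fitness, K):
    """One iteration of the compact GA (cGA): sample two bit strings from
    the frequency vector, then shift each frequency by 1/K toward the bit
    of the better sample wherever the two samples differ."""
    n = len(freqs)
    x = [1 if random.random() < p else 0 for p in freqs]
    y = [1 if random.random() < p else 0 for p in freqs]
    winner, loser = (x, y) if fitness(x) >= fitness(y) else (y, x)
    for i in range(n):
        if winner[i] != loser[i]:
            step = 1.0 / K if winner[i] == 1 else -1.0 / K
            # Frequencies are capped at the borders 1/n and 1 - 1/n.
            freqs[i] = min(1 - 1 / n, max(1 / n, freqs[i] + step))
    return freqs

random.seed(1)
n, K = 20, 10
freqs = [0.5] * n
for _ in range(2000):
    cga_step(freqs, sum, K)  # sum of bits = OneMax fitness
```

With a small K, individual frequencies take large 1/K steps and can wander toward the borders before selection has gathered reliable evidence, which is exactly the erratic behavior the sig-cGA counters by updating only on statistically significant information.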
A Simplified Run Time Analysis of the Univariate Marginal Distribution Algorithm on LeadingOnes
With elementary means, we prove a stronger run time guarantee for the
univariate marginal distribution algorithm (UMDA) optimizing the LeadingOnes
benchmark function in the desirable regime with low genetic drift. If the
population size is at least quasilinear, then, with high probability, the UMDA
samples the optimum within a number of iterations that is linear in the problem
size divided by the logarithm of the UMDA's selection rate. This improves over
the previous guarantee, obtained by Dang and Lehre (2015) via the deep
level-based population method, both in terms of the run time and by
demonstrating further run time gains from small selection rates. With
arguments similar to those of our upper-bound analysis, we also obtain the
first lower bound for this problem: under similar assumptions, we prove that a
bound matching our upper bound up to constant factors holds with high
probability.
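A minimal sketch of the UMDA as described above, run on LeadingOnes. The parameter choices (100 samples, 25 selected) and the caps at the borders 1/n and 1 - 1/n are illustrative assumptions, not the exact setting of the analysis.

```python
import random

def leading_ones(x):
    """Length of the longest prefix of one-bits."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def umda(fitness, n, lam, mu, max_iters=3000):
    """Sample lam individuals from the product distribution, keep the mu
    best, and set each marginal to its empirical frequency among the
    selected individuals, capped at the borders 1/n and 1 - 1/n."""
    freqs = [0.5] * n
    for _ in range(max_iters):
        pop = [[1 if random.random() < p else 0 for p in freqs]
               for _ in range(lam)]
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            return pop[0]
        selected = pop[:mu]
        for i in range(n):
            p = sum(ind[i] for ind in selected) / mu
            freqs[i] = min(1 - 1 / n, max(1 / n, p))
    return pop[0]

random.seed(0)
best = umda(leading_ones, n=30, lam=100, mu=25)
```

The selection rate in the abstract corresponds to lam/mu here; a population size of at least quasilinear order keeps genetic drift low, which is the regime the run time guarantee assumes.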
A Flexible Evolutionary Algorithm With Dynamic Mutation Rate Archive
We propose a new, flexible approach for dynamically maintaining successful
mutation rates in evolutionary algorithms using k-bit flip mutations. The
algorithm adds successful mutation rates to an archive of promising rates that
are favored in subsequent steps. Rates expire when their number of unsuccessful
trials has exceeded a threshold, while rates currently not present in the
archive can enter it in two ways: (i) via user-defined minimum selection
probabilities for rates combined with a successful step or (ii) via a
stagnation detection mechanism increasing the value of k for a promising rate after
the current bit-flip neighborhood has been explored with high probability. For
the minimum selection probabilities, we suggest different options, including
heavy-tailed distributions.
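The archive mechanism described above can be sketched roughly as follows. The expiry threshold, the minimum selection probability p_min, and the uniform exploratory rate below are simplifications chosen for illustration, not the paper's exact rules.

```python
import random

def flip_k_bits(x, k):
    """Return a copy of x with exactly k distinct positions flipped."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def archive_ea(fitness, n, expire_after=50, p_min=0.05, max_iters=5000):
    """(1+1)-style EA with a dynamic archive of mutation strengths k.
    A successful k (re)enters the archive with its failure counter reset;
    a k expires after expire_after consecutive unsuccessful trials; with
    probability p_min (or when the archive is empty) a uniformly random
    exploratory k is tried instead."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    archive = {1: 0}  # k -> consecutive unsuccessful trials
    for _ in range(max_iters):
        if archive and random.random() > p_min:
            k = random.choice(list(archive))
        else:
            k = random.randint(1, n)  # exploratory mutation strength
        y = flip_k_bits(x, k)
        fy = fitness(y)
        if fy > fx:
            x, fx = y, fy
            archive[k] = 0
        elif k in archive:
            archive[k] += 1
            if archive[k] > expire_after:
                del archive[k]
    return x, fx

random.seed(3)
x, fx = archive_ea(sum, 40)  # OneMax: count of one-bits
```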
We conduct a rigorous runtime analysis of the flexible evolutionary algorithm
on the OneMax and Jump functions, on general unimodal functions, on minimum
spanning trees, and on a class of hurdle-like functions with varying hurdle
width that benefit particularly from the archive of promising mutation rates.
In all cases, the runtime bounds are close to or even outperform the best known
results for both stagnation detection and heavy-tailed mutations.
Intuitive Analyses via Drift Theory
Humans are bad with probabilities, and the analysis of randomized algorithms
offers many pitfalls for the human mind. Drift theory is an intuitive tool for
reasoning about random processes. It allows turning expected stepwise changes
into expected first-hitting times. While drift theory is used extensively by
the community studying randomized search heuristics, it has seen hardly any
applications outside of this field, in spite of many research questions which
can be formulated as first-hitting times.
We state the most useful drift theorems and demonstrate their use for various
randomized processes, including approximating vertex cover, the coupon
collector process, a random sorting algorithm, and the Moran process. Finally,
we consider processes without expected stepwise change and give a lemma based
on drift theory applicable in such scenarios without drift. We use this tool
for the analysis of the gambler's ruin process, for a coloring algorithm, for
an algorithm for 2-SAT, and for a version of the Moran process without bias.
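The coupon collector process mentioned above gives a compact illustration of how a drift argument turns an expected stepwise change into an expected first-hitting time. The simulation below checks the multiplicative drift bound n(1 + ln n) empirically; it is a sketch of the idea, not the formal proof.

```python
import math
import random

def coupon_collector(n):
    """Draw uniform random coupons until all n types were seen."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

random.seed(5)
n, trials = 50, 2000
avg = sum(coupon_collector(n) for _ in range(trials)) / trials

# Drift view: with X_t the number of missing coupons, one draw removes a
# missing coupon with probability X_t / n, so E[X_{t+1} | X_t] = X_t * (1 - 1/n).
# The multiplicative drift theorem then yields E[T] <= n * (1 + ln n).
bound = n * (1 + math.log(n))
```

The empirical average lands close to the classical value n * H_n and below the drift bound, showing how little process-specific work the drift theorem needs.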
A Spectral Independence View on Hard Spheres via Block Dynamics
The hard-sphere model is one of the most extensively studied models in statistical physics. It describes the continuous distribution of spherical particles, governed by hard-core interactions. An important quantity of this model is the normalizing factor of this distribution, called the partition function. We propose a Markov chain Monte Carlo algorithm for approximating the grand-canonical partition function of the hard-sphere model in d dimensions. Up to a fugacity of λ < e/2^d, the runtime of our algorithm is polynomial in the volume of the system. This covers the entire known real-valued regime for the uniqueness of the Gibbs measure.
Key to our approach is to define a discretization that closely approximates the partition function of the continuous model. This results in a discrete hard-core instance whose size is exponential in that of the initial hard-sphere model. Our approximation bound follows directly from the correlation decay threshold of an infinite regular tree with degree equal to the maximum degree of our discretization. To cope with the exponential blow-up of the discrete instance, we use clique dynamics, a Markov chain that was recently introduced in the setting of abstract polymer models. We prove rapid mixing of clique dynamics up to the tree threshold of the univariate hard-core model. This is achieved by relating clique dynamics to block dynamics and adapting the spectral expansion method, which was recently used to bound the mixing time of Glauber dynamics within the same parameter regime.
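For intuition on the discrete side, here is a sketch of plain single-site Glauber dynamics for a hard-core instance on a small graph. The paper's algorithm uses clique/block dynamics on a much larger discretized instance, so this is only the simplest relative of that chain, with an illustrative graph and fugacity.

```python
import random

def glauber_hardcore(adj, fugacity, steps, seed=0):
    """Single-site Glauber dynamics for the discrete hard-core model: the
    state is an independent set; each step picks a vertex uniformly and
    resamples it, occupying it with probability fugacity/(1+fugacity) if
    no neighbor is occupied, and vacating it otherwise."""
    rng = random.Random(seed)
    occupied = {v: False for v in adj}
    p_occ = fugacity / (1 + fugacity)
    for _ in range(steps):
        v = rng.choice(list(adj))
        if any(occupied[u] for u in adj[v]):
            occupied[v] = False
        else:
            occupied[v] = rng.random() < p_occ
    return occupied

# Hard-core model on a 4-cycle: the chain only ever visits independent sets.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state = glauber_hardcore(adj, fugacity=1.0, steps=10000)
```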
The Impact of Geometry on Monochrome Regions in the Flip Schelling Process
Schelling’s classical segregation model gives a coherent explanation for the widespread phenomenon of residential segregation. We introduce an agent-based saturated open-city variant, the Flip Schelling Process (FSP), in which agents, placed on a graph, have one out of two types and, based on the predominant type in their neighborhood, decide whether to change their type; this mimics a new agent arriving as soon as another agent leaves the vertex.
We investigate the probability that an edge {u,v} is monochrome, i.e., that both vertices u and v have the same type in the FSP, and we provide a general framework for analyzing the influence of the underlying graph topology on residential segregation. In particular, for two adjacent vertices, we show that a highly decisive common neighborhood, i.e., a common neighborhood where the absolute value of the difference between the number of vertices with different types is high, supports segregation and, moreover, that large common neighborhoods are more decisive.
As an application, we study the expected behavior of the FSP on two common random graph models with and without geometry: (1) For random geometric graphs, we show that the existence of an edge {u,v} makes a highly decisive common neighborhood for u and v more likely. Based on this, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the FSP is at least 1/2 + c. (2) For Erdős-Rényi graphs we show that large common neighborhoods are unlikely and that the expected fraction of monochrome edges after the FSP is at most 1/2 + o(1). Our results indicate that the cluster structure of the underlying graph has a significant impact on the obtained segregation strength.
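A minimal simulation sketch of a one-round flip process in the spirit of the FSP, measuring the fraction of monochrome edges. The tie-breaking rule and the cycle graph are illustrative assumptions, not the paper's exact model.

```python
import random

def flip_round(adj, types):
    """One synchronous round: every vertex switches to the strictly
    predominant type among its neighbors; ties keep the own type."""
    new_types = {}
    for v, nbrs in adj.items():
        ones = sum(types[u] for u in nbrs)
        zeros = len(nbrs) - ones
        if ones > zeros:
            new_types[v] = 1
        elif zeros > ones:
            new_types[v] = 0
        else:
            new_types[v] = types[v]
    return new_types

def monochrome_fraction(adj, types):
    """Fraction of edges {u, v} whose endpoints share a type."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    return sum(types[u] == types[v] for u, v in edges) / len(edges)

random.seed(7)
n = 100
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}  # cycle graph
types = {v: random.randint(0, 1) for v in adj}           # uniform initial types
after = flip_round(adj, types)
frac = monochrome_fraction(adj, after)
```

Swapping the cycle for a random geometric graph or an Erdős-Rényi graph in the same harness is a quick way to observe the geometry effect the abstract describes.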