The Cohen-Lenstra Heuristic: Methodology and Results
In number theory, great efforts have been undertaken to study the
Cohen-Lenstra probability measure on the set of all finite abelian p-groups.
On the other hand, group theorists have studied a probability measure on the
set of all partitions, induced by the probability that a randomly chosen
n × n matrix over \FF_p is contained in the conjugacy class associated
with a given partition, for n → ∞.
This paper shows that both probability measures are identical. As a
consequence, a multitude of results can be transferred from each theory to the
other. The paper contains a survey of the known methods for studying the
probability measure and of the results that have been obtained so far by
both communities.
The Global Cohen-Lenstra Heuristic
The Cohen-Lenstra heuristic is a universal principle that assigns to each
group a probability that tells how often this group should occur "in nature".
The most important, but not the only, applications are sequences of class
groups, which behave like random sequences of groups with respect to the
so-called Cohen-Lenstra probability measure.
So far, it was only possible to define this probability measure for finite
abelian p-groups. We prove that it is also possible to define an analogous
probability measure on the set of \emph{all} finite abelian groups when
restricting to the σ-algebra on the set of all finite abelian groups
that is generated by uniform properties, thereby solving a problem that had
been open since 1984.
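Concretely, the Cohen-Lenstra measure weights each finite abelian p-group G proportionally to 1/|Aut(G)|, with normalizing constant ∏_{k≥1}(1 − p^{−k}). The following is a minimal numerical sketch (function names are my own; the automorphism-order formula in terms of the group's partition type is the classical one):

```python
from collections import Counter

def aut_order(p, partition):
    """|Aut(G)| for the abelian p-group G of type `partition`, via the
    classical formula: p to the sum of squares of the conjugate
    partition, times, for each distinct part size with multiplicity m,
    the product prod_{k=1}^{m} (1 - p^(-k))."""
    if not partition:
        return 1.0  # trivial group
    conj = [sum(1 for part in partition if part >= j)
            for j in range(1, max(partition) + 1)]
    value = float(p) ** sum(c * c for c in conj)
    for m in Counter(partition).values():
        for k in range(1, m + 1):
            value *= 1 - p ** (-k)
    return value

def cl_probability(p, partition, terms=60):
    """Cohen-Lenstra probability prod_{k>=1}(1 - p^(-k)) / |Aut(G)|,
    with the infinite product truncated after `terms` factors."""
    c = 1.0
    for k in range(1, terms + 1):
        c *= 1 - p ** (-k)
    return c / aut_order(p, partition)

# sanity checks against small groups
print(round(aut_order(2, [1])))     # |Aut(Z/2)| = 1
print(round(aut_order(2, [1, 1])))  # |Aut((Z/2)^2)| = |GL_2(F_2)| = 6
print(round(aut_order(3, [2])))     # |Aut(Z/9)| = 6
```

Summing `cl_probability` over all partitions (all isomorphism types of abelian p-groups, including the trivial one) gives total mass 1, which is the defining normalization of the measure.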
OneMax in Black-Box Models with Several Restrictions
Black-box complexity studies lower bounds for the efficiency of
general-purpose black-box optimization algorithms such as evolutionary
algorithms and other search heuristics. Different models exist, each one being
designed to analyze a different aspect of typical heuristics such as the memory
size or the variation operators in use. While most of the previous works focus
on one particular such aspect, we consider in this work how the combination of
several algorithmic restrictions influences the black-box complexity. Our
testbed is the class of so-called OneMax functions, a classical set of test
functions that is intimately related to classic coin-weighing problems and to
the board game Mastermind.
We analyze in particular the combined memory-restricted ranking-based
black-box complexity of OneMax for different memory sizes. While its isolated
memory-restricted as well as its ranking-based black-box complexity for bit
strings of length n is only of order n/log n, the combined model does not
allow for algorithms being faster than linear in n, as can be seen by
standard information-theoretic considerations. We show that this linear bound
is indeed asymptotically tight. Similar results are obtained for other memory-
and offspring-sizes. Our results also apply to the (Monte Carlo) complexity of
OneMax in the recently introduced elitist model, in which only the best-so-far
solution can be kept in the memory. Finally, we also provide improved lower
bounds for the complexity of OneMax in the regarded models.
Our result enlivens the quest for natural evolutionary algorithms optimizing
OneMax in o(n log n) iterations.

Comment: This is the full version of a paper accepted to GECCO 201
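For illustration, OneMax counts the one-bits of a bit string, and the classical (1+1) evolutionary algorithm is a natural elitist, memory-one heuristic of the kind discussed above: it keeps only the best-so-far solution and maximizes OneMax in an expected Θ(n log n) iterations. A hypothetical sketch (names and parameters are my own, not from the paper):

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def one_plus_one_ea(n, seed=0):
    """(1+1) EA: keeps only the best-so-far solution in memory (an
    elitist, memory-one heuristic), flips each bit independently with
    probability 1/n, and accepts the offspring if it is not worse.
    Returns the number of iterations until OneMax is maximized; the
    expectation on OneMax is Theta(n log n)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    iterations = 0
    while onemax(x) < n:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # standard bit mutation
        if onemax(y) >= onemax(x):
            x = y
        iterations += 1
    return iterations

print(one_plus_one_ea(50))  # iterations until the all-ones string is found
```

The black-box lower bounds discussed in the abstract say that no algorithm obeying the combined memory and ranking restrictions can beat linear time, whereas unrestricted black-box algorithms achieve order n/log n.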
Random Sampling with Removal
Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced sample violates only a small subset of the constraints?
The question naturally comes up in situations where the solution subject to the sampled constraints is used as an approximate solution to the original problem. In this case, it makes sense to improve cost and volatility of the sample solution by removing some of the constraints that appear most restricting. At the same time, the approximation quality (measured in terms of violated constraints) should remain high.
We study random sampling with removal in a generalized, completely abstract setting where we assign to each subset R of the constraints an arbitrary set V(R) of constraints disjoint from R; in applications, V(R) corresponds to the constraints violated by the optimal solution subject to only the constraints in R. Furthermore, our results are parametrized by the dimension d, i.e., we assume that every set R has a subset B of size at most d with the same set of violated constraints. This is the first time this generalized setting is studied.
In this setting, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample, after the removal of k elements. For a large range of values of k, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases. We show that this bound for LP-type problems can be derived in the much more general setting of violator spaces, and with very elementary proofs.
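The abstract setting can be made concrete with the simplest LP-type example (combinatorial dimension d = 1): minimize x subject to constraints x ≥ a_i. The optimum of a sample R is the largest sampled a_i, V(R) is the set of constraints strictly above it, and removing the k most restrictive sampled constraints enlarges V(R). A hypothetical sketch, not the paper's construction:

```python
import random

def violators(sample_values, all_values):
    """V(R): constraints x >= a violated by the optimum of the sample R,
    i.e. all a strictly larger than the sample maximum. Dimension d = 1
    here: a single element of R (its maximum) determines the violated set."""
    opt = max(sample_values)
    return [a for a in all_values if a > opt]

def sample_with_removal(values, r, k, rng):
    """Draw a sample of size r, remove the k largest (most restrictive)
    sampled constraints, and count the constraints violated by the
    reduced sample's optimum."""
    sample = rng.sample(values, r)
    reduced = sorted(sample)[: r - k]  # drop the k most restrictive
    return len(violators(reduced, values))

rng = random.Random(1)
n, r, k = 10_000, 100, 2
values = list(range(n))
trials = [sample_with_removal(values, r, k, rng) for _ in range(200)]
print(sum(trials) / len(trials))  # roughly n * (k + 1) / (r + 1) on average
```

Even in this toy case the effect the paper studies is visible: each removed constraint pushes the sample optimum down by one order statistic, multiplying the expected number of violators.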
A formula for the probability of the exponents of finite p-groups
In this paper, I will introduce a link between the volume of a finite p-group under the Cohen-Lenstra measure and partitions of a certain type. These partitions will be classified by the output of an algorithm. As a corollary, I will give a formula for the probability that a p-group has a specific exponent.
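Numerically, the exponent of the abelian p-group of partition type λ is p raised to the largest part of λ, so the probability of a given exponent is the total Cohen-Lenstra measure of the partitions with that largest part. A rough sketch that approximates this by truncated enumeration (not the closed formula the paper derives; names are my own):

```python
from collections import Counter

def aut_order(p, partition):
    """|Aut| of the abelian p-group of type `partition` (classical
    formula: p to the sum of squares of the conjugate partition, times,
    for each part multiplicity m, prod_{k=1}^{m} (1 - p^(-k)))."""
    if not partition:
        return 1.0
    conj = [sum(1 for part in partition if part >= j)
            for j in range(1, max(partition) + 1)]
    value = float(p) ** sum(c * c for c in conj)
    for m in Counter(partition).values():
        for k in range(1, m + 1):
            value *= 1 - p ** (-k)
    return value

def partitions(n, max_part):
    """All partitions of n with parts at most max_part, largest part first."""
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

def prob_exponent(p, e, max_size=25):
    """Approximate Cohen-Lenstra probability that the exponent is p^e:
    total measure of partitions whose largest part is exactly e
    (enumeration truncated at groups of order p^max_size)."""
    c = 1.0
    for k in range(1, 61):
        c *= 1 - p ** (-k)  # normalizing constant prod (1 - p^(-k))
    total = 0.0
    for n in range(e, max_size + 1):
        for lam in partitions(n, e):
            if lam[0] == e:
                total += c / aut_order(p, lam)
    return total

print(prob_exponent(2, 1))  # P(exponent 2), about 0.34
print(prob_exponent(2, 2))  # P(exponent 4)
```

For e = 1 this sum runs over the elementary abelian groups only, so it reduces to the normalizing constant times Σ_{m≥1} 1/|GL_m(\FF_p)|.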