    The Cohen-Lenstra Heuristic: Methodology and Results

    In number theory, great efforts have been undertaken to study the Cohen-Lenstra probability measure on the set of all finite abelian $p$-groups. On the other hand, group theorists have studied a probability measure on the set of all partitions, induced by the probability that a randomly chosen $n \times n$ matrix over $\mathbb{F}_p$ lies in the conjugacy class associated with a given partition, as $n \to \infty$. This paper shows that both probability measures are identical. As a consequence, a multitude of results can be transferred from each theory to the other. The paper contains a survey of the known methods for studying the probability measure and of the results obtained so far by both communities.
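
    For orientation, a standard way to write the Cohen-Lenstra measure referred to above (not spelled out in the abstract itself) weights each finite abelian $p$-group $A$ inversely proportionally to the number of its automorphisms:

        P(A) \;=\; \frac{1}{|\mathrm{Aut}(A)|} \prod_{k \ge 1} \bigl(1 - p^{-k}\bigr).

    The dictionary between the two communities identifies a partition $\lambda = (\lambda_1 \ge \lambda_2 \ge \dots)$ with the finite abelian $p$-group $\bigoplus_i \mathbb{Z}/p^{\lambda_i}\mathbb{Z}$, so a measure on partitions can be compared directly with a measure on $p$-groups.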

    The Global Cohen-Lenstra Heuristic

    The Cohen-Lenstra heuristic is a universal principle that assigns to each group a probability indicating how often this group should occur "in nature". The most important, but not the only, applications are sequences of class groups, which behave like random sequences of groups with respect to the so-called Cohen-Lenstra probability measure. So far, it has only been possible to define this probability measure for finite abelian $p$-groups. We prove that it is also possible to define an analogous probability measure on the set of all finite abelian groups when restricting to the $\Sigma$-algebra on this set that is generated by uniform properties, thereby solving a problem that had been open since 1984.
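
    For context (this calculation is standard and not part of the abstract), the naive extension to all finite abelian groups fails because weighting a group $G$ proportionally to $1/|\mathrm{Aut}(G)|$ cannot be normalized:

        \sum_{G \text{ finite abelian}} \frac{1}{|\mathrm{Aut}(G)|}
        \;=\; \prod_{p \text{ prime}} \; \prod_{k \ge 1} \bigl(1 - p^{-k}\bigr)^{-1}
        \;=\; \infty,

    since already $\prod_p (1 - p^{-1})^{-1}$ diverges. This is what forces the restriction to a suitable $\Sigma$-algebra in the result above.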

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, in this work we consider how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. In particular, we analyze the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While both its memory-restricted and its ranking-based black-box complexity for bit strings of length $n$ are, in isolation, only of order $n/\log n$, the combined model does not allow for algorithms faster than linear in $n$, as standard information-theoretic considerations show. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the models considered. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations. Comment: This is the full version of a paper accepted to GECCO 201
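
    To make the object of study concrete, here is a minimal, illustrative sketch (names and parameters are ours, not the paper's) of the generalized OneMax functions and of a simple elitist (1+1)-type heuristic that optimizes them in roughly $n \log n$ steps:

        import random

        def onemax(z, x):
            # Generalized OneMax: number of positions where x agrees with the hidden target z.
            return sum(zi == xi for zi, xi in zip(z, x))

        def one_plus_one_ea(n, seed=0):
            # A (1+1)-type elitist heuristic: keep only the best-so-far string in memory and
            # create one offspring per step by flipping each bit with probability 1/n.
            rng = random.Random(seed)
            z = [rng.randint(0, 1) for _ in range(n)]   # hidden target of OneMax_z
            x = [rng.randint(0, 1) for _ in range(n)]   # best-so-far search point
            fx, iterations = onemax(z, x), 0
            while fx < n:
                y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard bit mutation
                fy = onemax(z, y)
                if fy >= fx:                                    # elitist acceptance
                    x, fx = y, fy
                iterations += 1
            return iterations

        print(one_plus_one_ea(100))   # typically on the order of n log n iterations

    A heuristic like the one above keeps only the best-so-far solution, so it already fits the elitist model mentioned in the abstract; black-box complexity asks how few such queries any algorithm in a given model needs.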

    Sampling Geometric Inhomogeneous Random Graphs in Linear Time

    Real-world networks, like social networks or the internet infrastructure, have structural properties such as large clustering coefficients that can best be described in terms of an underlying geometry. This is why the focus of the literature on theoretical models for real-world networks has shifted from classic models without geometry, such as Chung-Lu random graphs, to modern geometry-based models, such as hyperbolic random graphs. With this paper we contribute to the theoretical analysis of these modern, more realistic random graph models. Instead of studying hyperbolic random graphs directly, we use a generalization that we call geometric inhomogeneous random graphs (GIRGs). Since we ignore constant factors in the edge probabilities, GIRGs are technically simpler (specifically, we avoid hyperbolic cosines) while preserving the qualitative behaviour of hyperbolic random graphs, and we suggest replacing hyperbolic random graphs with this new model in future theoretical studies. We prove the following fundamental structural and algorithmic results on GIRGs. (1) As our main contribution, we provide a sampling algorithm that generates a random graph from our model in expected linear time, improving the best-known sampling algorithm for hyperbolic random graphs by a substantial factor of $O(n^{0.5})$. (2) We establish that GIRGs have clustering coefficients in $\Omega(1)$. (3) We prove that GIRGs have small separators, i.e., it suffices to delete a sublinear number of edges to break the giant component into two large pieces. (4) We show how to compress GIRGs using an expected linear number of bits. Comment: 25 page
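
    As an illustration of the model (not of the paper's linear-time algorithm), here is a naive quadratic-time sampler for one common GIRG parametrization; the concrete constants and the weight distribution are assumptions made only for this sketch:

        import random

        def sample_girg_naive(n, alpha=1.5, beta=2.5, seed=0):
            # Naive O(n^2) sampler for a geometric inhomogeneous random graph in dimension 1:
            # heavy-tailed weights (density decaying roughly like w^(-beta)), positions uniform
            # on the unit torus, edge probability min(1, (w_u * w_v / W / distance)^alpha), alpha > 1.
            rng = random.Random(seed)
            w = [rng.paretovariate(beta - 1) for _ in range(n)]   # power-law weights
            x = [rng.random() for _ in range(n)]                  # positions on the torus
            W = sum(w)
            edges = []
            for u in range(n):
                for v in range(u + 1, n):
                    dist = min(abs(x[u] - x[v]), 1.0 - abs(x[u] - x[v]))   # torus distance
                    p = min(1.0, (w[u] * w[v] / W / max(dist, 1e-12)) ** alpha)
                    if rng.random() < p:
                        edges.append((u, v))
            return edges

        print(len(sample_girg_naive(2000)))

    The quadratic double loop over all vertex pairs is exactly what the paper's expected-linear-time sampling algorithm avoids.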

    Random Sampling with Removal

    Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced sample violates only a small subset of the constraints? The question naturally comes up in situations where the solution subject to the sampled constraints is used as an approximate solution to the original problem. In this case, it makes sense to improve the cost and volatility of the sample solution by removing some of the constraints that appear most restrictive. At the same time, the approximation quality (measured in terms of violated constraints) should remain high. We study random sampling with removal in a generalized, completely abstract setting where we assign to each subset $R$ of the constraints an arbitrary set $V(R)$ of constraints disjoint from $R$; in applications, $V(R)$ corresponds to the constraints violated by the optimal solution subject to only the constraints in $R$. Furthermore, our results are parametrized by the dimension $d$, i.e., we assume that every set $R$ has a subset $B$ of size at most $d$ with the same set of violated constraints. This is the first time that this generalized setting has been studied. In this setting, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample after the removal of $k$ elements. For a large range of values of $k$, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases. We show that this bound for special LP-type problems can be derived in the much more general setting of violator spaces, with very elementary proofs.
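
    The following small simulation (a toy instance constructed here for illustration; it is not taken from the paper) shows the setting for the simplest LP-type problem of dimension $d = 1$: constraints are distinct numbers, the optimal solution of a set is its maximum, and a constraint is violated iff it exceeds that maximum.

        import random

        def average_violators(n=10000, r=100, k=5, trials=500, seed=0):
            # Sample r of the n constraints, remove the k largest sampled values
            # (an arbitrary, adversarial-looking removal rule), and count how many
            # constraints the solution of the reduced sample violates.
            rng = random.Random(seed)
            total = 0
            for _ in range(trials):
                constraints = range(n)                      # any n distinct values work
                sample = sorted(rng.sample(constraints, r))
                reduced_max = sample[-(k + 1)]              # optimum after removing the k largest
                total += sum(1 for c in constraints if c > reduced_max)
            return total / trials

        # roughly (k + 1) * n / r violated constraints on average in this toy model
        print(average_violators())

    Even in this toy model one sees the qualitative effect the paper quantifies: removing $k$ sampled constraints inflates the expected number of violators by roughly a factor of $k + 1$.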