
    Runtime Distributions and Criteria for Restarts

    Randomized algorithms sometimes employ a restart strategy: after a certain number of steps, the current computation is aborted and restarted with a new, independent random seed. In some cases, this results in an improved overall expected runtime. This work introduces properties of the underlying runtime distribution that determine whether restarts are advantageous. The most commonly used probability distributions admit a scale and a location parameter. Location parameters shift the density function to the right, while scale parameters affect the spread of the distribution. It is shown that, for all distributions, scale parameters do not influence the usefulness of restarts and that location parameters have only a limited influence. This result simplifies the analysis of the usefulness of restarts. The most important runtime probability distributions are the log-normal, the Weibull, and the Pareto distribution; in this work, these distributions are analyzed for the usefulness of restarts. Second, a condition for the optimal restart time (if it exists) is provided, and the log-normal, the Weibull, and the generalized Pareto distribution are analyzed in this respect. Moreover, it is shown that the optimal restart time is likewise not influenced by scale parameters and that the influence of location parameters is only linear.
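    A minimal sketch of this criterion in practice (not the paper's code; the lognormal parameters and the scipy-based implementation are illustrative assumptions): for a fixed-cutoff strategy with restart time t, the expected total runtime is E[T_t] = (1/F(t)) * integral_0^t (1 - F(s)) ds, and restarts are useful whenever some cutoff pushes this below the unrestarted mean.

```python
# Illustrative sketch (not the paper's code): expected runtime of a
# fixed-cutoff restart strategy, evaluated numerically for a lognormal.
import numpy as np
from scipy import stats
from scipy.integrate import quad

def expected_runtime_with_restarts(dist, t):
    """E[T_t] = (1/F(t)) * integral_0^t (1 - F(s)) ds for cutoff t."""
    tail_integral, _ = quad(lambda s: dist.sf(s), 0.0, t)
    return tail_integral / dist.cdf(t)

# Heavy-tailed example: a lognormal with a large shape parameter (sigma = 2).
dist = stats.lognorm(s=2.0, scale=1.0)
cutoffs = np.linspace(0.1, 20.0, 200)
expected = [expected_runtime_with_restarts(dist, t) for t in cutoffs]
best = cutoffs[int(np.argmin(expected))]
print(f"mean without restarts: {dist.mean():.2f}")
print(f"best cutoff ~{best:.2f}, expected runtime with restarts: {min(expected):.2f}")
```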

    Gaussian Bounds for Noise Correlation of Functions

    In this paper we derive tight bounds on the expected value of products of low-influence functions defined on correlated probability spaces. The proofs are based on extending Fourier theory to an arbitrary number of correlated probability spaces, on a generalization of an invariance principle recently obtained with O'Donnell and Oleszkiewicz for multilinear polynomials with low influences and bounded degree, and on properties of multi-dimensional Gaussian distributions. The results derived here have a number of applications to the theory of social choice in economics, to hardness of approximation in computer science, and to additive combinatorics problems.

    The Potential of Restarts for ProbSAT

    This work analyses the potential of restarts for probSAT, a quite successful algorithm for k-SAT, by estimating its runtime distributions on random 3-SAT instances that are close to the phase transition. We estimate an optimal restart time from empirical data, reaching a potential speedup factor of 1.39. Calculating restart times from fitted probability distributions reduces this factor to a maximum of 1.30. A spin-off result is that the Weibull distribution approximates the runtime distribution well for over 93% of the used instances. A machine learning pipeline is presented to compute a restart time for a fixed-cutoff strategy to exploit this potential; its main components are a random forest for determining the distribution type and a neural network for the distribution's parameters. With the presented approach, probSAT performs statistically significantly better than with Luby's restart strategy and with the policy without restarts. The approach is particularly advantageous on hard problems.
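    A minimal sketch of the fitted-distribution route, using synthetic Weibull runtimes (this is not the paper's pipeline, which learns the distribution type and parameters from instance features): fit a Weibull to observed runtimes, then choose the fixed cutoff that minimizes the predicted expected runtime.

```python
# Illustrative sketch with synthetic data: fit a Weibull to observed runtimes
# and pick the fixed-cutoff restart time that minimizes the predicted
# expected runtime under the fitted distribution.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(0)
runtimes = stats.weibull_min(c=0.7, scale=50.0).rvs(size=500, random_state=rng)

# Maximum-likelihood Weibull fit with the location fixed at zero.
shape, loc, scale = stats.weibull_min.fit(runtimes, floc=0.0)
fitted = stats.weibull_min(c=shape, loc=loc, scale=scale)

def expected_with_cutoff(dist, t):
    """Expected total runtime of a fixed-cutoff restart strategy at cutoff t."""
    tail, _ = quad(dist.sf, 0.0, t)
    return tail / dist.cdf(t)

cutoffs = np.linspace(1.0, 300.0, 300)
best_t = min(cutoffs, key=lambda t: expected_with_cutoff(fitted, t))
print(f"fitted shape {shape:.2f}; restart cutoff ~{best_t:.1f}")
```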

    Neural Networks for Predicting Algorithm Runtime Distributions

    Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
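    A minimal sketch of the joint-likelihood idea, assuming PyTorch, a lognormal RTD family, and synthetic instance features (none of these choices are claimed to match the paper's setup): a small network predicts the distribution parameters per instance and is trained by minimizing the negative log-likelihood of observed runtimes.

```python
# Illustrative sketch: a network maps instance features to (mu, log sigma) of a
# lognormal RTD and is trained by minimizing the negative log-likelihood of
# observed runtimes, so both parameters are learned jointly.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_instances, n_features, runs_per_instance = 200, 8, 10

features = torch.randn(n_instances, n_features)
true_mu = features[:, 0] * 0.5                      # synthetic ground truth
true_sigma = 0.5 + 0.2 * features[:, 1].abs()
runtimes = torch.exp(true_mu[:, None] + true_sigma[:, None]
                     * torch.randn(n_instances, runs_per_instance))

net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(500):
    out = net(features)                             # predict mu and log sigma
    mu, sigma = out[:, 0:1], out[:, 1:2].exp()
    dist = torch.distributions.LogNormal(mu, sigma)
    nll = -dist.log_prob(runtimes).mean()           # joint likelihood objective
    opt.zero_grad()
    nll.backward()
    opt.step()
print(f"final NLL: {nll.item():.3f}")
```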

    Correlation Decay and Tractability of CSPs

    The algebraic dichotomy conjecture of Bulatov, Krokhin and Jeavons yields an elegant characterization of the complexity of constraint satisfaction problems (CSPs). Roughly speaking, the characterization asserts that a CSP L is tractable if and only if there exist certain non-trivial operations, known as polymorphisms, that combine solutions to L to create new ones. In this work, we study the dynamical system associated with repeated applications of a polymorphism to a distribution over assignments. Specifically, we exhibit a correlation decay phenomenon that makes two variables, or groups of variables, that are not perfectly correlated become independent after repeated applications of a polymorphism. We show that this correlation decay phenomenon can be utilized in designing algorithms for CSPs by exhibiting two applications: 1. a simple randomized algorithm to solve linear equations over a prime field, whose analysis crucially relies on correlation decay; 2. a sufficient condition for the simple linear programming relaxation of a 2-CSP to be sound (i.e., have no integrality gap) on a given instance.
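    An illustrative sketch of the correlation decay phenomenon, under assumptions not taken from the paper (the affine polymorphism f(a, b, c) = a - b + c mod p and an empirical mutual-information measure): repeatedly applying the polymorphism to independent samples of a correlated, but not perfectly correlated, pair drives the pair toward independence.

```python
# Illustrative demonstration (not the paper's algorithm): apply the affine
# polymorphism f(a, b, c) = a - b + c (mod p) to independent samples of a
# correlated pair (X, Y) and watch their empirical mutual information decay.
import numpy as np

p, n = 5, 200_000
rng = np.random.default_rng(1)

def mutual_information(x, y, p):
    """Empirical mutual information (in nats) of two arrays over Z_p."""
    joint = np.zeros((p, p))
    np.add.at(joint, (x, y), 1.0)
    joint /= joint.sum()
    px, py = joint.sum(axis=1, keepdims=True), joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

# Initial distribution: Y equals X with probability 0.9, otherwise uniform.
x = rng.integers(p, size=n)
noise = rng.integers(p, size=n)
y = np.where(rng.random(n) < 0.9, x, noise)

for step in range(4):
    print(f"step {step}: I(X;Y) ~ {mutual_information(x, y, p):.4f}")
    idx = rng.integers(n, size=(3, n))              # three independent samples
    x = (x[idx[0]] - x[idx[1]] + x[idx[2]]) % p     # apply the polymorphism
    y = (y[idx[0]] - y[idx[1]] + y[idx[2]]) % p
```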

    Effects of the lack of selective pressure on the expected run-time distribution in genetic programming

    D. F. Barrero, M. D. R-Moreno, B. Castano, and D. Camacho, "Effects of the lack of selective pressure on the expected run-time distribution in genetic programming", in IEEE Congress on Evolutionary Computation, CEC 2013, pp. 1748-1755. Run-time analysis is a powerful tool for analyzing algorithms. It focuses on the time an algorithm requires to find a solution, the expected run-time, which is one of the most relevant algorithm attributes. Previous research has associated the expected run-time in GP with the lognormal distribution. In this paper we provide additional evidence in that regard and show how the algorithm's parametrization may change the resulting run-time distribution. In particular, we explore the influence of selective pressure on the run-time distribution in tree-based GP, finding that, at least in two problem instances, the lack of selective pressure generates an expected run-time distribution well described by the Weibull probability distribution. This work has been partly supported by the Spanish Ministry of Science and Education under project ABANT (TIN2010-19872).
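    A minimal sketch of the distribution-fitting comparison, using synthetic Weibull data rather than GP run-times: fit both candidate families by maximum likelihood and compare them by log-likelihood and AIC; which family wins can shift with the algorithm's parametrization.

```python
# Illustrative sketch with synthetic data: compare lognormal and Weibull
# maximum-likelihood fits to a run-time sample via log-likelihood and AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
runtimes = stats.weibull_min(c=1.5, scale=100.0).rvs(size=400, random_state=rng)

for name, family in [("lognormal", stats.lognorm), ("Weibull", stats.weibull_min)]:
    params = family.fit(runtimes, floc=0.0)         # fix location at zero
    loglik = family.logpdf(runtimes, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * loglik        # floc fixed, not estimated
    print(f"{name}: log-likelihood {loglik:.1f}, AIC {aic:.1f}")
```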

    Noise stability of functions with low influences: invariance and optimality

    In this paper we study functions with low influences on product probability spaces. The analysis of boolean functions with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly "smoothed"; this extension is essential for our applications to "noise stability"-type problems. In particular, as applications of the invariance principle we prove two conjectures: the "Majority Is Stablest" conjecture from theoretical computer science, which was the original motivation for this work, and the "It Ain't Over Till It's Over" conjecture from social choice theory.
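    A minimal sketch of the noise-stability quantity at stake, with illustrative parameters: the noise stability Stab_rho(f) = E[f(x) f(y)], where y agrees with x coordinatewise with correlation rho; for majority it approaches (2/pi) arcsin(rho) as the number of bits grows, the value that "Majority Is Stablest" identifies as optimal among balanced low-influence functions.

```python
# Illustrative sketch: estimate the noise stability of majority on +/-1 bits
# and compare it with the asymptotic value (2/pi) * arcsin(rho).
import numpy as np

rng = np.random.default_rng(3)
n_bits, n_samples, rho = 201, 50_000, 0.6

x = rng.choice([-1, 1], size=(n_samples, n_bits))
flip = rng.random((n_samples, n_bits)) < (1 - rho) / 2   # flip prob (1-rho)/2
y = np.where(flip, -x, x)                                # rho-correlated copy

def maj(z):
    """Majority of each row of +/-1 bits (n_bits is odd, so never zero)."""
    return np.sign(z.sum(axis=1))

stab = np.mean(maj(x) * maj(y))
print(f"empirical Stab_rho(Maj): {stab:.3f}")
print(f"(2/pi) arcsin(rho):      {2 / np.pi * np.arcsin(rho):.3f}")
```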

    Evidence for Long-Tails in SLS Algorithms
