
    Focused Local Search for Random 3-Satisfiability

    A local search algorithm solving an NP-complete optimisation problem can be viewed as a stochastic process moving in an 'energy landscape' towards eventually finding an optimal solution. For the random 3-satisfiability problem, the heuristic of focusing the local moves on the presently unsatisfied clauses is known to be very effective: the time to solution has been observed to grow only linearly in the number of variables, for a given clauses-to-variables ratio α sufficiently far below the critical satisfiability threshold α_c ≈ 4.27. We present numerical results on the behaviour of three focused local search algorithms for this problem, considering in particular the characteristics of a focused variant of the simple Metropolis dynamics. We estimate the optimal value of the "temperature" parameter η for this algorithm, such that its linear-time regime extends as close to α_c as possible. Similar parameter optimisation is performed also for the well-known WalkSAT algorithm and for the less studied, but very well performing, Focused Record-to-Record Travel method. We observe that with an appropriate choice of parameters, the linear-time regime for each of these algorithms seems to extend well into ratios α > 4.2 -- much further than has so far been generally assumed. We discuss the statistics of solution times for the algorithms, relate their performance to the process of "whitening", and present some conjectures on the shape of their computational phase diagrams. Comment: 20 pages, lots of figures
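    The abstract describes focused local search with a Metropolis-style acceptance rule governed by a "temperature" parameter η. Below is a minimal Python sketch of such a focused Metropolis search, assuming the common formulation in which a flip that increases the number of unsatisfied clauses is accepted with probability η raised to that increase; all function and parameter names are illustrative and not taken from the paper.

```python
# Minimal sketch of a focused Metropolis-style local search for random 3-SAT.
# Assumption: a flip that increases the number of unsatisfied clauses is
# accepted with probability eta**increase; names are illustrative only.
import random

def unsat_clauses(clauses, assignment):
    """Return the indices of clauses not satisfied by the current assignment."""
    return [i for i, clause in enumerate(clauses)
            if not any((lit > 0) == assignment[abs(lit)] for lit in clause)]

def focused_metropolis(clauses, n_vars, eta=0.3, max_flips=100_000, seed=0):
    rng = random.Random(seed)
    # Random initial assignment; index 0 is unused so literals map directly.
    assignment = [None] + [rng.random() < 0.5 for _ in range(n_vars)]
    for _ in range(max_flips):
        unsat = unsat_clauses(clauses, assignment)
        if not unsat:
            return assignment  # all clauses satisfied
        # Focusing: only variables of a currently unsatisfied clause are candidates.
        clause = clauses[rng.choice(unsat)]
        var = abs(rng.choice(clause))
        before = len(unsat)
        assignment[var] = not assignment[var]
        after = len(unsat_clauses(clauses, assignment))
        delta = after - before
        # Metropolis-style acceptance with "temperature" parameter eta.
        if delta > 0 and rng.random() >= eta ** delta:
            assignment[var] = not assignment[var]  # reject the worsening flip
    return None  # no solution found within the flip budget

# Example: (x1 or x2 or not x3) and (not x1 or x3 or x2)
clauses = [(1, 2, -3), (-1, 3, 2)]
print(focused_metropolis(clauses, n_vars=3))
```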

    Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

    The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, from deep neural networks to solvers for hard combinatorial problems. The resulting algorithm configuration (AC) problem has attracted much attention from the machine learning community. However, the proper evaluation of new AC procedures is hindered by two key hurdles. First, AC benchmarks are hard to set up. Second and even more significantly, they are computationally expensive: a single run of an AC procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given AC benchmark scenario. One common workaround is to optimize cheap-to-evaluate artificial benchmark functions (e.g., Branin) instead of actual algorithms; however, these have different properties than realistic AC problems. Here, we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original AC problem: replacing expensive benchmarks by surrogate benchmarks constructed from AC benchmarks. These surrogate benchmarks approximate the response surface corresponding to true target algorithm performance using a regression model, and the original and surrogate benchmark share the same (hyper-)parameter space. In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures. We show that our surrogate benchmarks capture important overall characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
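    As a rough illustration of the surrogate-benchmark idea, the sketch below fits a regression model to logged (configuration, performance) pairs and uses its predictions as a cheap stand-in objective. The random-forest model and the toy data are assumptions made for illustration, not the paper's exact setup.

```python
# Minimal sketch of a surrogate benchmark: fit a regression model on logged
# (configuration, performance) pairs and use its predictions as a cheap
# objective. Model choice and data below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy log of past target-algorithm runs: two numeric parameters -> runtime.
configs = rng.uniform(0.0, 1.0, size=(500, 2))
runtimes = 10.0 * (configs[:, 0] - 0.3) ** 2 + 5.0 * configs[:, 1] + rng.normal(0, 0.1, 500)

# The surrogate shares the original (hyper-)parameter space but replaces
# expensive target-algorithm runs with model predictions.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(configs, runtimes)

def surrogate_benchmark(config):
    """Cheap objective: predicted runtime for a candidate configuration."""
    return float(surrogate.predict(np.asarray(config).reshape(1, -1))[0])

# An AC or hyperparameter-optimization procedure can now be evaluated against
# this function instead of the costly real benchmark.
print(surrogate_benchmark([0.3, 0.0]))
```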

    08051 Abstracts Collection -- Theory of Evolutionary Algorithms

    From Jan. 27, 2008 to Feb. 1, 2008, the Dagstuhl Seminar 08051 "Theory of Evolutionary Algorithms" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Effects of the lack of selective pressure on the expected run-time distribution in genetic programming

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. D. F. Barrero, M. D. R-Moreno, B. Castano, and D. Camacho, "Effects of the lack of selective pressure on the expected run-time distribution in genetic programming", in IEEE Congress on Evolutionary Computation, CEC 2013, pp. 1748-1755.
    Run-time analysis is a powerful tool for analyzing algorithms. It focuses on the time an algorithm requires to find a solution, the expected run-time, which is one of the most relevant attributes of an algorithm. Previous research has associated the expected run-time in GP with the lognormal distribution. In this paper we provide additional evidence in that regard and show how the algorithm parametrization may change the resulting run-time distribution. In particular, we explore the influence of selective pressure on the run-time distribution in tree-based GP, finding that, at least in two problem instances, the lack of selective pressure generates an expected run-time distribution well described by the Weibull probability distribution. This work has been partly supported by the Spanish Ministry of Science and Education under project ABANT (TIN2010-19872).
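    The comparison of candidate run-time distributions described above can be sketched as fitting both families to a sample of run times. The snippet below uses SciPy's weibull_min and lognorm fits on synthetic data, purely as an illustration of the methodology rather than a reproduction of the paper's experiments.

```python
# Minimal sketch of comparing lognormal and Weibull fits to a sample of
# run times (e.g., evaluations to success of independent GP runs).
# The synthetic data below is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder for measured run times of independent runs.
run_times = rng.weibull(a=1.5, size=300) * 1000.0

# Fixing loc=0 keeps each fit two-parameter, as is common for run-time data.
weibull_params = stats.weibull_min.fit(run_times, floc=0)
lognorm_params = stats.lognorm.fit(run_times, floc=0)

# Compare goodness of fit via the Kolmogorov-Smirnov statistic for each family.
ks_weibull = stats.kstest(run_times, 'weibull_min', args=weibull_params)
ks_lognorm = stats.kstest(run_times, 'lognorm', args=lognorm_params)
print("Weibull   KS:", ks_weibull.statistic)
print("Lognormal KS:", ks_lognorm.statistic)
```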

    Characterizing CDMA downlink feasibility via effective interference

    This paper models and analyses downlink power assignment feasibility in Code Division Multiple Access (CDMA) mobile networks. By discretizing the area into small segments, the power requirements are characterized via a matrix representation that separates user and system characteristics. We obtain a closed-form analytical expression of the so-called Perron-Frobenius eigenvalue of that matrix, which provides a quick assessment of the feasibility of the power assignment for each distribution of calls over the segments. Although the obtained relation is non-linear, it essentially provides an effective interference characterisation of downlink feasibility. Our results allow for a fast evaluation of outage and blocking probabilities, and enable a quick evaluation of feasibility that may be used for Call Acceptance Control.
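    A minimal sketch of the kind of feasibility test suggested by this abstract: compute the Perron-Frobenius eigenvalue (spectral radius) of a nonnegative interference matrix and declare the power assignment feasible when it stays below 1. The matrix entries below are made up for illustration and do not come from the paper's model.

```python
# Minimal sketch: Perron-Frobenius eigenvalue of a nonnegative interference
# matrix as a downlink feasibility indicator. Example matrix is illustrative.
import numpy as np

def perron_frobenius_eigenvalue(A):
    """Spectral radius of a nonnegative matrix (its Perron-Frobenius eigenvalue)."""
    return max(abs(np.linalg.eigvals(A)))

# Toy 3-segment interference matrix (entries would come from path gains,
# target signal-to-interference ratios, and the call distribution).
A = np.array([
    [0.00, 0.20, 0.10],
    [0.15, 0.00, 0.25],
    [0.10, 0.30, 0.00],
])

rho = perron_frobenius_eigenvalue(A)
print(f"Perron-Frobenius eigenvalue: {rho:.3f}")
print("Power assignment feasible" if rho < 1.0 else "Power assignment infeasible")
```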

    Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform-at-random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
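    The initialization-and-restart loop studied in this work can be sketched generically as stochastic local search over a product of factors with random restarts. The snippet below is a simplified illustration under that assumption; it is not the paper's Stochastic Greedy Search algorithm nor its Viterbi-based initialization.

```python
# Minimal sketch of stochastic local search with random restarts for an
# MPE-style problem: maximize the product of (strictly positive) factors over
# binary variables. Generic illustration of the initialization/restart loop.
import math
import random

def log_score(factors, assignment):
    """Sum of log-factor values for a full assignment (log joint probability)."""
    return sum(math.log(f(assignment)) for f in factors)

def local_search_with_restarts(factors, variables, restarts=10, flips=200,
                               p_noise=0.1, seed=0):
    rng = random.Random(seed)
    best, best_score = None, -math.inf
    for _ in range(restarts):
        # Initialization: uniform at random here; the paper studies smarter choices.
        assignment = {v: rng.random() < 0.5 for v in variables}
        current = log_score(factors, assignment)
        for _ in range(flips):
            var = rng.choice(variables)            # stochastic choice of variable
            assignment[var] = not assignment[var]  # propose a flip
            proposed = log_score(factors, assignment)
            if proposed >= current or rng.random() < p_noise:
                current = proposed                 # keep improving (or occasional noisy) flips
            else:
                assignment[var] = not assignment[var]  # undo worsening flips
        if current > best_score:
            best, best_score = dict(assignment), current
    return best, best_score

# Toy "network": two binary variables A, B with P(A) and P(B | A) as factors.
factors = [
    lambda x: 0.7 if x["A"] else 0.3,
    lambda x: (0.9 if x["B"] else 0.1) if x["A"] else (0.2 if x["B"] else 0.8),
]
print(local_search_with_restarts(factors, ["A", "B"]))
```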