564 research outputs found

    Neural Lyapunov Control

    We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with a provable guarantee of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when the falsifier can find no counterexample, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides end-to-end correctness guarantees, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We present experiments showing how the new methods obtain high-quality solutions for challenging control problems. (Comment: NeurIPS 2019.)
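
    A minimal sketch of the learner/falsifier loop, assuming hypothetical one-dimensional dynamics and margins. The paper's falsifier is an SMT-based verifier and the controller is learned jointly with the Lyapunov function; here plain random sampling stands in for the falsifier and only V is learned for fixed dynamics, so this loop illustrates the structure but carries no formal guarantee.

    ```python
    import torch
    import torch.nn as nn

    def f(x):                                # hypothetical dynamics x' = f(x),
        return -x + 0.1 * x ** 3             # locally stable at the origin

    V = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(V.parameters(), lr=1e-2)

    def violations(x):
        """Positive wherever V(x) > 0 or dV/dt < 0 fails (small margins)."""
        x = x.clone().requires_grad_(True)
        v = V(x)
        dv_dx, = torch.autograd.grad(v.sum(), x, create_graph=True)
        vdot = dv_dx * f(x)                  # dV/dt along system trajectories
        return torch.relu(1e-3 - v) + torch.relu(vdot + 1e-3)

    samples = torch.empty(0, 1)
    for it in range(500):
        # Falsifier: look for counterexample states; conditions are checked
        # outside a small ball around the equilibrium, as is standard.
        cand = torch.empty(512, 1).uniform_(-2.0, 2.0)
        cand = cand[cand.abs().squeeze(1) > 0.1]
        viol = violations(cand).detach().squeeze(1)
        if viol.max() <= 0:                  # falsifier found nothing: stop
            print(f"no counterexample found after {it} rounds")
            break
        samples = torch.cat([samples, cand[viol > 0]])
        # Learner: push down violations on all counterexamples seen so far.
        opt.zero_grad()
        violations(samples).mean().backward()
        opt.step()
    ```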

    Decentralized Constraint Satisfaction

    We show that several important resource allocation problems in wireless networks fit within the common framework of Constraint Satisfaction Problems (CSPs). Inspired by the requirements of these applications, where variables are located at distinct network devices that may not be able to communicate but may interfere, we define natural criteria that a CSP solver must possess in order to be practical. We term such algorithms decentralized CSP solvers. The best known CSP solvers were designed for centralized problems and do not meet these criteria. We introduce a stochastic decentralized CSP solver and prove that, if a solution exists, it finds one in almost surely finite time; we also show that it has many practically desirable properties. We benchmark the algorithm's performance on a well-studied class of CSPs, random k-SAT, illustrating that the time it takes to find a satisfying assignment is competitive with stochastic centralized solvers on problems with on the order of a thousand variables, despite its decentralized nature. We demonstrate the solver's practical utility for the problems that motivated its introduction by using it to find a non-interfering channel allocation for a network built from data from downtown Manhattan.
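
    A simplified sketch of a stochastic decentralized solver in this spirit (not the authors' exact algorithm): each variable observes only whether its own constraints are violated and resamples when they are, with no inter-device communication. The 3-colouring instance below is a hypothetical illustration.

    ```python
    import random

    def decentralized_solve(n_vars, n_vals, violated, rounds=10000):
        """violated(i, x) -> True if some constraint involving variable i fails
        under assignment x; this is the only feedback each device observes."""
        x = [random.randrange(n_vals) for _ in range(n_vars)]
        for _ in range(rounds):
            unsat = [i for i in range(n_vars) if violated(i, x)]
            if not unsat:
                return x                       # every local check passes: solved
            for i in unsat:                    # satisfied devices keep their
                x[i] = random.randrange(n_vals)  # value; unsatisfied resample
        return None                            # no solution found within budget

    # Example: 3-colouring a triangle with a pendant vertex, decided locally.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    viol = lambda i, x: any(x[a] == x[b] for a, b in edges if i in (a, b))
    print(decentralized_solve(4, 3, viol))
    ```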

    The Configurable SAT Solver Challenge (CSSC)

    It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail and reports the results obtained in its two instantiations so far, CSSC 2013 and CSSC 2014.
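
    A minimal sketch of the scoring protocol, with plain random search standing in for the actual configurators used in the CSSC (such as SMAC and ParamILS), and a hypothetical run_solver callback returning capped runtimes.

    ```python
    import random

    def configure(run_solver, param_space, train_instances, budget=100):
        """Return the best configuration found within the evaluation budget.
        run_solver(config, instance) -> measured runtime (hypothetical)."""
        best_cfg, best_cost = None, float("inf")
        for _ in range(budget):
            cfg = {p: random.choice(vals) for p, vals in param_space.items()}
            cost = sum(run_solver(cfg, inst) for inst in train_instances)
            if cost < best_cost:
                best_cfg, best_cost = cfg, cost
        return best_cfg

    def cssc_style_score(run_solver, cfg, test_instances):
        # Score the solver by post-configuration performance on unseen instances.
        return sum(run_solver(cfg, i) for i in test_instances) / len(test_instances)
    ```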

    Quiet Planting in the Locked Constraint Satisfaction Problems

    We study the planted ensemble of locked constraint satisfaction problems and describe the connection between the random and planted ensembles. We combine the cavity method with arguments from reconstruction on trees and with first- and second-moment considerations; in particular, the connection with reconstruction on trees turns out to be crucial. Our main result is the location of the hard region in the planted ensemble. In a part of that hard region, instances have with high probability a single satisfying assignment. (Comment: 21 pages, revised version.)
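
    A minimal sketch of the planting construction, using random k-SAT for concreteness (the paper studies locked occupation problems): fix a hidden assignment first, then keep only constraints it satisfies. Planting is "quiet" when the resulting ensemble is statistically indistinguishable from the purely random one; the naive rejection step below does not by itself guarantee that property.

    ```python
    import random

    def planted_ksat(n, m, k=3):
        planted = [random.random() < 0.5 for _ in range(n)]  # hidden solution
        clauses = []
        while len(clauses) < m:
            vars_ = random.sample(range(n), k)
            signs = [random.random() < 0.5 for _ in range(k)]
            # A literal (v, s) is true iff planted[v] == s; keep the clause
            # only if the planted assignment satisfies at least one literal.
            if any(planted[v] == s for v, s in zip(vars_, signs)):
                clauses.append(list(zip(vars_, signs)))
        return planted, clauses

    planted, clauses = planted_ksat(n=100, m=420)
    # By construction, the planted assignment satisfies every clause.
    assert all(any(planted[v] == s for v, s in c) for c in clauses)
    ```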

    Bridging Elementary Landscapes and a Geometric Theory of Evolutionary Algorithms: First Steps

    Paper presented at the Fifteenth International Conference on Parallel Problem Solving from Nature (PPSN XV), Coimbra, Portugal, 8-12 September. Based on a geometric theory of evolutionary algorithms, it was shown that all evolutionary algorithms equipped with a geometric crossover and no mutation operator perform the same kind of convex search across representations, and that they are well matched with generalised forms of concave fitness landscapes, for which they provably find the optimum in polynomial time. Analysing the landscape structure is essential to understanding the relationship between problems and evolutionary algorithms. This paper continues such investigations by considering the following challenge: develop an analytical method to recognise that the fitness landscape for a given problem provably belongs to a class of concave fitness landscapes. Elementary landscapes theory provides analytic, algebraic means to study landscape structure. This work begins linking the two theories to better understand how such a method could be devised using elementary landscapes. Examples on the well-known One Max, Leading Ones, Not-All-Equal Satisfiability and Weight Partitioning problems illustrate the fundamental concepts supporting this approach.
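
    As a small illustration of the algebraic handle that elementary landscapes provide, the sketch below numerically checks Grover's wave equation for One Max under the single-bit-flip neighbourhood: the average neighbour fitness is an affine function of the current fitness, avg_N f(x) = f(x) + (k/d)(f_bar - f(x)), with d = n neighbours, f_bar = n/2 and k = 2 in this case.

    ```python
    import random

    n = 20
    x = [random.randint(0, 1) for _ in range(n)]

    def f(bits):
        return sum(bits)                  # One Max fitness: number of ones

    # All single-bit-flip neighbours of x.
    neighbours = [x[:i] + [1 - x[i]] + x[i + 1:] for i in range(n)]
    avg_nbr = sum(f(y) for y in neighbours) / n
    predicted = f(x) + (2 / n) * (n / 2 - f(x))   # Grover's wave equation
    assert abs(avg_nbr - predicted) < 1e-9
    print(f"f(x) = {f(x)}, average neighbour fitness = {avg_nbr:.4f}")
    ```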

    Efficient Benchmarking of Algorithm Configuration Procedures via Model-Based Surrogates

    The optimization of algorithm (hyper-)parameters is crucial for achieving peak performance across a wide range of domains, from deep neural networks to solvers for hard combinatorial problems. The resulting algorithm configuration (AC) problem has attracted much attention from the machine learning community. However, the proper evaluation of new AC procedures is hindered by two key hurdles. First, AC benchmarks are hard to set up. Second, and even more significantly, they are computationally expensive: a single run of an AC procedure involves many costly runs of the target algorithm whose performance is to be optimized in a given AC benchmark scenario. One common workaround is to optimize cheap-to-evaluate artificial benchmark functions (e.g., Branin) instead of actual algorithms; however, these have different properties than realistic AC problems. Here, we propose an alternative benchmarking approach that is similarly cheap to evaluate but much closer to the original AC problem: replacing expensive benchmarks by surrogate benchmarks constructed from AC benchmarks. These surrogate benchmarks approximate the response surface corresponding to true target algorithm performance using a regression model, and the original and surrogate benchmarks share the same (hyper-)parameter space. In our experiments, we construct and evaluate surrogate benchmarks for hyperparameter optimization as well as for AC problems that involve performance optimization of solvers for hard combinatorial problems, drawing training data from the runs of existing AC procedures. We show that our surrogate benchmarks capture important overall characteristics of the AC scenarios from which they were derived, such as high- and low-performing regions, while being much easier to use and orders of magnitude cheaper to evaluate.
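
    A minimal sketch of the surrogate-benchmark construction on synthetic data: fit a regression model (a random forest here, as one plausible choice) to logged (configuration, performance) pairs, then expose the model's prediction as a cheap stand-in objective. All names, data shapes and values below are hypothetical; the article trains on logged runs of real target algorithms.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_logged = rng.uniform(size=(5000, 4))           # configurations tried by
    y_logged = np.log(                               # earlier configurator runs,
        (X_logged[:, 0] - 0.3) ** 2                  # with synthetic "runtimes"
        + X_logged[:, 1] + 0.1                       # (measured ones in reality)
    )

    # The surrogate: a regression model of the response surface, defined over
    # the same (hyper-)parameter space as the original benchmark.
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_logged, y_logged)

    def surrogate_benchmark(config):
        """Cheap stand-in for actually running the target algorithm."""
        return model.predict(np.asarray(config).reshape(1, -1))[0]

    # An AC procedure can now be evaluated against surrogate_benchmark at a
    # tiny fraction of the cost of the original scenario.
    print(surrogate_benchmark([0.3, 0.0, 0.5, 0.5]))
    ```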