10 research outputs found

    Clustering of solutions in hard satisfiability problems

    We study the structure of the solution space and the behavior of local search methods on random 3-SAT problems close to the SAT/UNSAT transition. Using the overlap measure of similarity between different solutions found on the same problem instance, we show that the solution space shrinks as the clause-to-variable ratio alpha increases. We consider chains of satisfiability problems, where clauses are added sequentially. In each such chain, the overlap distribution is first smooth, but then develops a tiered structure, indicating that the solutions are found in well-separated clusters. On chains of not-too-large instances, all solutions are eventually observed to lie in only one small cluster before vanishing. This condensation transition point is estimated to be alpha_c = 4.26. The transition approximately obeys finite-size scaling, with an apparent critical exponent of about 1.7. We compare the solutions found by a local heuristic, ASAT, and by the Survey Propagation algorithm up to alpha_c.
    Comment: 8 pages, 9 figures
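
    The overlap used above is the standard spin-glass similarity measure. As a minimal sketch (not the paper's code), with assignments encoded as +/-1 spins, the overlap q = (1/N) * sum_i s_i s'_i is 1 for identical solutions and near 0 for unrelated ones:

```python
def overlap(s1, s2):
    """Overlap between two +/-1 spin assignments of equal length N."""
    assert len(s1) == len(s2)
    return sum(a * b for a, b in zip(s1, s2)) / len(s1)

# Example: two solutions agreeing on 3 of 4 variables -> q = 0.5
print(overlap([+1, -1, +1, +1], [+1, -1, +1, -1]))  # 0.5
```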

    Behavior of heuristics and state space structure near SAT/UNSAT transition

    We study the behavior of ASAT, a heuristic for solving satisfiability problems by stochastic local search, near the SAT/UNSAT transition. The heuristic is focused, i.e. only variables in unsatisfied clauses are updated in each step, and is similar to, but significantly simpler than, WalkSAT or Focused Metropolis Search. We show that ASAT solves random 3-SAT instances as large as one million variables in linear time, on average, up to 4.21 clauses per variable. For K higher than 3, ASAT appears to solve instances at the "FRSB threshold" in linear time, up to K = 7.
    Comment: 12 pages, 6 figures; a longer version is available as the MSc thesis of the first author at http://biophys.physics.kth.se/docs/ardelius_thesis.pd
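
    As a hedged illustration of a focused local-search step in the spirit of ASAT (the acceptance rule below is paraphrased from the published description, not taken from the authors' code; p is an illustrative noise parameter): only variables in unsatisfied clauses are candidates, downhill and sideways flips are always kept, and uphill flips are kept with probability p.

```python
import random

def n_unsat(clauses, assign):
    """Number of clauses with no satisfied literal."""
    return sum(not any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)

def asat_step(clauses, assign, p=0.2):
    """One focused step; returns True once all clauses are satisfied."""
    unsat = [c for c in clauses if not any(assign[abs(l)] == (l > 0) for l in c)]
    if not unsat:
        return True
    var = abs(random.choice(random.choice(unsat)))  # random var in a random unsat clause
    before = n_unsat(clauses, assign)
    assign[var] = not assign[var]                   # tentative flip
    if n_unsat(clauses, assign) > before and random.random() >= p:
        assign[var] = not assign[var]               # revert uphill move w.p. 1 - p
    return False

# Toy usage: clauses are lists of signed variable indices.
clauses = [[1, -2], [-1, 2], [2, 3]]
assign = {1: True, 2: False, 3: False}
while not asat_step(clauses, assign):
    pass
print(assign)
```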

    On complexity of optimized crossover for binary representations

    We consider the computational complexity of producing the best possible offspring in a crossover, given the solutions of the two parents. The crossover operators are studied on the class of Boolean linear programming problems, where the Boolean vector of variables is used as the solution representation. By means of efficient reductions between optimized gene-transmitting crossover (OGTC) problems, we show the polynomial solvability of the OGTC for the maximum weight set packing problem, the minimum weight set partition problem, and one version of the simple plant location problem. We study a connection between the OGTC for the linear Boolean programming problem and the maximum weight independent set problem on 2-colorable hypergraphs, and prove the NP-hardness of several special cases of the OGTC problem in Boolean linear programming.
    Comment: Dagstuhl Seminar 06061 "Theory of Evolutionary Algorithms", 2006
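
    As an illustration of the gene-transmitting constraint, here is a minimal brute-force sketch (invented for this note, not from the paper): the offspring must inherit each coordinate from one of the parents, so only positions where the parents differ are free, and enumerating them costs 2^d. The paper's results concern exactly when this optimization can instead be done in polynomial time, and when it is NP-hard.

```python
from itertools import product

def optimized_crossover(p1, p2, objective, feasible=lambda x: True):
    """Best feasible child whose every gene comes from p1 or p2 (brute force)."""
    free = [i for i in range(len(p1)) if p1[i] != p2[i]]
    best, best_val = None, float("-inf")
    for bits in product([0, 1], repeat=len(free)):  # 2^d candidates
        child = list(p1)
        for i, b in zip(free, bits):
            child[i] = p2[i] if b else p1[i]
        if feasible(child):
            v = objective(child)
            if v > best_val:
                best, best_val = child, v
    return best

# Toy packing example: maximize weight subject to at most 2 ones.
w = [3, 1, 4, 2]
obj = lambda x: sum(wi for wi, xi in zip(w, x) if xi)
feas = lambda x: sum(x) <= 2
print(optimized_crossover([1, 0, 0, 1], [0, 1, 1, 0], obj, feas))  # [1, 0, 1, 0]
```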

    Reductions for the Stable Set Problem

    One approach to finding a maximum stable set (MSS) in a graph is to reduce the size of the problem by transforming it into an equivalent problem on a smaller graph. This paper introduces several new reductions for the MSS problem, extends several well-known reductions to the maximum weight stable set (MWSS) problem, demonstrates how reductions for the generalized stable set problem can be used in conjunction with probing to produce powerful new reductions for both the MSS and MWSS problems, and shows how hypergraphs can be used to expand the capabilities of clique projections. The effectiveness of these new reduction techniques is illustrated on the DIMACS benchmark graphs, planar graphs, and a set of challenging MSS problems arising from Steiner triple systems.
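
    For concreteness, here is a small sketch of two folklore reductions of this kind (isolated and pendant vertices; these are standard textbook reductions, not the paper's new ones): an isolated vertex always belongs to some maximum stable set, and a degree-1 vertex can always be taken in preference to its neighbor.

```python
def reduce_graph(adj):
    """Repeatedly apply isolated- and pendant-vertex reductions.

    adj: dict mapping vertex -> set of neighbors (undirected graph).
    Returns (forced, residual): vertices forced into a maximum stable
    set, and the reduced graph that still needs to be solved.
    """
    adj = {v: set(ns) for v, ns in adj.items()}
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue
            if len(adj[v]) <= 1:                  # isolated or pendant vertex
                forced.add(v)
                for u in list(adj[v]) + [v]:      # delete its neighbor (if any), then v
                    for w in adj.pop(u, set()):
                        adj[w].discard(u)
                changed = True
    return forced, adj

# Example: the path a-b-c-d reduces completely; {a, c} is a maximum stable set.
print(reduce_graph({"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}))
```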

    Using deep learning to construct stochastic local search SAT solvers with performance bounds

    The Boolean satisfiability problem (SAT) is the prototypical NP-complete problem and is of great practical relevance. One important class of solvers for this problem is stochastic local search (SLS) algorithms, which iteratively and randomly update a candidate assignment. Recent breakthrough results in theoretical computer science have established sufficient conditions under which SLS solvers are guaranteed to efficiently solve a SAT instance, provided they have access to suitable "oracles" that provide samples from an instance-specific distribution, exploiting the instance's local structure. Motivated by these results and by the well-established ability of neural networks to learn common structure in large datasets, in this work we train oracles using graph neural networks (GNNs) and evaluate them with two SLS solvers on random SAT instances of varying difficulty. We find that access to GNN-based oracles significantly boosts the performance of both solvers, allowing them, on average, to solve 17% more difficult instances (as measured by the ratio between clauses and variables) and to do so in 35% fewer steps, with improvements in the median number of steps of up to a factor of 8. As such, this work bridges formal results from theoretical computer science and practically motivated research on deep learning for constraint satisfaction problems, and establishes the promise of purpose-trained SAT solvers with performance guarantees.
    Comment: 15 pages, 9 figures, code available at https://github.com/porscheofficial/sls_sat_solving_with_deep_learnin
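
    A hedged sketch of how such an oracle can bias an SLS step: instead of flipping a uniformly random variable of an unsatisfied clause, sample according to model-supplied per-variable probabilities. The oracle_probs dictionary below is a hypothetical stand-in for the paper's GNN output, not its actual interface.

```python
import random

def oracle_guided_flip(unsat_clause, assign, oracle_probs):
    """Flip one variable of an unsatisfied clause, sampled by oracle weight."""
    vars_ = [abs(l) for l in unsat_clause]
    weights = [oracle_probs.get(v, 1.0) for v in vars_]  # fall back to uniform
    v = random.choices(vars_, weights=weights, k=1)[0]
    assign[v] = not assign[v]
    return v

# Example: the (hypothetical) oracle strongly prefers flipping variable 3.
assign = {1: True, 2: False, 3: False}
flipped = oracle_guided_flip([-1, 2, 3], assign, {1: 0.1, 2: 0.1, 3: 0.8})
print(flipped, assign)
```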

    Compiler architecture using a portable intermediate language

    The back end of a compiler performs machine-dependent tasks and low-level optimisations that are laborious to implement and difficult to debug. In addition, in languages that require run-time services such as garbage collection, the back end must interface with the run-time system to provide those services. The net result is that building a compiler back end entails a high implementation cost. In this dissertation I describe reusable code generation infrastructure that enables the construction of a complete programming language implementation (compiler and run-time system) with reduced effort. The infrastructure consists of a portable intermediate language, a compiler for this language, and a low-level run-time system. I provide an implementation of this system and show that it can support a variety of source programming languages, reduces the overall effort required to implement a programming language, can capture and retain information necessary to support run-time services and optimisations, and produces efficient code.
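
    As a toy illustration of this architecture (all names here are invented for the sketch, not the dissertation's actual intermediate language), several front ends could target one small portable instruction set, with a single shared lowering that calls into a common run-time system for services such as garbage-collected allocation:

```python
from dataclasses import dataclass

@dataclass
class Alloc:          # request heap memory from the shared run-time (GC'd)
    dest: str
    size: int

@dataclass
class Add:            # machine-independent arithmetic
    dest: str
    lhs: str
    rhs: str

def lower_to_c(instrs):
    """One shared back end: every source language reuses this lowering."""
    lines = []
    for i in instrs:
        if isinstance(i, Alloc):
            lines.append(f"void *{i.dest} = rt_gc_alloc({i.size});")  # run-time hook
        elif isinstance(i, Add):
            lines.append(f"long {i.dest} = {i.lhs} + {i.rhs};")
    return "\n".join(lines)

print(lower_to_c([Alloc("cell", 16), Add("t0", "x", "y")]))
```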

    Hybrid eager and lazy evaluation for efficient compilation of Haskell

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 208-220). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
    The advantage of a non-strict, purely functional language such as Haskell lies in its clean equational semantics. However, lazy implementations of Haskell fall short: they cannot express tail recursion gracefully without annotation. We describe resource-bounded hybrid evaluation, a mixture of strict and lazy evaluation, and its realization in Eager Haskell. From the programmer's perspective, Eager Haskell is simply another implementation of Haskell with the same clean equational semantics. Iteration can be expressed using tail recursion, without the need to resort to program annotations. Under hybrid evaluation, computations are ordinarily executed in program order, just as in a strict functional language. When particular stack, heap, or time bounds are exceeded, suspensions are generated for all outstanding computations. These suspensions are re-started in a demand-driven fashion from the root. The Eager Haskell compiler translates λC, the compiler's intermediate representation, to efficient C code. We use an equational semantics for λC to develop simple correctness proofs for program transformations, and connect actions in the run-time system to steps in the hybrid evaluation strategy. The focus of compilation is efficiency in the common case of straight-line execution; the handling of non-strictness and suspension is left to the run-time system. Several additional contributions have resulted from the implementation of hybrid evaluation. Eager Haskell is the first eager compiler to use a call stack. Our generational garbage collector uses this stack as an additional predictor of object lifetime. Objects above a stack watermark are assumed to be likely to die; we avoid promoting them. Those below are likely to remain untouched and are therefore good candidates for promotion. To avoid eagerly evaluating error checks, they are compiled into special bottom thunks, which are treated specially by the run-time system. The compiler identifies error-handling code using a mixture of strictness and type information. This information is also used to avoid inlining error handlers, and to enable aggressive program transformation in the presence of error handling.
    by Jan-Willem Maessen. Ph.D.
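
    A hedged toy model of the resource-bounded strategy described above (re-imagined in Python purely for illustration; Eager Haskell itself compiles to C): work runs eagerly in program order until a fuel bound is exceeded, after which outstanding computations are captured as suspensions and forced later on demand.

```python
class Suspension:
    """An outstanding computation, forced later in demand-driven fashion."""
    def __init__(self, fn):
        self.fn, self.value, self.forced = fn, None, False
    def force(self):
        if not self.forced:                  # memoize on first demand
            self.value, self.forced = self.fn(), True
        return self.value

def hybrid(fn):
    """Run fn eagerly if fuel remains; otherwise suspend it."""
    global FUEL
    if FUEL > 0:
        FUEL -= 1
        return fn()                          # eager: program order, like a strict language
    return Suspension(fn)                    # bound exceeded: suspend outstanding work

FUEL = 1                                     # tiny budget so the second call suspends
result = hybrid(lambda: 2 + 2)               # runs eagerly -> 4
maybe = hybrid(lambda: sum(range(10)))       # budget exhausted -> Suspension
print(result, maybe.force())                 # forced on demand -> 4 45
```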