
    The Language of Search

    This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of circuits that are generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely-defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compilation of knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages.
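    The construction described above is straightforward to prototype. The following is a minimal sketch, not the paper's implementation, of how the trace of an exhaustive DPLL run over a fixed variable order can be recorded as a decision-node circuit (here an OBDD-like nested structure); the CNF encoding and the names `condition` and `exhaustive_dpll` are illustrative assumptions.

```python
# A sketch of reading an exhaustive DPLL trace as a decision circuit.
# CNF is a list of clauses; a clause is a list of non-zero DIMACS-style ints.

def condition(cnf, lit):
    """Simplify the CNF under the assumption that `lit` is true."""
    out = []
    for clause in cnf:
        if lit in clause:
            continue                          # clause satisfied: drop it
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None                       # empty clause: contradiction
        out.append(reduced)
    return out

def exhaustive_dpll(cnf, order):
    """Exhaustive DPLL over a fixed variable order.

    The returned trace is a nested ('ite', var, high, low) structure with
    terminals True/False, i.e. an OBDD-like circuit for this order.
    """
    if cnf is None:
        return False                          # this branch ended in a conflict
    if not cnf:
        return True                           # every clause is satisfied
    var, rest = order[0], order[1:]
    high = exhaustive_dpll(condition(cnf, var), rest)
    low = exhaustive_dpll(condition(cnf, -var), rest)
    if high == low:                           # redundant test: merge the branches
        return high
    return ('ite', var, high, low)

# Example: (x1 or x2) and (not x1 or x3)
print(exhaustive_dpll([[1, 2], [-1, 3]], order=[1, 2, 3]))
```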

    Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report

    This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators. Comment: Proceedings of the Standard Model Working Group of the 2015 Les Houches Workshop, Physics at TeV Colliders, Les Houches 1-19 June 2015. 227 pages.

    An implementation of the DPLL algorithm

    The satisfiability problem (or SAT for short) is a central problem in several fields of computer science, including theoretical computer science, artificial intelligence, hardware design, and formal verification. Because of its inherent difficulty and widespread applications, the problem has been intensely studied by mathematicians and computer scientists for the past few decades. For more than forty years, the Davis-Putnam-Logemann-Loveland (DPLL) backtrack-search algorithm has been immensely popular as a complete (it finds a solution if one exists and otherwise correctly reports that no solution exists) and efficient procedure for solving the satisfiability problem. We have implemented an efficient variant of the DPLL algorithm. In this thesis, we discuss the details of our implementation of the DPLL algorithm as well as a mathematical application of our solver. We propose an improved variant of the DPLL algorithm and design an efficient data structure for it, including a technique that makes unit propagation faster than in known SAT solvers and maintains the stack of changes efficiently. Our implementation performs well on most instances of the DIMACS benchmarks and outperforms other SAT solvers on a certain class of instances. The solver is implemented in the C programming language, and we discuss almost every detail of the implementation in the thesis. An interesting mathematical application of our solver is finding van der Waerden numbers, which are known to be very difficult to compute. Our solver performs best on the class of instances corresponding to van der Waerden numbers, and we have computed thirty of these numbers, previously unknown, using our solver.
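    For readers unfamiliar with the baseline algorithm, the sketch below gives a minimal DPLL with unit propagation in Python; it is a generic textbook-style illustration and does not reflect the thesis's C implementation, its data structures, or its faster unit-propagation scheme.

```python
# A generic DPLL with unit propagation (illustrative only; not the thesis's
# C solver). Clauses are lists of non-zero DIMACS-style integers; a solution
# is returned as a set of true literals, or None if the formula is unsatisfiable.

def unit_propagate(clauses, assignment):
    """Assign literals forced by unit clauses; return simplified clauses, or None on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                        # clause already satisfied
            unassigned = [l for l in clause if -l not in assignment]
            if not unassigned:
                return None                     # every literal is false: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])   # forced (unit) literal
                changed = True
    simplified = []
    for clause in clauses:
        if not any(l in assignment for l in clause):
            simplified.append([l for l in clause if -l not in assignment])
    return simplified

def dpll(clauses, assignment=None):
    assignment = set() if assignment is None else set(assignment)
    clauses = unit_propagate(clauses, assignment)
    if clauses is None:
        return None                             # conflict: backtrack
    if not clauses:
        return assignment                       # every clause satisfied
    lit = clauses[0][0]                         # naive branching heuristic
    for choice in (lit, -lit):
        result = dpll(clauses, assignment | {choice})
        if result is not None:
            return result
    return None

# Example: (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([[1, 2], [-1, 2], [-2, 3]]))         # prints a set of true literals, e.g. {1, 2, 3}
```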

    Data Mining-Based Decomposition for Solving the MAXSAT Problem: Toward a New Approach

    This article explores advances in the data mining arena to solve the fundamental MAXSAT problem. In the proposed approach, the MAXSAT instance is first decomposed and clustered by using data mining decomposition techniques; every cluster resulting from the decomposition is then solved separately to construct a partial solution. All partial solutions are merged into a global one, while managing possible conflicting variables due to the separate resolutions. The proposed approach has been numerically evaluated on DIMACS instances and some hard Uniform-Random-3-SAT instances, and compared to state-of-the-art decomposition-based algorithms. The results show that the proposed approach considerably improves the success rate, with a competitive computation time very close to that of the approaches it is compared against.
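    The decompose / solve / merge pipeline can be illustrated with a small sketch. In the code below the clustering step is a trivial variable-based grouping standing in for the article's data-mining decomposition, and each cluster is solved by brute force; both choices are placeholders for illustration only, not the proposed method.

```python
# A hedged sketch of the decompose / solve / merge idea: cluster the clauses,
# solve each cluster for a partial assignment, then merge partial solutions
# while resolving conflicting variables (here by majority vote).

from itertools import product

def cluster_by_variables(clauses, n_clusters):
    """Toy decomposition: bucket clauses by their smallest variable index."""
    clusters = [[] for _ in range(n_clusters)]
    for clause in clauses:
        clusters[min(abs(l) for l in clause) % n_clusters].append(clause)
    return [c for c in clusters if c]

def solve_cluster(clauses):
    """Brute-force MAXSAT on one (small) cluster: best partial assignment."""
    variables = sorted({abs(l) for c in clauses for l in c})
    best, best_count = {}, -1
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        count = sum(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
        if count > best_count:
            best, best_count = assignment, count
    return best

def merge(partials):
    """Merge partial solutions; conflicting variables are set by majority vote."""
    votes = {}
    for partial in partials:
        for var, val in partial.items():
            votes.setdefault(var, []).append(val)
    return {var: sum(vs) >= len(vs) / 2 for var, vs in votes.items()}

def satisfied(clauses, assignment):
    return sum(any(assignment.get(abs(l), False) == (l > 0) for l in c) for c in clauses)

clauses = [[1, 2], [-1, 3], [-2, -3], [2, 3], [-1, -2]]
partials = [solve_cluster(c) for c in cluster_by_variables(clauses, 2)]
model = merge(partials)
print(model, satisfied(clauses, model), "of", len(clauses), "clauses satisfied")
```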

    The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II

    The accurate determination of the Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterisation and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from the LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarise the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document aims to provide useful input to the LHC collaborations to prioritise their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations. Comment: 55 pages, 13 figures.

    Design, implementation and evaluation of a distributed CDCL framework

    The primary subject of this dissertation is practically solving instances of the Boolean satisfiability problem (SAT) that arise from industrial applications. The invention of the conflict-driven clause-learning (CDCL) algorithm led to enormous progress in this field. CDCL has been augmented with effective pre- and inprocessing techniques that boost its effectiveness. While a considerable amount of work has been done on applying shared-memory parallelism to enhance the performance of CDCL, solving SAT on distributed architectures has been studied less thoroughly. In this work, we develop a distributed, CDCL-based framework for SAT solving. This framework consists of three main components: 1. an implementation of the CDCL algorithm that we have written from scratch, 2. a novel parallel SAT algorithm that builds upon this CDCL implementation, and 3. a collection of parallel simplification techniques for SAT instances. We call the resulting framework satUZK; our parallel solving algorithm is called the distributed divide-and-conquer (DDC) algorithm. The DDC algorithm employs a parallel lookahead procedure to dynamically partition the search space, with load balancing to ensure that all computational resources are utilized during lookahead. This procedure results in a divide-and-conquer tree that is distributed over all processors. Individual threads are routed through this tree until they arrive at unsolved leaf vertices. Upon arrival, the lookahead procedure is invoked again or the leaf vertex is solved via CDCL. Several extensions to the DDC algorithm are proposed, including clause sharing and a scheme to locally adjust the LBD score (a measure of the usefulness of clauses that participate in a CDCL search) relative to the current search-tree vertex. We evaluate the DDC algorithm empirically and benchmark it against the best distributed SAT algorithms. In this experiment, our DDC algorithm is faster than other distributed, state-of-the-art solvers and solves at least as many instances. In addition to running a parallel algorithm for SAT solving, we also consider parallel simplification. Here, we first develop a theoretical foundation that allows us to prove the correctness of parallel simplification techniques. Using this as a basis, we examine established simplification algorithms for their parallelizability. It turns out that several well-known simplification techniques can be parallelized efficiently. We provide parallel implementations of these techniques and test their effectiveness in empirical experiments. This evaluation finds several combinations of simplification techniques that can solve instances which could not be solved by the DDC algorithm alone.
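    A much-simplified, single-process sketch of the divide-and-conquer idea is given below: the search space is split into a tree of cubes (partial assignments) using a stand-in for the lookahead step, and each leaf is solved under its cube by a placeholder solver. The names and heuristics are illustrative assumptions, not satUZK's actual components.

```python
# A hedged sketch of a divide-and-conquer partition tree for SAT: internal
# vertices split on a variable, leaves are solved under the accumulated cube.
# The "lookahead" is a frequency heuristic and the leaf solver is brute force;
# both stand in for the dissertation's parallel lookahead and CDCL workers.

from dataclasses import dataclass
from itertools import product

@dataclass
class Leaf:
    cube: tuple          # literals assumed on the path to this leaf

@dataclass
class Branch:
    var: int
    pos: object          # subtree where `var` is assumed true
    neg: object          # subtree where `var` is assumed false

def pick_split_variable(clauses, cube):
    """Stand-in for lookahead: pick the most frequent unassigned variable."""
    assigned = {abs(l) for l in cube}
    counts = {}
    for clause in clauses:
        for l in clause:
            if abs(l) not in assigned:
                counts[abs(l)] = counts.get(abs(l), 0) + 1
    return max(counts, key=counts.get) if counts else None

def build_tree(clauses, cube=(), depth=2):
    """Split the search space `depth` times, producing up to 2**depth leaves."""
    var = pick_split_variable(clauses, cube) if depth > 0 else None
    if var is None:
        return Leaf(cube)
    return Branch(var,
                  build_tree(clauses, cube + (var,), depth - 1),
                  build_tree(clauses, cube + (-var,), depth - 1))

def leaves(tree):
    if isinstance(tree, Leaf):
        return [tree]
    return leaves(tree.pos) + leaves(tree.neg)

def solve_leaf(clauses, cube):
    """Placeholder for a CDCL worker: brute force under the cube's assumptions."""
    variables = sorted({abs(l) for c in clauses for l in c})
    fixed = {abs(l): l > 0 for l in cube}
    free = [v for v in variables if v not in fixed]
    for values in product([False, True], repeat=len(free)):
        model = {**fixed, **dict(zip(free, values))}
        if all(any(model[abs(l)] == (l > 0) for l in c) for c in clauses):
            return model
    return None

clauses = [[1, 2], [-1, 3], [-2, -3], [-3, 4]]
tree = build_tree(clauses)
results = [solve_leaf(clauses, leaf.cube) for leaf in leaves(tree)]
print("SAT" if any(r is not None for r in results) else "UNSAT")
```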

    Why solutions can be hard to find: a featural theory of cost for a local search algorithm on random satisfiability instances

    The local search algorithm WSat is one of the most successful algorithms for solving the archetypal NP-complete problem of satisfiability (SAT). It is notably effective at solving Random-3-SAT instances near the so-called 'satisfiability threshold', which are thought to be universally hard. However, WSat still shows a peak in search cost near the threshold and large variations in cost over different instances. Why are solutions to the threshold instances so hard to find using WSat? What features characterise threshold instances which make them difficult for WSat to solve? We make a number of significant contributions to the analysis of WSat on these high-cost random instances, using the recently introduced concept of the backbone of a SAT instance. The backbone is the set of literals which are implicates of an instance. We find that the number of solutions predicts the cost well for small-backbone instances but is much less relevant for the large-backbone instances which appear near the threshold and dominate in the overconstrained region. We undertake a detailed study of the behaviour of the algorithm during search and uncover some interesting patterns. These patterns lead us to introduce a measure of the backbone fragility of an instance, which indicates how persistent the backbone is as clauses are removed. We propose that high-cost random instances for WSat are those with large backbones which are also backbone-fragile. We suggest that the decay in cost for WSat beyond the satisfiability threshold, which has perplexed a number of researchers, is due to the decreasing backbone fragility. Our hypothesis makes three correct predictions. First, that a measure of the backbone robustness of an instance (the opposite of backbone fragility) is negatively correlated with the WSat cost when other factors are controlled for. Second, that backbone-minimal instances (which are 3-SAT instances altered so as to be more backbone-fragile) are unusually hard for WSat. Third, that the clauses most often unsatisfied during search are those whose deletion has the most effect on the backbone. Our analysis of WSat on Random-3-SAT threshold instances can be seen as a featural theory of WSat cost, predicting features of cost behaviour from structural features of SAT instances. In this thesis, we also present some initial studies which investigate whether the scope of this featural theory can be broadened to other kinds of random SAT instance. Random-2+p-SAT interpolates between the polynomial-time problem Random-2-SAT when p = 0 and Random-3-SAT when p = 1. At some value p ≈ 0.41, a dramatic change in the structural nature of instances is predicted by statistical mechanics methods, which may imply the appearance of backbone-fragile instances. We tested Novelty+, a recent variant of WSat, on Random-2+p-SAT and find some evidence that the growth of its median cost changes from polynomial to superpolynomial between p = 0.3 and p = 0.5. We also find evidence that it is the onset of backbone fragility which causes this change in cost scaling: typical instances at p = 0.5 are more backbone-fragile than their counterparts at p = 0.3. Not-All-Equal (NAE) 3-SAT is a variant of the SAT problem which is similar to it in most respects. However, for NAE 3-SAT instances no implicate literals are possible, so the backbone for NAE 3-SAT must be redefined. We show that under a redefinition of the backbone, the pattern of factors influencing WSat cost at the NAE Random-3-SAT threshold is much the same as in Random-3-SAT, including the role of backbone fragility.
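    As background for the cost analysis above, the following is a minimal WalkSAT-style sketch of the WSat local search loop; the noise parameter and break-count heuristic follow the standard description of the algorithm and may differ in detail from the exact WSat variant studied in the thesis.

```python
# A minimal WalkSAT-style local search sketch: pick an unsatisfied clause,
# then flip either a random variable from it (noise step) or the variable
# that breaks the fewest currently satisfied clauses (greedy step).

import random

def walksat(clauses, max_flips=10_000, noise=0.5, seed=0):
    rng = random.Random(seed)
    variables = sorted({abs(l) for c in clauses for l in c})
    model = {v: rng.choice([False, True]) for v in variables}

    def satisfied(clause):
        return any(model[abs(l)] == (l > 0) for l in clause)

    def break_count(var):
        """Number of clauses on `var` left unsatisfied if `var` is flipped."""
        model[var] = not model[var]
        broken = sum(1 for c in clauses if var in map(abs, c) and not satisfied(c))
        model[var] = not model[var]
        return broken

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return model                       # all clauses satisfied
        clause = rng.choice(unsat)             # focus on one unsatisfied clause
        if rng.random() < noise:
            var = abs(rng.choice(clause))      # random walk step
        else:                                  # greedy step: minimise breaks
            var = min((abs(l) for l in clause), key=break_count)
        model[var] = not model[var]
    return None                                # no solution found within the flip budget

# Example: a small satisfiable 3-SAT instance
print(walksat([[1, 2, 3], [-1, -2, 3], [1, -3, 2], [-1, 2, -3]]))
```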