
    The impact of heterogeneity and geometry on the proof complexity of random satisfiability

    Satisfiability is considered the canonical NP-complete problem and is used as a starting point for hardness reductions in theory, while in practice heuristic SAT solving algorithms can solve large-scale industrial SAT instances very efficiently. This disparity between theory and practice is believed to be a result of inherent properties of industrial SAT instances that make them tractable. Two characteristic properties seem to be prevalent in the majority of real-world SAT instances: a heterogeneous degree distribution and locality. To understand the impact of these two properties on SAT, we study the proof complexity of random k-SAT models that allow us to control heterogeneity and locality. Our findings show that heterogeneity alone does not make SAT easy, as heterogeneous random k-SAT instances have superpolynomial resolution size. This implies intractability of these instances for modern SAT solvers. In contrast, modeling locality with an underlying geometry leads to small unsatisfiable subformulas, which can be found within polynomial time.
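
    As a rough illustration of what "heterogeneous" means here, the sketch below samples a random k-SAT formula in which variables are drawn with power-law weights instead of uniformly. The exponent beta, the clause count m, and the weighting scheme are illustrative assumptions, not the exact model analysed in the paper.

    import random

    def heterogeneous_ksat(n, m, k=3, beta=2.5, seed=0):
        # Sample a random k-SAT formula whose variable frequencies follow a power law:
        # variable i is drawn with weight (i+1)**(-1/(beta-1)), so low-index variables
        # appear in many clauses (heterogeneous degree distribution).
        # Illustrative sketch only, not the paper's exact model.
        rng = random.Random(seed)
        weights = [(i + 1) ** (-1.0 / (beta - 1.0)) for i in range(n)]
        formula = []
        for _ in range(m):
            vars_in_clause = set()
            while len(vars_in_clause) < k:          # draw k distinct variables
                v = rng.choices(range(1, n + 1), weights=weights)[0]
                vars_in_clause.add(v)
            # negate each variable with probability 1/2 (DIMACS-style signed ints)
            formula.append([v if rng.random() < 0.5 else -v for v in vars_in_clause])
        return formula

    # Example: a small instance near the classical clause-to-variable ratio
    # print(heterogeneous_ksat(n=100, m=426)[:3])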

    Angles and devices for quantum approximate optimization

    A potential application of emerging Noisy Intermediate-Scale Quantum (NISQ) devices is the approximate solution of combinatorial optimization problems. This thesis investigates a gate-based algorithm for this purpose, the Quantum Approximate Optimization Algorithm (QAOA), in two major themes. First, we examine how the QAOA solves the problems it is designed for. We take a statistical view of the algorithm applied to ensembles of problems, first considering a highly symmetric version of the algorithm that uses Grover drivers. In this highly symmetric context, we find a simple dependence of the QAOA state's expected value on how values of the cost function are distributed. Furthering this theme, we demonstrate that, in general, QAOA performance depends on problem statistics with respect to a metric induced by the chosen driver Hamiltonian. We obtain a method for evaluating QAOA performance on worst-case problems, those of random costs, for differing driver choices. Second, we investigate a QAOA setting in which device control occurs only via single-qubit gates, rather than individually programmable one- and two-qubit gates. In this reduced control overhead scheme, the digital-analog scheme, the complexity of devices running QAOA circuits is decreased at the cost of errors, which are shown to be non-harmful in certain regimes. We then explore hypothetical device designs one could use for this purpose.
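
    For concreteness, the following brute-force state-vector sketch computes the expected cost of a depth-1 QAOA state for a small MaxCut instance. It uses the standard transverse-field mixer, so the Grover-driver and digital-analog variants studied in the thesis would replace the mixer step; the graph, angles, and function names are illustrative assumptions.

    import numpy as np
    from itertools import product

    def maxcut_cost(bits, edges):
        # number of cut edges for a given 0/1 assignment (the quantity to maximize)
        return sum(1 for u, v in edges if bits[u] != bits[v])

    def qaoa_p1_expectation(n, edges, gamma, beta):
        # Expected cut value of a depth-1 QAOA state |psi(gamma, beta)>, simulated by
        # brute-force state-vector evolution (fine for small n). Illustrative sketch only.
        dim = 2 ** n
        cost = np.array([maxcut_cost(bits, edges)
                         for bits in product([0, 1], repeat=n)], dtype=float)
        psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # uniform superposition |+>^n
        psi *= np.exp(-1j * gamma * cost)                      # phase separator e^{-i gamma C}
        for q in range(n):                                     # mixer e^{-i beta X_q} on each qubit
            psi = psi.reshape([2] * n)
            a = psi.take(0, axis=q)
            b = psi.take(1, axis=q)
            new = np.stack([np.cos(beta) * a - 1j * np.sin(beta) * b,
                            np.cos(beta) * b - 1j * np.sin(beta) * a], axis=q)
            psi = new.reshape(dim)
        return float(np.real(np.vdot(psi, cost * psi)))        # <psi| C |psi>

    # Example: a 4-cycle graph
    # print(qaoa_p1_expectation(4, [(0, 1), (1, 2), (2, 3), (3, 0)], gamma=0.6, beta=0.4))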

    A DPLL(T) Framework for Verifying Deep Neural Networks

    Deep Neural Networks (DNNs) have emerged as an effective approach to tackling real-world problems. However, like human-written software, automatically generated DNNs can have bugs and be attacked. This has attracted much recent interest in developing effective and scalable DNN verification techniques and tools. In this work, we introduce NeuralSAT, a new constraint-solving approach to DNN verification. The design of NeuralSAT follows the DPLL(T) algorithm used in modern SMT solving, which includes (conflict) clause learning, abstraction, and theory solving, and thus NeuralSAT can be considered an SMT framework for DNNs. Preliminary results show that the NeuralSAT prototype is competitive with the state of the art. We hope that, with proper optimization and engineering, NeuralSAT will carry the power and success of modern SAT/SMT solvers to DNN verification. NeuralSAT is available from: https://github.com/dynaroars/neuralsat-solver
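
    A bare-bones view of the DPLL(T) loop the abstract refers to is sketched below: Boolean search with unit propagation, plus a pluggable theory solver that can reject an assignment by returning a clause to learn. This is a generic, hypothetical skeleton (chronological backtracking, no watched literals), not the NeuralSAT implementation, whose theory reasoning is over the DNN being verified.

    def dpllt(clauses, nvars, theory_check):
        # Minimal DPLL(T) skeleton. Clauses are lists of signed ints; theory_check(assign)
        # returns None if the partial assignment is theory-consistent, or a blocking
        # clause (lemma) to learn otherwise. Illustrative sketch only.
        clauses = [list(c) for c in clauses]

        def unit_propagate(assign):
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    if any(assign.get(abs(l)) == (l > 0) for l in clause):
                        continue                       # clause already satisfied
                    unassigned = [l for l in clause if abs(l) not in assign]
                    if not unassigned:
                        return False                   # conflict: clause falsified
                    if len(unassigned) == 1:
                        l = unassigned[0]
                        assign[abs(l)] = l > 0         # forced (implied) literal
                        changed = True
            return True

        def search(assign):
            assign = dict(assign)
            if not unit_propagate(assign):
                return None
            lemma = theory_check(assign)
            if lemma is not None:                      # theory conflict: learn the lemma
                clauses.append(lemma)
                return None
            free = [v for v in range(1, nvars + 1) if v not in assign]
            if not free:
                return assign                          # full, theory-consistent model
            v = free[0]
            for value in (True, False):                # decide and recurse
                result = search({**assign, v: value})
                if result is not None:
                    return result
            return None

        return search({})

    # Example: pure SAT, with a trivial theory that never objects
    # model = dpllt([[1, -2], [2, 3], [-1, -3]], nvars=3, theory_check=lambda a: None)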

    Prioritized Unit Propagation and Extended Resolution Techniques for SAT Solvers

    NP-complete problems like the Boolean Satisfiability (SAT) problem are ubiquitous in computer science, mathematics, and engineering. Consequently, researchers have developed algorithms such as Conflict-Driven Clause-Learning (CDCL) SAT solvers, aimed at determining the satisfiability of Boolean formulas. As the result of decades of research, CDCL SAT solvers solve real-life SAT instances surprisingly quickly, performing well despite the fact that the SAT problem is believed to be intractable in general. While modern CDCL SAT solvers are efficient for many real-world applications, there is continual demand for ever more powerful heuristics for newer applications, which in turn provides the impetus for research in solver heuristics. In this thesis, we address this need by proposing a new heuristic for Boolean Constraint Propagation (BCP), a key component of CDCL SAT solvers, and a novel, extensible architectural design for an Extended Resolution (ER) SAT solver, a class of solvers more powerful than CDCL solvers.

    The impressive performance of CDCL SAT solvers on real-life Boolean instances is, in part, made possible by a combination of logical reasoning rules and heuristics integrated into different components of the solvers. Given that such combinations are currently the most successful paradigm in SAT solving, it is natural to ask how they can be made even more efficient. We observe that there are two approaches to improving SAT solvers: modifying individual components within the SAT solving algorithm, or changing the overall structure of the algorithm. We explore both approaches in this thesis.

    Following the first approach, we examine a critical component of CDCL: the Boolean Constraint Propagation (BCP) algorithm, which systematically finds implications of variable assignments made by the solver. In most implementations of BCP, variable values are propagated greedily -- the values of implied variables are set immediately after they are detected. This observation suggests that there could be a smarter way to perform BCP: prioritizing part of the search space rather than propagating implied variables immediately after they are encountered. In this work, we develop an algorithm that allows BCP to prioritize propagations, choose a heuristic priority ordering of the variables, and demonstrate a class of instances where our prioritized BCP algorithm, combined with this heuristic ordering, outperforms the traditional BCP algorithm.

    For the second approach, we note that solvers are fundamentally mathematical proof systems, and that CDCL produces proofs in the Resolution proof system, which is theoretically weaker than Extended Resolution (ER), a related proof system. Hence, it is natural to try integrating ER techniques into the CDCL algorithm, thus rendering it more powerful. However, it is well known that automating the ER proof system deterministically can be very challenging. Instead of proposing a single set of techniques to implement the ER proof system, we develop a programmatic framework (and an associated set of techniques) that enables one to upgrade CDCL solvers into an ER-based SAT solver. More precisely, we add three new major programmatic components: extension variable addition, extension variable substitution, and extension variable deletion. These components can be easily extended to test various ER ideas and heuristics. One of our considered heuristics is shown to be generally competitive with the baseline CDCL solver while improving upon the baseline for a specific class of cryptographic instances.
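
    To make the "prioritized propagation" idea concrete, here is a minimal, self-contained sketch in which implied literals are queued and released according to a user-supplied priority function rather than assigned the moment they are discovered. It is a toy re-scanning propagator, not the thesis implementation (which lives inside a CDCL solver with watched literals); the priority function stands in for the heuristic ordering.

    import heapq

    def prioritized_bcp(clauses, assignment, priority):
        # Unit propagation where pending implied literals are released in order of a
        # per-variable priority score (higher = propagate sooner) instead of immediately.
        # Illustrative sketch only.
        assignment = dict(assignment)
        pending = []                                    # max-heap via negated scores

        def scan_for_units():
            for clause in clauses:
                if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                    continue                            # clause already satisfied
                unassigned = [l for l in clause if abs(l) not in assignment]
                if not unassigned:
                    return False                        # conflict: clause falsified
                if len(unassigned) == 1:
                    l = unassigned[0]
                    heapq.heappush(pending, (-priority(abs(l)), l))
            return True

        while True:
            if not scan_for_units():
                return None                             # conflict under this assignment
            # drop stale queue entries whose variable was assigned in the meantime
            while pending and abs(pending[0][1]) in assignment:
                heapq.heappop(pending)
            if not pending:
                return assignment                       # propagation fixpoint reached
            _, lit = heapq.heappop(pending)
            assignment[abs(lit)] = lit > 0              # propagate highest-priority literal first

    # Example: prefer propagating low-indexed variables first
    # result = prioritized_bcp([[1, 2], [-1, 3]], {2: False}, priority=lambda v: -v)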

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.

    Monte Carlo Forest Search: UNSAT Solver Synthesis via Reinforcement Learning

    We introduce Monte Carlo Forest Search (MCFS), an offline algorithm for automatically synthesizing strong tree-search solvers for proving unsatisfiability on given distributions, leveraging ideas from the Monte Carlo Tree Search (MCTS) algorithm that led to breakthroughs in AlphaGo. The crucial difference between proving unsatisfiability and existing applications of MCTS is that policies produce trees rather than paths. Rather than finding a good path (solution) within a tree, the search problem becomes searching for a small proof tree within a forest of candidate proof trees. We introduce two key ideas to adapt to this setting. First, we estimate tree size from sampled paths, via the unbiased approximation from Knuth (1975). Second, we query a strong solver at a user-defined depth rather than learning a policy across the whole tree, in order to focus our policy search on early decisions, which offer the greatest potential for reducing tree size. We then present MCFS-SAT, an implementation of MCFS for learning branching policies for solving the Boolean satisfiability (SAT) problem, which required many modifications from AlphaGo. We matched or improved performance over a strong baseline on two well-known SAT distributions (sgen, random). Notably, we improved running time by 9% on sgen over the kcnfs solver and even further over the strongest UNSAT solver from the 2021 SAT competition.
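
    The abstract's first key idea, Knuth's (1975) path-based tree-size estimate, can be illustrated in a few lines: probe the tree along a single random root-to-leaf path and accumulate the product of branching factors. The sketch below is a generic version of that estimator; the tree interface and parameters are assumptions, not MCFS code.

    import random

    def knuth_tree_size_estimate(root, children, samples=1000, seed=0):
        # Knuth's (1975) unbiased estimator of the number of nodes in a tree: follow a
        # single random root-to-leaf path, multiplying by the branching factor at each
        # step; averaging many probes estimates the tree size. children(node) must
        # return the list of children of a node. Illustrative sketch only.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            estimate, weight, node = 1.0, 1.0, root
            while True:
                kids = children(node)
                if not kids:
                    break                     # reached a leaf
                weight *= len(kids)           # path probability shrinks by 1/len(kids)
                estimate += weight            # expected node count at this depth
                node = rng.choice(kids)
            total += estimate
        return total / samples

    # Example: complete binary tree of depth 10, encoded implicitly (node = depth);
    # the true size is 2**11 - 1 = 2047, and the estimator returns it exactly.
    # print(knuth_tree_size_estimate(0, lambda d: [d + 1, d + 1] if d < 10 else []))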

    Anytime algorithms for ROBDD symmetry detection and approximation

    Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory-efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. The state of the art in symmetry detection is Mishchenko's algorithm, which showed how to detect symmetries in ROBDDs without checking the equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations, where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries, retaining the same asymptotic complexity for each type of generalised symmetry. Both the classical and generalised symmetry detection algorithms are monolithic in the sense that they only return a meaningful answer when they are left to run to completion.

    In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, which output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that, given sufficient time, they are guaranteed to find all symmetric pairs. Theoretically these algorithms reside in O(n³+n|G|+|G|³) and O(n³+n²|G|+|G|³) respectively, where n is the number of variables, so in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation, so the resulting algorithms are very efficient.

    The thesis continues by considering another class of anytime algorithms for ROBDDs, motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also to the number of operations applied. That number is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used both to constrain the size of an ROBDD and to ensure that the number of times it is weakened is bounded by some given constant. The widening can be used to systematically approximate an ROBDD either from above (i.e. derive a weaker function) or from below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation.
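
    As background for the classical case, the following sketch spells out the textbook cofactor condition that symmetry detection rests on: f is classically symmetric in x_i and x_j exactly when its (x_i, x_j) = (0, 1) and (1, 0) cofactors agree. It works on explicit truth tables purely for illustration; Mishchenko's contribution, as described above, is that an ROBDD algorithm can establish this without enumerating cofactor pairs.

    from itertools import product

    def cofactor(f, var, value):
        # Restrict a Boolean function f (callable on 0/1 tuples) by fixing one variable.
        def g(bits):
            bits = list(bits)
            bits[var] = value
            return f(tuple(bits))
        return g

    def classically_symmetric(f, n, i, j):
        # Brute-force check of the classical symmetry condition for variables i and j:
        # f restricted to (x_i, x_j) = (0, 1) equals f restricted to (x_i, x_j) = (1, 0).
        # Truth-table version for illustration only.
        f01 = cofactor(cofactor(f, i, 0), j, 1)
        f10 = cofactor(cofactor(f, i, 1), j, 0)
        return all(f01(bits) == f10(bits) for bits in product([0, 1], repeat=n))

    # Example: majority of three inputs is symmetric in every variable pair.
    # maj3 = lambda b: (b[0] + b[1] + b[2]) >= 2
    # print(classically_symmetric(maj3, 3, 0, 1))   # True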

    Hide and Seek: Scaling Machine Learning for Combinatorial Optimization via the Probabilistic Method

    Applying deep learning to solve real-life instances of hard combinatorial problems has tremendous potential. Research in this direction has focused on the Boolean satisfiability (SAT) problem, both because of its theoretical centrality and because of its practical importance. A major roadblock, though, is that training sets are restricted to random formulas several orders of magnitude smaller than formulas of practical interest, raising serious concerns about generalization. This is because labeling random formulas of increasing size rapidly becomes intractable. By exploiting the probabilistic method in a fundamental way, we remove this roadblock entirely: we show how to generate correctly labeled random formulas of any desired size, without having to solve the underlying decision problem. Moreover, the difficulty of the classification task for the formulas produced by our generator is tunable by varying a simple scalar parameter. This opens up an entirely new level of sophistication for the machine learning methods that can be brought to bear on satisfiability. Using our generator, we train existing state-of-the-art models for the task of predicting satisfiability on formulas with 10,000 variables. We find that they do no better than random guessing. As a first indication of what can be achieved with the new generator, we present a novel classifier that performs significantly better than random guessing (99%) on the same datasets, for most difficulty levels. Crucially, unlike past approaches that learn from syntactic features of a formula, our classifier performs its learning on a short prefix of a solver's computation, an approach that we expect to be of independent interest.
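
    A deliberately simplified illustration of "correctly labeled formulas without solving" is solution planting: fix a hidden assignment and keep only clauses it satisfies, so the SAT label is guaranteed by construction. The sketch below does exactly that and nothing more; the paper's generator uses the probabilistic method to produce both satisfiable and unsatisfiable labels with tunable difficulty, which naive planting cannot do.

    import random

    def planted_sat_formula(n, m, k=3, seed=0):
        # Generate a random k-CNF that is satisfiable by construction: draw a hidden
        # assignment and keep only clauses it satisfies, yielding a correctly labeled
        # SAT instance without running a solver. Simplified illustration only; not the
        # paper's generator, which also produces UNSAT labels and controls difficulty.
        rng = random.Random(seed)
        hidden = [rng.random() < 0.5 for _ in range(n)]   # hidden satisfying assignment
        clauses = []
        while len(clauses) < m:
            vs = rng.sample(range(n), k)
            clause = [(v + 1) if rng.random() < 0.5 else -(v + 1) for v in vs]
            # keep the clause only if the hidden assignment satisfies it
            if any((lit > 0) == hidden[abs(lit) - 1] for lit in clause):
                clauses.append(clause)
        return clauses, hidden

    # Example: a large labeled-SAT instance, generated without any solving
    # clauses, witness = planted_sat_formula(n=10000, m=42000)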