10 research outputs found

    The complexity of Boolean formula minimization

    The Minimum Equivalent Expression problem is a natural optimization problem in the second level of the Polynomial-Time Hierarchy. It has long been conjectured to be Σ^P_2-complete and indeed appears as an open problem in Garey and Johnson (1979) [5]. The depth-2 variant was only shown to be Σ^P_2-complete in 1998 (Umans (1998) [13], Umans (2001) [15]), and even resolving the complexity of the depth-3 version has been mentioned as a challenging open problem. We prove that the depth-k version is Σ^P_2-complete under Turing reductions for all k ≥ 3. We also settle the complexity of the original, unbounded-depth Minimum Equivalent Expression problem by showing that it, too, is Σ^P_2-complete under Turing reductions.
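    The ∃∀ structure that places this problem in Σ^P_2 shows up directly in a brute-force sketch: guess a candidate formula of each size in turn (the ∃ part), then verify equivalence against every assignment (the ∀ part). The following Python sketch, with illustrative helper names not taken from the paper, demonstrates this for two variables; both phases are exponential, as the completeness result would lead one to expect.

        from itertools import product

        VARS = ("x", "y")

        def ev(f, env):
            """Evaluate a formula given as nested tuples, e.g. ("and", "x", "y")."""
            if isinstance(f, str):
                return env[f]
            op, *args = f
            if op == "not":
                return not ev(args[0], env)
            vals = [ev(a, env) for a in args]
            return all(vals) if op == "and" else any(vals)

        def equivalent(f, g):
            """The 'forall' phase: check f == g on every assignment."""
            return all(
                ev(f, dict(zip(VARS, bits))) == ev(g, dict(zip(VARS, bits)))
                for bits in product([False, True], repeat=len(VARS))
            )

        def formulas(n):
            """Enumerate all formulas with exactly n nodes (the 'exists' phase)."""
            if n == 1:
                yield from VARS
            if n >= 2:
                for f in formulas(n - 1):
                    yield ("not", f)
            if n >= 3:
                for i in range(1, n - 1):
                    for l in formulas(i):
                        for r in formulas(n - 1 - i):
                            yield ("and", l, r)
                            yield ("or", l, r)

        def minimize(f, max_size=8):
            """Smallest formula equivalent to f, by exhaustive search."""
            for n in range(1, max_size + 1):
                for g in formulas(n):
                    if equivalent(f, g):
                        return g
            return f

        print(minimize(("or", ("and", "x", "y"), "x")))  # absorption: prints 'x'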

    Arithmetic Expression Construction

    When can n given numbers be combined using arithmetic operators from a given subset of {+, −, ×, ÷} to obtain a given target number? We study three variations of this problem of Arithmetic Expression Construction: when the expression (1) is unconstrained; (2) has a specified pattern of parentheses and operators (and only the numbers need to be assigned to blanks); or (3) must match a specified ordering of the numbers (but the operators and parenthesization are free). For each of these variants, and many of the subsets of {+, −, ×, ÷}, we prove the problem NP-complete, sometimes in the weak sense and sometimes in the strong sense. Most of these proofs make use of a "rational function framework" which proves equivalence of these problems for values in rational functions with values in positive integers. Comment: 36 pages, 5 figures. Full version of paper accepted to the 31st International Symposium on Algorithms and Computation (ISAAC 2020).
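    The unconstrained variant (1) is easy to state as code. Here is a minimal exponential-time Python sketch (names are illustrative, not from the paper): split the multiset into two non-empty parts in every way, recursively compute the reachable values of each part, and combine them with each operator, using exact rationals to avoid floating-point error. The exponential blow-up is consistent with the NP-completeness results above.

        from fractions import Fraction

        def reachable(nums):
            """All values constructible from the multiset nums, each number used once."""
            if len(nums) == 1:
                return {nums[0]}
            out = set()
            n = len(nums)
            # split the multiset into two non-empty parts, combine recursively
            for mask in range(1, 2 ** n - 1):
                left = tuple(nums[i] for i in range(n) if mask >> i & 1)
                right = tuple(nums[i] for i in range(n) if not mask >> i & 1)
                for a in reachable(left):
                    for b in reachable(right):
                        out.update({a + b, a - b, a * b})
                        if b != 0:
                            out.add(a / b)  # exact rational division
            return out

        def aec(nums, target):
            """Unconstrained Arithmetic Expression Construction over {+, -, *, /}."""
            return Fraction(target) in reachable(tuple(Fraction(x) for x in nums))

        print(aec([4, 4, 10, 10], 24))  # True: (10 * 10 - 4) / 4 = 24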

    Explainability via Short Formulas: the Case of Propositional Logic with Implementation

    We conceptualize explainability in terms of logic and formula size, giving a number of related definitions of explainability in a very general setting. Our main interest is the so-called special explanation problem, which aims to explain the truth value of an input formula in an input model. The explanation is a formula of minimal size that (1) agrees with the input formula on the input model and (2) transmits the involved truth value to the input formula globally, i.e., on every model. As an important example case, we study propositional logic in this setting and show that the special explainability problem is complete for the second level of the polynomial hierarchy. We also provide an implementation of this problem in answer set programming and investigate its capacity in relation to explaining answers to the n-queens and dominating set problems.
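    Reading the definition straight off the abstract, a brute-force Python sketch of the special explanation problem looks as follows; all names are illustrative, and the paper's actual implementation is in answer set programming, not enumeration. A candidate ψ qualifies if it has the same truth value as the input formula φ in the input model and entails φ (or ¬φ, when φ is false in the model) globally; the smallest such ψ is the explanation.

        from itertools import product

        VARS = ("p", "q", "r")

        def ev(f, m):
            if isinstance(f, str):
                return m[f]
            op, *args = f
            if op == "not":
                return not ev(args[0], m)
            vals = [ev(a, m) for a in args]
            return all(vals) if op == "and" else any(vals)

        def models():
            for bits in product([False, True], repeat=len(VARS)):
                yield dict(zip(VARS, bits))

        def entails(f, g):
            """f |= g: every model of f is a model of g."""
            return all(ev(g, m) for m in models() if ev(f, m))

        def formulas(n):
            if n == 1:
                yield from VARS
            if n >= 2:
                for f in formulas(n - 1):
                    yield ("not", f)
            if n >= 3:
                for i in range(1, n - 1):
                    for l in formulas(i):
                        for r in formulas(n - 1 - i):
                            yield ("and", l, r)
                            yield ("or", l, r)

        def explain(phi, model, max_size=6):
            """Smallest psi agreeing with phi on `model` that transmits
            phi's truth value in `model` to every model."""
            value = ev(phi, model)
            goal = phi if value else ("not", phi)
            for n in range(1, max_size + 1):
                for psi in formulas(n):
                    if ev(psi, model) == value and entails(psi, goal):
                        return psi

        m = {"p": True, "q": True, "r": False}
        print(explain(("or", ("and", "p", "q"), "r"), m))  # ('and', 'p', 'q')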

    Optimizing Query Predicates with Disjunctions for Column Stores

    Since its inception, database research has given limited attention to optimizing predicates with disjunctions. What little past work there is has focused on optimizations for traditional row-oriented databases. A key difference in predicate evaluation between row stores and column stores is that while row stores apply predicates to one record at a time, column stores apply predicates to sets of records. Not only must the execution engine decide the order in which to apply the predicates, but it must also decide how many times each predicate should be applied and to which sets of records it should be applied. In our work, we tackle exactly this problem. We formulate, analyze, and solve the predicate evaluation problem for column stores. Our results include proofs about various properties of the problem, and in turn, these properties have allowed us to derive the first polynomial-time (i.e., O(n log n)) algorithm ShallowFish, which evaluates predicates optimally for all predicate expressions with a depth of 2 or less. We capture the exact property which makes the problem more difficult for predicate expressions of depth 3 or greater and propose an approximate algorithm DeepFish which outperforms ShallowFish in these situations. Finally, we show that both ShallowFish and DeepFish outperform the corresponding state of the art by two orders of magnitude.
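    The set-at-a-time evaluation style the abstract contrasts with row stores can be pictured with a selection vector. The Python/NumPy sketch below is not ShallowFish (the paper's algorithm cannot be reconstructed from the abstract); it only shows the simpler conjunction-only case, ordered by the classic (selectivity − 1)/cost ranking, with each predicate applied to the set of surviving row ids. Handling disjunctions, per the abstract, is exactly where the problem becomes harder.

        import numpy as np

        # hypothetical column data; selectivities and costs are assumed estimates
        rng = np.random.default_rng(0)
        col_a = rng.integers(0, 100, 1_000_000)
        col_b = rng.integers(0, 100, 1_000_000)

        predicates = [
            (lambda idx: col_a[idx] < 5, 0.05, 1.0),       # (filter, selectivity, cost)
            (lambda idx: col_b[idx] % 7 == 0, 0.14, 2.0),
        ]

        # rank-ordering for conjunctions: cheap, selective predicates first
        predicates.sort(key=lambda p: (p[1] - 1.0) / p[2])

        sel = np.arange(len(col_a))       # selection vector: surviving row ids
        for pred, _, _ in predicates:
            sel = sel[pred(sel)]          # each predicate sees only the survivors
        print(len(sel))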

    Classifying Problems into Complexity Classes

    A fundamental problem in computer science is, stated informally: given a problem, how hard is it? We measure hardness by looking at the following question: given a set A, what is the fastest algorithm to determine whether x ∈ A? We measure the speed of an algorithm by how long it takes to run on inputs of length n, as a function of n. For example, sorting a list of length n can be done in roughly n log n steps. Obtaining a fast algorithm is only half of the problem. Can you prove that there is no better algorithm? This is notoriously difficult; however, we can classify problems into complexity classes where those in the same class are roughly equally hard. In this chapter we define many complexity classes and describe natural problems that are in them. Our classes go all the way from regular languages to various shades of undecidable. We then summarize all that is known about these classes.

    Optimizing Implementations of Lightweight Building Blocks

    We study the synthesis of small functions used as building blocks in lightweight cryptographic designs in terms of hardware implementations. This phase most notably appears during the ASIC implementation of cryptographic primitives. The quality of this step directly affects the output circuit, and while general tools exist to carry out this task, most of them belong to proprietary software suites and apply heuristics to functions of any size. In this work, we focus on small functions (4- and 8-bit mappings) and look for their optimal implementations on a specific weighted instruction set which allows fine tuning of the technology. We propose a tool named LIGHTER, based on two related algorithms, that produces optimized implementations of small functions. To demonstrate the validity and usefulness of our tool, we applied it to two practical cases: first, linear permutations that define diffusion in most SPN ciphers; second, non-linear 4-bit permutations that are used in many lightweight block ciphers. For linear permutations, we exhibit several new MDS diffusion matrices lighter than the state of the art, and we also decrease the implementation cost of several already known MDS matrices. As for non-linear permutations, LIGHTER outperforms the area-optimized synthesis of the state-of-the-art academic tool ABC. Smaller circuits can also be reached when ABC and LIGHTER are used jointly.
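    The underlying search problem, finding a cheapest sequence of invertible instructions that realizes a given small mapping under a weighted gate library, can be sketched with a plain Dijkstra search. LIGHTER itself uses a more scalable meet-in-the-middle strategy, and the instruction set and weights below are hypothetical stand-ins for a technology library:

        import heapq

        N = 4  # wires of a 4-bit mapping

        def bit(x, i):
            return (x >> i) & 1

        def make_instrs():
            """Invertible bit-level instructions with assumed gate costs."""
            ops = []
            for i in range(N):
                ops.append((f"not w{i}", 0.5, lambda x, i=i: x ^ (1 << i)))
                for j in range(N):
                    if j == i:
                        continue
                    ops.append((f"w{i} ^= w{j}", 2.0,
                                lambda x, i=i, j=j: x ^ (bit(x, j) << i)))
                    for k in range(j + 1, N):
                        if k == i:
                            continue
                        ops.append((f"w{i} ^= w{j} & w{k}", 3.5,
                                    lambda x, i=i, j=j, k=k:
                                        x ^ ((bit(x, j) & bit(x, k)) << i)))
            return ops

        def synthesize(target):
            """Cheapest instruction sequence mapping the identity to `target`."""
            start, goal = tuple(range(16)), tuple(target)
            instrs = make_instrs()
            dist = {start: 0.0}
            pq = [(0.0, start, [])]
            while pq:
                d, state, prog = heapq.heappop(pq)
                if state == goal:
                    return d, prog
                if d > dist[state]:
                    continue  # stale queue entry
                for name, cost, f in instrs:
                    ns = tuple(f(v) for v in state)
                    if d + cost < dist.get(ns, float("inf")):
                        dist[ns] = d + cost
                        heapq.heappush(pq, (d + cost, ns, prog + [name]))

        # tiny test target: the permutation computed by "w1 ^= w0"
        print(synthesize([x ^ ((x & 1) << 1) for x in range(16)]))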

    Una aproximación a la optimización de algoritmos mediante el uso de minimización de funciones booleanas

    This project explores the possibility of optimizing algorithms using Boolean function minimization techniques. The starting idea is that expressing a program at a very low level makes it possible to locate and eliminate redundancy. To this end, we work with bitwise Boolean logic operations, using only the NAND function to express every other function, thanks to its functional completeness. Expressing an algorithm this way allows us, on the one hand, to measure the cost of the algorithm in NAND functions and, on the other, to parallelize it. Through minimization, a logic circuit equivalent to a fragment of sequential code with no loops or recursion can be optimized; a custom fast-minimization technique has been developed for this purpose. Techniques have also been developed for this project that allow minimization to be applied to recursive algorithms, eliminating, for example, operations repeated across different iterations of a loop. To carry out this work, a custom notation, similar to an assembly language, has been developed that makes it possible to work with logic functions and recursion. A database has been created in which the recursive functions are defined; these can represent anything from a logic gate to an algorithm such as addition. Optimization methods for these recursive functions have been implemented, along with an evaluation method through which they are executed to verify their correctness. A series of utilities has also been implemented, for example to translate between different notations. Finally, the results have been compared with the unoptimized algorithm and with the solutions other tools would offer.
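    A minimal Python sketch of the NAND-only representation described above (the data structures are illustrative, not the project's own notation): every connective is rewritten into NANDs, and hash-consing shared subterms gives both the cost measure in NAND functions and a crude form of the redundancy elimination the project targets.

        class Builder:
            def __init__(self):
                self.gates = {}  # (a, b) -> gate id, hash-consed for sharing

            def nand(self, a, b):
                key = (min(a, b), max(a, b))  # NAND is commutative
                if key not in self.gates:
                    self.gates[key] = len(self.gates)
                return self.gates[key]

            # every connective in terms of NAND (functional completeness)
            def not_(self, a):
                return self.nand(a, a)

            def and_(self, a, b):
                return self.not_(self.nand(a, b))

            def or_(self, a, b):
                return self.nand(self.not_(a), self.not_(b))

            def xor_(self, a, b):
                t = self.nand(a, b)
                return self.nand(self.nand(a, t), self.nand(b, t))

        b = Builder()
        x, y = -1, -2                 # input literals, kept outside the gate table
        s = b.xor_(x, y)              # sum bit of a half adder
        c = b.and_(x, y)              # carry bit: reuses the shared nand(x, y)
        print(len(b.gates), "NAND gates")  # 5, thanks to sharing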

    Formal Methods in Quantum Circuit Design

    The design and compilation of correct, efficient quantum circuits is integral to the future operation of quantum computers. This thesis makes contributions to the problems of optimizing and verifying quantum circuits, with an emphasis on the development of formal models for such purposes. We also present software implementations of these methods, which together form a full stack of tools for the design of optimized, formally verified quantum oracles.

    On the optimization side, we study methods for the optimization of Rz and CNOT gates in Clifford+Rz circuits. We develop a general, efficient optimization algorithm called phase folding, which computes a circuit's phase polynomial to reduce the number of Rz gates without increasing any other metric. This algorithm can further be combined with synthesis techniques for CNOT-dihedral operators to optimize circuits with respect to particular costs. We then study the optimal synthesis problem for CNOT-dihedral operators from the perspectives of Rz and CNOT gate optimization. In the case of Rz gate optimization, we show that the optimal synthesis problem is polynomial-time equivalent to minimum-distance decoding in certain Reed-Muller codes. For the CNOT optimization problem, we show that the optimal synthesis problem is at least as hard as a combinatorial problem related to Gray codes. In both cases, we develop heuristics for the optimal synthesis problem, which together with phase folding reduce T counts by 42% and CNOT counts by 22% across a suite of real-world benchmarks.

    From the perspective of formal verification, we make two contributions. The first is the development of a formal model of quantum circuits with ancillary bits based on the Feynman path integral, along with a concrete verification algorithm. The path integral model, with some syntactic sugar, further doubles as a natural specification language for quantum computations. Our experiments show some practical circuits with up to hundreds of qubits can be efficiently verified. Our second contribution is a formally verified, optimizing compiler for reversible circuits. The compiler compiles a classical, irreversible language to reversible circuits, with a formal, machine-checked proof of correctness written in the proof assistant F*. The compiler is structured as a partial evaluator, allowing verification to be carried out significantly faster than previous results.
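    The idea behind phase folding can be shown in a few lines. In a CNOT+Rz circuit, each wire always carries a parity (a mod-2 linear combination) of the inputs, and an Rz applied to a wire contributes its angle to the phase-polynomial term for that parity, so rotations on the same parity can be merged. The Python sketch below is a minimal reconstruction of that idea, not the thesis's implementation: it tracks parities as bitmasks and keeps one merged Rz per distinct parity, at the first site where that parity occurs.

        from collections import defaultdict

        def phase_fold(circuit, n_qubits):
            """Merge Rz gates applied to the same parity term of the
            phase polynomial of a CNOT+Rz circuit."""
            def parities():
                return [1 << q for q in range(n_qubits)]  # wire q starts as input q

            # pass 1: compute the phase polynomial (parity -> summed angle)
            wires, angles, first = parities(), defaultdict(float), {}
            for pos, gate in enumerate(circuit):
                if gate[0] == "cx":
                    wires[gate[2]] ^= wires[gate[1]]  # CNOT adds control parity to target
                else:                                 # ("rz", theta, qubit)
                    _, theta, q = gate
                    angles[wires[q]] += theta
                    first.setdefault(wires[q], pos)

            # pass 2: emit each Rz once, merged, at the first site of its parity
            wires, out = parities(), []
            for pos, gate in enumerate(circuit):
                if gate[0] == "cx":
                    wires[gate[2]] ^= wires[gate[1]]
                    out.append(gate)
                elif first[wires[gate[2]]] == pos:
                    out.append(("rz", angles[wires[gate[2]]], gate[2]))
            return out

        # the first and third rotations act on the same parity and fold together
        circ = [("rz", 0.3, 1), ("cx", 0, 1), ("rz", 0.5, 1),
                ("cx", 0, 1), ("rz", 0.2, 1)]
        print(phase_fold(circ, 2))  # 3 Rz gates become 2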