
    Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, sibling identification, or discovery of dysregulated pathways. In almost all of these problems, one needs to prove a hypothesis about a certain property of an object: the property is present only when the object adopts some particular admissible structure (an NP-certificate) and absent otherwise. However, none of the standard approaches can reject the hypothesis when no solution is found, since none can prove that no admissible structure exists. This article presents an algorithm introducing a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so the hypothesis can be accepted or rejected on the basis of a rigorous proof. It provides exact solutions and runs in polynomial time (i.e., efficiently), although it is parametric. The only requirement is sufficient computational power, which is controlled by the parameter $\alpha \in \mathbb{N}$. Nevertheless, it is proved here that the probability of requiring a value $\alpha > k$ to obtain a solution for a random graph decreases exponentially, $P(\alpha > k) \leq 2^{-(k+1)}$, making almost all problem instances tractable. Thorough experimental analyses were performed: the algorithm was tested on random graphs, planar graphs, and 4-regular planar graphs, and the obtained experimental results are in accordance with the theoretical expectations. Comment: Working paper
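    The parametric algorithm itself is not reproduced in the abstract. For contrast, the following minimal sketch is a standard exact backtracking solver for 3-coloring: it emits a coloring certificate when one exists and reports non-colorability only after exhausting the search tree, so unlike the parametric method above its worst case is exponential. The graph encoding and the test graphs are our own illustration.

```python
def three_color(adj):
    """adj: dict mapping each vertex to a set of neighbours.
    Returns a dict vertex -> colour in {0, 1, 2} (a certificate of
    3-colorability), or None if no 3-coloring exists."""
    vertices = list(adj)
    coloring = {}

    def backtrack(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(3):
            # A colour is admissible if no already-coloured neighbour uses it.
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if backtrack(i + 1):
                    return True
                del coloring[v]
        return False

    return coloring if backtrack(0) else None

# K4 is not 3-colorable; K4 minus one edge is.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
assert three_color(k4) is None
k4_minus_edge = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
assert three_color(k4_minus_edge) is not None
```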

    VLSI Routing for Advanced Technology

    Routing is a major step in VLSI design, the design process of complex integrated circuits (commonly known as chips). The basic task in routing is to connect predetermined locations on a chip (pins) with wires which serve as electrical connections. One main challenge in routing for advanced chip technology is the increasing complexity of design rules, which reflect manufacturing requirements. In this thesis we investigate various aspects of this challenge. First, we consider polygon decomposition problems in the context of VLSI design rules. We introduce different width notions for polygons which are important for width-dependent design rules in VLSI routing, and we present efficient algorithms for computing width-preserving decompositions of rectilinear polygons into rectangles. Such decompositions are used in routing to allow for fast design rule checking. A main contribution of this thesis is an O(n) time algorithm for computing a decomposition of a simple rectilinear polygon with n vertices into O(n) rectangles, preserving two-dimensional width. Here the two-dimensional width at a point of the polygon is defined as the edge length of a largest square that contains the point and is contained in the polygon. In order to obtain these results we establish a connection between such decompositions and Voronoi diagrams. Furthermore, we consider implications of multiple patterning and other advanced design rules for VLSI routing. The main contribution in this context is the detailed description of a routing approach which is able to manage such advanced design rules. As a main algorithmic concept we use multi-label shortest paths, where certain path properties (which model design rules) can be enforced by defining labels assigned to path vertices and allowing only certain label transitions. The described approach has been implemented in BonnRoute, a VLSI routing tool developed at the Research Institute for Discrete Mathematics, University of Bonn, in cooperation with IBM. We present experimental results confirming that a flow combining BonnRoute and an external cleanup step produces far superior results compared to an industry-standard router. In particular, our proposed flow runs more than twice as fast, reduces the via count by more than 20%, the wiring length by more than 10%, and the number of remaining design rule errors by more than 60%. These results, obtained by applying our multiple patterning approach to real-world chip instances provided by IBM, are another main contribution of this thesis. We note that IBM uses our proposed combined BonnRoute flow as the default tool for signal routing.
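    The multi-label shortest-path concept mentioned above can be made concrete with a small sketch: Dijkstra's algorithm is run on states (vertex, label), and a table of allowed label transitions prunes paths that would violate a rule. The graph, labels, and transition table below are illustrative stand-ins, not BonnRoute's actual design-rule model.

```python
import heapq

def multi_label_dijkstra(edges, transitions, source, target, start_label):
    """edges: dict u -> list of (v, cost). transitions: dict mapping
    (label, v) -> label obtained when entering v; pairs not listed are
    forbidden, which is how rule violations are pruned."""
    dist = {(source, start_label): 0}
    heap = [(0, source, start_label)]
    while heap:
        d, u, label = heapq.heappop(heap)
        if u == target:
            return d                  # first settled target state is optimal
        if d > dist[(u, label)]:
            continue                  # stale heap entry
        for v, cost in edges.get(u, []):
            new_label = transitions.get((label, v))
            if new_label is None:
                continue              # forbidden transition: would break a rule
            state, nd = (v, new_label), d + cost
            if nd < dist.get(state, float("inf")):
                dist[state] = nd
                heapq.heappush(heap, (nd, v, new_label))
    return None

# Toy instance: the cheap path s-a-t is ruled out because entering t with the
# label acquired at a is not allowed, so the search falls back to s-b-t.
edges = {"s": [("a", 1), ("b", 1)], "a": [("t", 1)], "b": [("t", 5)]}
transitions = {(0, "a"): 1, (0, "b"): 2, (2, "t"): 2}
print(multi_label_dijkstra(edges, transitions, "s", "t", 0))  # 6
```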

    Proceedings of SAT Competition 2018 : Solver and Benchmark Descriptions

    Non peer reviewed

    Dagstuhl Reports : Volume 1, Issue 2, February 2011

    Online Privacy: Towards Informational Self-Determination on the Internet (Dagstuhl Perspectives Workshop 11061): Simone Fischer-Hübner, Chris Hoofnagle, Kai Rannenberg, Michael Waidner, Ioannis Krontiris and Michael Marhöfer
    Self-Repairing Programs (Dagstuhl Seminar 11062): Mauro Pezzè, Martin C. Rinard, Westley Weimer and Andreas Zeller
    Theory and Applications of Graph Searching Problems (Dagstuhl Seminar 11071): Fedor V. Fomin, Pierre Fraigniaud, Stephan Kreutzer and Dimitrios M. Thilikos
    Combinatorial and Algorithmic Aspects of Sequence Processing (Dagstuhl Seminar 11081): Maxime Crochemore, Lila Kari, Mehryar Mohri and Dirk Nowotka
    Packing and Scheduling Algorithms for Information and Communication Services (Dagstuhl Seminar 11091): Klaus Jansen, Claire Mathieu, Hadas Shachnai and Neal E. Young

    New Techniques for Polynomial System Solving

    Since any encryption map may be viewed as a polynomial map between finite-dimensional vector spaces over finite fields, the security of a cryptosystem can be examined by studying the difficulty of solving large systems of multivariate polynomial equations. Therefore, algebraic attacks lead to the task of solving polynomial systems over finite fields. In this thesis, we study several new algebraic techniques for polynomial system solving over finite fields, especially over the finite field with two elements. Instead of using traditional Gröbner basis techniques, we focus on highly developed methods from several other areas, such as linear algebra, discrete optimization, numerical analysis, and number theory. We study some techniques from combinatorial optimization to transform a polynomial system solving problem into a (sparse) linear algebra problem. We highlight two new kinds of hybrid techniques. The first kind combines the concept of transforming combinatorial infeasibility proofs into large systems of linear equations with the concept of mutants (finding special lower-degree polynomials). The second kind uses the concept of mutants to optimize the Border Basis Algorithm. We study recent suggestions for transferring a system of polynomial equations over the finite field with two elements into a system of polynomial equalities and inequalities over the integers (respectively over the reals). In particular, we develop several techniques and strategies for converting the polynomial system of equations over the field with two elements into a polynomial system of equalities and inequalities over the reals (respectively over the integers). This enables us to make use of several algorithms from discrete optimization and number theory, and also to investigate the use of numerical analysis techniques such as homotopy continuation methods and Newton's method. In each case several conversion techniques have been developed, optimized, and implemented. Finally, the efficiency of the developed techniques and strategies is examined using standard cryptographic examples such as CTC and HFE. Our experimental results show that most of the techniques developed are highly competitive with state-of-the-art algebraic techniques.
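    One of the conversions described above can be sketched in a few lines: over the field with two elements every variable satisfies x^2 = x, so the variables may be treated as 0/1 integers, and an F2 equation becomes an exact integer equation once a slack variable absorbs the even part of the sum. The encoding below (monomials as sets of variable indices, constraints emitted as strings) is our own illustration, not the thesis's implementation.

```python
def convert_gf2_equation(monomials, rhs, eq_index):
    """monomials: iterable of frozensets of variable indices (a monomial is a
    product of distinct variables, since x*x = x on 0/1 values).
    Over F2:  sum of monomials = rhs (mod 2).
    Over Z:   sum of monomials - 2*k = rhs, with 0 <= k <= len(monomials)//2,
    together with x_i in {0, 1}, i.e. x_i**2 - x_i = 0 for every variable."""
    monomials = list(monomials)
    terms = ["*".join(f"x{i}" for i in sorted(m)) or "1" for m in monomials]
    slack = f"k{eq_index}"
    equation = f"{' + '.join(terms)} - 2*{slack} = {rhs}"
    bounds = f"0 <= {slack} <= {len(monomials) // 2}"
    return equation, bounds

# x0*x1 + x0 + x1 = 1 (mod 2)  becomes an exact equation over the integers:
eq, bnd = convert_gf2_equation(
    [frozenset({0, 1}), frozenset({0}), frozenset({1})], rhs=1, eq_index=0)
print(eq)   # x0*x1 + x0 + x1 - 2*k0 = 1
print(bnd)  # 0 <= k0 <= 1
```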

    36th International Symposium on Theoretical Aspects of Computer Science: STACS 2019, March 13-16, 2019, Berlin, Germany


    Design and Analysis of Opaque Signatures

    Digital signatures were introduced to guarantee the authenticity and integrity of the underlying messages. A digital signature scheme comprises the key generation, the signature, and the verification algorithms. The key generation algorithm creates the signing and the verifying keys, also called the signer's private and public keys respectively. The signature algorithm, which is run by the signer, produces a signature on the input message. Finally, the verification algorithm, run by anyone who knows the signer's public key, checks whether a purported signature on some message is valid or not. The last property, namely the universal verification of digital signatures, is undesirable in situations where the signed data is commercially or personally sensitive. Therefore, mechanisms which share most properties with digital signatures except for universal verification were invented to respond to this need; we call such mechanisms "opaque signatures". In this thesis, we study signatures whose verification cannot be achieved without the cooperation of a specific entity, namely the signer in the case of undeniable signatures, or the confirmer in the case of confirmer signatures; we make three main contributions. We first study the relationship between two security properties important for public key encryption, namely data privacy and key privacy. Our study is motivated by the fact that opaque signatures always involve an encryption layer that ensures their opacity. The properties required for this encryption vary according to whether we want to protect the identity (i.e. the key) of the signer or hide the validity of the signature. Therefore, it would be convenient to use existing work on the encryption scheme in order to derive one notion from the other. Next, we delve into generic constructions of confirmer signatures from basic cryptographic primitives, e.g. digital signatures, encryption, or commitment schemes. In fact, generic constructions give easy-to-understand and easy-to-prove schemes; however, this convenience is often achieved at the expense of efficiency. In this contribution, which constitutes the core of this thesis, we first analyze the already existing constructions; our study concludes that the popular generic constructions of confirmer signatures necessitate strong security assumptions on the building blocks, which impacts negatively the efficiency of the resulting signatures. Next, we show that a small change in these constructions makes these assumptions drop drastically, allowing as a result constructions with instantiations that compete with the dedicated realizations of these signatures. Finally, we revisit two early undeniable signatures which were proposed with conjectural security. We disprove the claimed security of the first scheme and provide a fix to it in order to achieve strong security properties. Next, we upgrade the second scheme so that it supports a desirable feature, and we provide a formal security treatment of the new scheme: we prove that it is secure assuming new reasonable assumptions on the underlying constituents.
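    For contrast with the opaque signatures studied here, the following minimal sketch (using the third-party Python cryptography package, with Ed25519 as an arbitrary choice of scheme) shows the three algorithms of an ordinary scheme; the final step can be run by anyone holding the public key, which is precisely the universal verifiability that undeniable and confirmer signatures remove in favor of interaction with the signer or confirmer.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key generation: the signer's private (signing) and public (verifying) keys.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Signature algorithm, run by the signer.
message = b"commercially sensitive agreement"
signature = private_key.sign(message)

# Verification: anyone who knows the public key can run this step; an
# undeniable or confirmer signature would instead require cooperation of
# the signer or confirmer here.
try:
    public_key.verify(signature, message)
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```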

    Quantum Algorithms for Scientific Computing and Approximate Optimization

    Quantum computation appears to offer significant advantages over classical computation, and this has generated tremendous interest in the field. In this thesis we study the application of quantum computers to computational problems in science and engineering, and to combinatorial optimization problems. We outline the results below. Algorithms for scientific computing require modules, i.e., building blocks, implementing elementary numerical functions that have well-controlled numerical error, are uniformly scalable and reversible, and can be implemented efficiently. We derive quantum algorithms and circuits for computing square roots, logarithms, and arbitrary fractional powers, and derive worst-case error and cost bounds. We describe a modular approach to quantum algorithm design as a first step towards numerical standards and mathematical libraries for quantum scientific computing. A fundamental but computationally hard problem in physics is to solve the time-independent Schrödinger equation. This is accomplished by computing the eigenvalues of the corresponding Hamiltonian operator, which describe the different energy levels of a system. The cost of classical deterministic algorithms computing these eigenvalues grows exponentially with the number of system degrees of freedom, which is typically proportional to the number of particles in a physical system. We show an efficient quantum algorithm for approximating a constant number of low-order eigenvalues of a Hamiltonian using a perturbation approach. We apply this algorithm to a special case of the Schrödinger equation and show that our algorithm succeeds with high probability, and has cost that scales polynomially with the number of degrees of freedom and the reciprocal of the desired accuracy. This improves and extends earlier results on quantum algorithms for estimating the ground state energy. We then consider the simulation of quantum mechanical systems on a quantum computer and show a novel divide-and-conquer approach for Hamiltonian simulation. Using the structure of the Hamiltonian, we can obtain faster simulation algorithms: considering a sum of Hamiltonians, we split the summands into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yields refined bounds on the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under mild assumptions. We then turn to combinatorial optimization problems. An important open question is whether quantum computers provide advantages for the approximation of classically hard combinatorial problems. A promising recently proposed approach of Farhi et al. is the Quantum Approximate Optimization Algorithm (QAOA). We study the application of QAOA to the Maximum Cut problem and derive analytic performance bounds for the lowest circuit-depth realization, for both general and special classes of graphs. Along the way, we develop a general procedure for analyzing the performance of QAOA for other problems, and show an example demonstrating the difficulty of obtaining similar results for greater depth. We show a generalization of QAOA and its application to wider classes of combinatorial optimization problems, in particular problems with feasibility constraints.
    We introduce the Quantum Alternating Operator Ansatz, which utilizes more general unitary operators than the original QAOA proposal. Our framework facilitates low-resource implementations for many applications, which may be particularly suitable for early quantum computers. We specify design criteria and develop a set of results and tools for mapping diverse problems to explicit quantum circuits. We derive constructions for several important prototypical problems, including Maximum Independent Set, Graph Coloring, and the Traveling Salesman Problem, and show appealing resource cost estimates for their implementations.
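    The depth-1 QAOA setting analyzed above can be illustrated numerically: the state e^{-i beta B} e^{-i gamma C} |+>^n is built by explicit state-vector simulation and the expected cut value is read off. This brute-force sketch is our own illustration of the standard QAOA construction, not the thesis's analytic treatment, and is feasible only for small n.

```python
import numpy as np
from itertools import product

def qaoa_p1_expected_cut(n, edges, gamma, beta):
    """Expected cut value of the depth-1 QAOA state for MaxCut."""
    # The cost operator C is diagonal: cut value of each basis bitstring.
    cut = np.array([sum(z[i] != z[j] for i, j in edges)
                    for z in product((0, 1), repeat=n)], dtype=float)
    state = np.full(2 ** n, 2.0 ** (-n / 2), dtype=complex)  # uniform |+>^n
    state *= np.exp(-1j * gamma * cut)                       # phase operator e^{-i*gamma*C}
    # Mixer e^{-i*beta*X} applied to every qubit, as a Kronecker product.
    rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                   [-1j * np.sin(beta), np.cos(beta)]])
    mixer = np.array([[1.0 + 0j]])
    for _ in range(n):
        mixer = np.kron(mixer, rx)
    state = mixer @ state
    return float(np.real(np.sum(np.abs(state) ** 2 * cut)))

# Triangle graph: optimum cut is 2; random guessing yields 1.5 in expectation.
triangle = [(0, 1), (1, 2), (0, 2)]
angles = np.linspace(0, np.pi, 50)
best = max(qaoa_p1_expected_cut(3, triangle, g, b)
           for g in angles for b in angles)
print(f"best depth-1 expected cut found: {best:.3f}")  # noticeably above 1.5
```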