    Improved Exact Algorithms for Mildly Sparse Instances of Max SAT

    We present improved exponential-time exact algorithms for Max SAT. Our algorithms run in time of the form O(2^{(1-mu(c))n}) for instances with n variables and m=cn clauses. In this setting, there are three mutually incomparable best known algorithms: a deterministic exponential space algorithm with mu(c)=1/O(c * log(c)) due to Dantsin and Wolpert [SAT 2006], a randomized polynomial space algorithm with mu(c)=1/O(c * log^3(c)), and a deterministic polynomial space algorithm with mu(c)=1/O(c^2 * log^2(c)) due to Sakai, Seto and Tamaki [Theory Comput. Syst., 2015]. Our first result is a deterministic polynomial space algorithm with mu(c)=1/O(c * log(c)) that achieves the previous best time complexity without exponential space or randomization. Furthermore, this algorithm can handle instances with exponentially large weights and hard constraints. The previous algorithms and our deterministic polynomial space algorithm run super-polynomially faster than 2^n only if m=O(n^2). Our second set of results consists of deterministic exponential space algorithms for Max SAT with mu(c)=1/O((c * log(c))^{2/3}) and for Max 3-SAT with mu(c)=1/O(c^{1/2}), which run super-polynomially faster than 2^n when m=o(n^{5/2}/log^{5/2}(n)) and m=o(n^3/log^2(n)), respectively.
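
    To make the trade-off above concrete, here is a minimal Python sketch comparing the exponents (1-mu(c))*n of the quoted running times; the constant K hidden inside the O(.) notation is an illustrative placeholder, not a value taken from the papers.

        import math

        def exponent(n, c, mu):
            # Exponent of an O(2^{(1 - mu(c)) n}) running time.
            return (1.0 - mu(c)) * n

        # Savings functions quoted above; K stands in for the unknown constants
        # hidden inside O(.) (an assumption made purely for illustration).
        K = 1.0
        savings = {
            "exp. space (Dantsin-Wolpert)":     lambda c: 1.0 / (K * c * math.log(c)),
            "randomized poly. space":           lambda c: 1.0 / (K * c * math.log(c) ** 3),
            "deterministic poly. space (old)":  lambda c: 1.0 / (K * c ** 2 * math.log(c) ** 2),
            "deterministic poly. space (new)":  lambda c: 1.0 / (K * c * math.log(c)),
            "exp. space (new, Max SAT)":        lambda c: 1.0 / (K * (c * math.log(c)) ** (2.0 / 3.0)),
        }

        n, c = 1000, 8.0    # e.g. 8000 clauses over 1000 variables
        for name, mu in savings.items():
            print(f"{name:34s} exponent ~ {exponent(n, c, mu):7.1f}")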

    Separate, measure and conquer: faster polynomial-space algorithms for Max 2-CSP and counting dominating sets

    We show a method resulting in the improvement of several polynomial-space, exponential-time algorithms. The method capitalizes on the existence of small balanced separators for sparse graphs, which can be exploited for branching to disconnect an instance into independent components. For this algorithm design paradigm, the challenge to date has been to obtain improvements in worst-case analyses of algorithms, compared with algorithms that are analyzed with advanced methods, such as Measure and Conquer. Our contribution is the design of a general method to integrate the advantage from the separator-branching into Measure and Conquer, for an improved running time analysis. We illustrate the method with improved algorithms for Max (r,2)-CSP and #Dominating Set. For Max (r,2)-CSP instances with domain size r and m constraints, the running time improves from r^{m/6} to r^{m/7.5} for cubic instances and from r^{0.19m} to r^{0.18m} for general instances, omitting subexponential factors. For #Dominating Set instances with n vertices, the running time improves from 1.4143^n to 1.2458^n for cubic instances and from 1.5673^n to 1.5183^n for general instances. It is likely that other algorithms relying on local transformations can be improved using our method, which exploits a non-local property of graphs.
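
    The following minimal Python sketch illustrates only the branching structure described above for Max (r,2)-CSP: fix the values of a separator, split the remaining constraint graph into connected components, and solve the components independently. The separator heuristic and the brute-force search are illustrative placeholders; the paper's Measure and Conquer analysis is not reproduced here.

        from itertools import product

        def components(free, constraints):
            # Connected components of the constraint graph induced on `free`.
            adj = {v: set() for v in free}
            for (u, v), _ in constraints:
                if u in adj and v in adj:
                    adj[u].add(v); adj[v].add(u)
            seen, comps = set(), []
            for s in free:
                if s in seen:
                    continue
                comp, stack = set(), [s]
                while stack:
                    x = stack.pop()
                    if x in seen:
                        continue
                    seen.add(x); comp.add(x)
                    stack.extend(adj[x] - comp)
                comps.append(comp)
            return comps

        def choose_separator(free, constraints):
            # Stand-in for a real balanced-separator routine (an assumption):
            # simply branch on one highest-degree free variable.
            deg = {v: 0 for v in free}
            for (u, v), _ in constraints:
                if u in deg: deg[u] += 1
                if v in deg: deg[v] += 1
            return [max(free, key=deg.get)]

        def solve(variables, constraints, r, fixed=None):
            # Maximize the total score of binary constraints over domain {0..r-1}.
            fixed = dict(fixed or {})
            done = [c for c in constraints if c[0][0] in fixed and c[0][1] in fixed]
            rest = [c for c in constraints if c not in done]
            score = sum(f(fixed[u], fixed[v]) for (u, v), f in done)
            free = [v for v in variables if v not in fixed]
            if not free:
                return score
            comps = components(free, rest)
            if len(comps) > 1:
                # Independent components: solve separately and add up the optima.
                return score + sum(solve(sorted(c), rest, r, fixed) for c in comps)
            sep = choose_separator(free, rest)
            best = float("-inf")
            for values in product(range(r), repeat=len(sep)):
                branch = dict(fixed); branch.update(zip(sep, values))
                best = max(best, solve(free, rest, r, branch))
            return score + best

        # Toy instance: a 4-cycle with "not equal" constraints over domain {0,1}.
        neq = lambda a, b: int(a != b)
        cons = [((0, 1), neq), ((1, 2), neq), ((2, 3), neq), ((3, 0), neq)]
        print(solve([0, 1, 2, 3], cons, 2))   # expect 4: the 4-cycle is 2-colorable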

    Mapping constrained optimization problems to quantum annealing with application to fault diagnosis

    Current quantum annealing (QA) hardware suffers from practical limitations such as finite temperature, sparse connectivity, small qubit numbers, and control error. We propose new algorithms for mapping boolean constraint satisfaction problems (CSPs) onto QA hardware that mitigate these limitations. In particular, we develop a new embedding algorithm for mapping a CSP onto a hardware Ising model with a fixed sparse set of interactions, and propose two new decomposition algorithms for solving problems too large to map directly into hardware. The mapping technique is locally structured: hardware-compatible Ising models are generated for each problem constraint, and variables appearing in different constraints are chained together using ferromagnetic couplings. In contrast, global embedding techniques generate a hardware-independent Ising model for all the constraints and then use a minor-embedding algorithm to produce a hardware-compatible Ising model. We give an example of a class of CSPs for which the scaling performance of D-Wave's QA hardware using the local mapping technique is significantly better than with global embedding. We validate the approach by applying D-Wave's hardware to circuit-based fault diagnosis. For circuits that embed directly, we find that the hardware is typically able to find all solutions from a min-fault diagnosis set of size N using 1000N samples, at an annealing rate 25 times faster than a leading SAT-based sampling method. Further, we apply the decomposition algorithms to find min-cardinality faults for circuits that are up to 5 times larger than can be solved directly on current hardware.
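
    A minimal Python sketch of the locally-structured mapping described above, under the assumption that each constraint already comes with its own small Ising penalty model over local qubit copies; copies of the same logical variable that appear in different constraint models are then chained together with ferromagnetic couplings. The chain strength j_chain and the toy models below are illustrative placeholders, not values or models from the paper.

        def assemble_ising(constraint_models, j_chain=2.0):
            # constraint_models: list of (h_c, J_c, var_of_copy) where
            #   h_c:         {qubit copy: local field}
            #   J_c:         {(copy, copy): coupling}
            #   var_of_copy: {qubit copy: logical CSP variable}
            h, J, copies = {}, {}, {}
            for h_c, J_c, var_of_copy in constraint_models:
                for q, field in h_c.items():
                    h[q] = h.get(q, 0.0) + field
                for e, coupling in J_c.items():
                    J[e] = J.get(e, 0.0) + coupling
                for q, v in var_of_copy.items():
                    copies.setdefault(v, []).append(q)
            # Ferromagnetic chains: a negative coupling lowers the energy when
            # copies of the same logical variable take the same spin value.
            for v, qs in copies.items():
                for a, b in zip(qs, qs[1:]):
                    J[(a, b)] = J.get((a, b), 0.0) - j_chain
            return h, J

        # Toy example: two constraint models sharing the logical variable "x",
        # which therefore gets a chain between qubit copies 1 and 2.
        models = [({0: 0.5, 1: -1.0}, {(0, 1): 1.0}, {0: "w", 1: "x"}),
                  ({2: -1.0, 3: 0.5}, {(2, 3): 1.0}, {2: "x", 3: "y"})]
        print(assemble_ising(models))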

    08431 Abstracts Collection -- Moderately Exponential Time Algorithms

    From 19/10/2008 to 24/10/2008, the Dagstuhl Seminar 08431 "Moderately Exponential Time Algorithms" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    From the Ising and Potts models to the general graph homomorphism polynomial

    In this note we study some of the properties of the generating polynomial for homomorphisms from a graph to a complete weighted graph on q vertices. We discuss how this polynomial relates to a long list of other well-known graph polynomials and to the partition functions of different spin models, many of which are specialisations of the homomorphism polynomial. We also identify the smallest graphs which are not determined by their homomorphism polynomials for q=2 and q=3, and compare this with the corresponding minimal examples for the U-polynomial, which generalizes the well-known Tutte polynomial.
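
    For concreteness, a brute-force Python sketch of the partition function that such a homomorphism polynomial packages, assuming the standard convention in which each edge (u,v) of the source graph contributes the weight of the target edge (sigma(u), sigma(v)); the Ising-style weights below are only an illustrative choice.

        from itertools import product
        import math

        def hom_partition_function(edges, n_vertices, W):
            # Sum over all maps sigma: {0..n-1} -> {0..q-1} of the product of
            # target-edge weights W[sigma(u)][sigma(v)] (brute force, O(q^n)).
            q = len(W)
            total = 0.0
            for sigma in product(range(q), repeat=n_vertices):
                weight = 1.0
                for u, v in edges:
                    weight *= W[sigma[u]][sigma[v]]
                total += weight
            return total

        # Ising-like target on q = 2 vertices: weight exp(beta) when the two
        # endpoints agree and exp(-beta) when they disagree.
        beta = 0.5
        W = [[math.exp(beta), math.exp(-beta)],
             [math.exp(-beta), math.exp(beta)]]
        triangle = [(0, 1), (1, 2), (0, 2)]
        print(hom_partition_function(triangle, 3, W))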

    qTorch: The Quantum Tensor Contraction Handler

    Classical simulation of quantum computation is necessary for studying the numerical behavior of quantum algorithms, as there does not yet exist a large viable quantum computer on which to perform numerical tests. Tensor network (TN) contraction is an algorithmic method that can efficiently simulate some quantum circuits, often greatly reducing the computational cost over methods that simulate the full Hilbert space. In this study we implement a tensor network contraction program for simulating quantum circuits using multi-core compute nodes. We show simulation results for the Max-Cut problem on 3- through 7-regular graphs using the quantum approximate optimization algorithm (QAOA), successfully simulating up to 100 qubits. We test two different methods for generating the ordering of tensor index contractions: one is based on the tree decomposition of the line graph, while the other generates the ordering using a straightforward stochastic scheme. Through studying instances of QAOA circuits, we show the expected result that as the treewidth of the quantum circuit's line graph decreases, TN contraction becomes significantly more efficient than simulating the whole Hilbert space. The results in this work suggest that tensor contraction methods are superior only when simulating Max-Cut/QAOA on regular graphs of degree approximately five and below. Insight into this point of equal computational cost helps one determine which simulation method will be more efficient for a given quantum circuit. The stochastic contraction method outperforms the line-graph-based method only when the time to calculate a reasonable tree decomposition is prohibitively expensive. Finally, we release our software package, qTorch (Quantum TensOR Contraction Handler), intended for general quantum circuit simulation.
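
    As a toy illustration (assuming NumPy) of why the contraction ordering matters: the same three-tensor network contracted in two different orders gives identical results but very different intermediate-tensor sizes. This is only a stand-in for the line-graph tree-decomposition ordering described above, which chooses an order that keeps intermediate tensors small.

        import numpy as np

        # A small tensor network A[i,j] B[j,k] C[k,l] with one large bond j.
        d_small, d_large = 4, 256
        rng = np.random.default_rng(0)
        A = rng.random((d_small, d_large))
        B = rng.random((d_large, d_small))
        C = rng.random((d_small, d_small))

        # Good order: sum out the large index j first; the intermediate (A @ B)
        # is only d_small x d_small.
        good = (A @ B) @ C

        # Bad order: contract k first, leaving a d_large x d_small intermediate
        # (B @ C) that still carries the large index j.
        bad = A @ (B @ C)

        assert np.allclose(good, bad)   # same tensor, different peak memory and work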

    Lower bounds on the size of semidefinite programming relaxations

    We introduce a method for proving lower bounds on the efficacy of semidefinite programming (SDP) relaxations for combinatorial problems. In particular, we show that the cut, TSP, and stable set polytopes on n-vertex graphs are not the linear image of the feasible region of any SDP (i.e., any spectrahedron) of dimension less than 2^{n^c}, for some constant c > 0. This result yields the first super-polynomial lower bounds on the semidefinite extension complexity of any explicit family of polytopes. Our results follow from a general technique for proving lower bounds on the positive semidefinite rank of a matrix. To this end, we establish a close connection between arbitrary SDPs and those arising from the sum-of-squares SDP hierarchy. For approximating maximum constraint satisfaction problems, we prove that polynomial-size SDPs are equivalent in power to those arising from degree-O(1) sum-of-squares relaxations. This result implies, for instance, that no family of polynomial-size SDP relaxations can achieve better than a 7/8-approximation for MAX-3-SAT.
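
    For reference, the standard definition of the quantity targeted by these lower bounds (stated here for convenience, in the usual notation rather than the paper's): the positive semidefinite rank of an entrywise nonnegative matrix M is

        \[
            \operatorname{rank}_{\mathrm{psd}}(M) \;=\; \min\bigl\{\, r \;:\; M_{ij} = \operatorname{Tr}(A_i B_j)
            \ \text{for some } r \times r \text{ PSD matrices } A_i, B_j \,\bigr\}.
        \]

    A lower bound on the psd rank of a polytope's slack matrix translates, via the semidefinite analogue of Yannakakis's factorization theorem, into a lower bound on the size of any spectrahedral lift of that polytope.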