
    Exponential Time Complexity of the Permanent and the Tutte Polynomial

    We show conditional lower bounds for well-studied #P-hard problems: (a) The number of satisfying assignments of a 2-CNF formula with n variables cannot be counted in time exp(o(n)), and the same is true for computing the number of all independent sets in an n-vertex graph. (b) The permanent of an n x n matrix with entries 0 and 1 cannot be computed in time exp(o(n)). (c) The Tutte polynomial of an n-vertex graph cannot be computed in time exp(o(n)) at most evaluation points (x,y) in the case of multigraphs, and it cannot be computed in time exp(o(n/polylog n)) in the case of simple graphs. Our lower bounds are relative to (variants of) the Exponential Time Hypothesis (ETH), which says that the satisfiability of n-variable 3-CNF formulas cannot be decided in time exp(o(n)). We relax this hypothesis by introducing its counting version #ETH, namely that the satisfying assignments of n-variable 3-CNF formulas cannot be counted in time exp(o(n)). In order to use #ETH for our lower bounds, we transfer the sparsification lemma for d-CNF formulas to the counting setting.
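
    The exp(o(n)) lower bound in (b) is complemented by a classical exp(O(n))-time upper bound: Ryser's inclusion-exclusion formula evaluates the permanent of an n x n matrix in O(2^n * n^2) arithmetic operations. The following Python sketch of Ryser's formula is illustrative background only, not part of the paper:

    ```python
    from itertools import combinations

    def permanent_ryser(a):
        """Permanent of an n x n matrix via Ryser's inclusion-exclusion
        formula, using O(2^n * n^2) arithmetic operations -- the classical
        exp(O(n)) upper bound complementing the exp(o(n)) lower bound."""
        n = len(a)
        total = 0
        # Sum over all non-empty subsets of column indices; the empty
        # subset contributes a product of empty sums, i.e. zero.
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1
                for row in a:
                    prod *= sum(row[j] for j in cols)
                total += (-1) ** r * prod
        return (-1) ** n * total

    # Example: the all-ones 3 x 3 matrix has permanent 3! = 6.
    print(permanent_ryser([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))
    ```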

    Exponential Time Complexity of Weighted Counting of Independent Sets

    We consider weighted counting of independent sets using a rational weight x: Given a graph with n vertices, count its independent sets such that each set of size k contributes x^k. This is equivalent to computing the partition function of the lattice gas with hard-core self-repulsion and hard-core pair interaction. We show the following conditional lower bounds: If counting the satisfying assignments of a 3-CNF formula in n variables (#3SAT) needs time 2^{\Omega(n)} (i.e., there is a c>0 such that no algorithm can solve #3SAT in time 2^{cn}), then counting the independent sets of size n/3 of an n-vertex graph needs time 2^{\Omega(n)}, and weighted counting of independent sets needs time 2^{\Omega(n/log^3 n)} for all rational weights x\neq 0. We have two technical ingredients: The first is a reduction from 3SAT to independent sets that preserves the number of solutions and increases the instance size only by a constant factor. The second is a combination of vertex cloning and path addition. This graph transformation allows us to adapt a recent technique by Dell, Husfeldt, and Wahlen which enables interpolation by a family of reductions, each of which increases the instance size only polylogarithmically.
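
    Concretely, the quantity considered here is the independence polynomial (hard-core partition function) Z(G; x) = sum over independent sets S of x^{|S|}. A brute-force Python sketch of this definition, for illustration only (the point of the paper is precisely that no subexponential-time algorithm is expected):

    ```python
    from itertools import combinations

    def weighted_independent_sets(n, edges, x):
        """Hard-core partition function Z(G; x): every independent set of
        size k contributes x**k.  Brute force over all vertex subsets,
        exponential in n -- for illustrating the definition only."""
        forbidden = {frozenset(e) for e in edges}
        total = 0
        for k in range(n + 1):
            for subset in combinations(range(n), k):
                if all(frozenset(pair) not in forbidden
                       for pair in combinations(subset, 2)):
                    total += x ** k
        return total

    # Example: the 4-cycle has 7 independent sets (1 empty, 4 singletons,
    # 2 opposite pairs), so Z at weight x = 1 equals 7.
    print(weighted_independent_sets(4, [(0, 1), (1, 2), (2, 3), (3, 0)], 1))
    ```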

    Multi-core computation of transfer matrices for strip lattices in the Potts model

    The transfer-matrix technique is a convenient way of studying strip lattices in the Potts model, since the computational cost depends only on the periodic part of the lattice and not on the whole. However, even with this reduction, computing the transfer matrix remains an NP-hard problem, since the time T(|V|, |E|) needed to compute the matrix grows exponentially as a function of the graph width. In this work, we present a parallel transfer-matrix implementation that scales performance on multi-core architectures. The construction of the matrix is based on repeated applications of the deletion-contraction technique, allowing parallelism suitable for multi-core machines. Our experimental results show that the multi-core implementation achieves speedups of 3.7X with p = 4 processors and 5.7X with p = 8. The efficiency of the implementation lies between 60% and 95%, achieving the best balance of speedup and efficiency at p = 4 processors for current multi-core architectures. The algorithm also takes advantage of the lattice symmetry, making the transfer-matrix computation run up to 2X faster than its non-symmetric counterpart and use up to a quarter of the original space.
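
    The deletion-contraction step that the implementation parallelizes can be stated for the Potts partition function Z(G; q, v) = sum over edge subsets A of q^{k(A)} v^{|A|}: for any non-loop edge e, Z(G) = Z(G - e) + v * Z(G / e). A sequential Python sketch of this recursion, for illustration only (the paper's contribution is organizing such steps into a parallel transfer-matrix construction, which is not reproduced here):

    ```python
    def potts_partition(n_vertices, edges, q, v):
        """Potts-model partition function Z(G; q, v), computed by the
        deletion-contraction recursion Z(G) = Z(G - e) + v * Z(G / e).
        Exponential time in general -- a sequential sketch of the step
        that the parallel implementation repeats."""
        if not edges:
            # Edgeless graph: every vertex is its own component.
            return q ** n_vertices
        (u, w), rest = edges[0], edges[1:]
        if u == w:
            # Loop: deletion and contraction coincide.
            return (1 + v) * potts_partition(n_vertices, rest, q, v)
        # Delete e ...
        deleted = potts_partition(n_vertices, rest, q, v)
        # ... and contract e by merging w into u (parallel edges may become loops).
        merged = [(u if a == w else a, u if b == w else b) for (a, b) in rest]
        contracted = potts_partition(n_vertices - 1, merged, q, v)
        return deleted + v * contracted

    # Example: a single edge on two vertices gives Z = q^2 + q*v.
    print(potts_partition(2, [(0, 1)], 3, 1))  # prints 12 = 3^2 + 3
    ```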

    Sequential Importance Sampling Algorithms for Estimating the All-Terminal Reliability Polynomial of Sparse Graphs

    The all-terminal reliability polynomial of a graph counts its connected subgraphs of various sizes. Algorithms based on sequential importance sampling (SIS) have been proposed to estimate a graph's reliability polynomial. We show upper bounds on the relative error of three sequential importance sampling algorithms. We use these to create a hybrid algorithm, which selects the best SIS algorithm for a particular graph G and particular coefficient of the polynomial. This hybrid algorithm is particularly effective when G has low degree. For graphs of average degree < 11, it is the fastest known algorithm; for graphs of average degree <= 45, it is the fastest known polynomial-space algorithm. For example, when a graph has average degree 3, this algorithm estimates to error epsilon in time O(1.26^n * epsilon^{-2}). Although the algorithm may take exponential time, in practice it can have good performance even on medium-scale graphs. We provide experimental results that show quite practical performance on graphs with hundreds of vertices and thousands of edges. By contrast, alternative algorithms are either not rigorous or are completely impractical for such large graphs.
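
    The coefficients being estimated count spanning connected subgraphs by their number of edges; with N_k connected subgraphs on k edges and m = |E|, the all-terminal reliability polynomial is Rel(G; p) = sum over k of N_k p^k (1-p)^{m-k}. A brute-force Python sketch of these coefficients, for illustration only (the SIS algorithms in the paper estimate them for graphs far beyond brute-force reach):

    ```python
    from itertools import combinations

    def connected_subgraph_counts(n, edges):
        """N[k] = number of spanning connected subgraphs with exactly k
        edges, i.e. the coefficients behind the all-terminal reliability
        polynomial.  Brute force over all edge subsets -- exponential in
        |E|, for illustrating what the SIS algorithms estimate."""
        def is_connected(subset):
            parent = list(range(n))
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            for u, v in subset:
                parent[find(u)] = find(v)
            return len({find(v) for v in range(n)}) == 1
        counts = [0] * (len(edges) + 1)
        for k in range(len(edges) + 1):
            for subset in combinations(edges, k):
                if is_connected(subset):
                    counts[k] += 1
        return counts

    # Example: the triangle has three connected 2-edge subgraphs and one
    # 3-edge subgraph, so this prints [0, 0, 3, 1].
    print(connected_subgraph_counts(3, [(0, 1), (1, 2), (2, 0)]))
    ```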

    Exact solutions for the two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan

    The two- and all-terminal reliabilities of the Brecht-Colbourn ladder and the generalized fan have been calculated exactly for arbitrary size as well as arbitrary individual edge and node reliabilities, using transfer matrices of dimension four at most. While the all-terminal reliabilities of these graphs are identical, the special case of identical edge (p) and node (ρ) reliabilities shows that their two-terminal reliabilities are quite distinct, as demonstrated by their generating functions and the locations of the zeros of the reliability polynomials, which undergo structural transitions at ρ = 1/2.
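
    As a generic illustration of the transfer-matrix approach (the explicit matrices of dimension at most four derived in the paper are not reproduced here), the reliability of the size-N member of such a recursive family is obtained by applying a fixed per-unit matrix N times to a boundary vector:

    ```python
    def transfer_matrix_evaluate(T, v_start, v_end, N):
        """Generic transfer-matrix evaluation: apply the per-unit matrix T
        to the boundary state vector N times, then contract with v_end.
        T, v_start and v_end are placeholders; the paper derives the actual
        matrices for the Brecht-Colbourn ladder and the generalized fan in
        terms of edge reliability p and node reliability rho."""
        state = list(v_start)
        for _ in range(N):
            state = [sum(T[i][j] * state[j] for j in range(len(state)))
                     for i in range(len(T))]
        return sum(ve * s for ve, s in zip(v_end, state))
    ```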