
    On the Limits of Sparsification

    Abstract. Impagliazzo, Paturi and Zane (JCSS 2001) proved a sparsification lemma for k-CNFs: every k-CNF is a sub-exponential size disjunction of k-CNFs with a linear number of clauses. This lemma has subsequently played a key role in the study of the exact complexity of the satisfiability problem. A natural question is whether an analogous structural result holds for CNFs or even for broader non-uniform classes such as constant-depth circuits or Boolean formulae. We prove a very strong negative result in this connection: for every superlinear function f(n), there are CNFs of size f(n) which cannot be written as a disjunction of 2^{n-εn} CNFs each having a linear number of clauses, for any ε > 0. We also give a hierarchy of such non-sparsifiable CNFs: for every k, there is a k' for which there are CNFs of size n^{k'} which cannot be written as a sub-exponential size disjunction of CNFs of size n^k. Furthermore, our lower bounds hold not just against CNFs but against an arbitrary family ...
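
    For reference, the sparsification lemma in its usual form (a standard formulation; the exact quantifiers and constants are an assumption here, not quoted from this abstract):

        For every fixed k and every ε > 0 there is a constant c = c(k, ε) such that every k-CNF F
        over n variables can be written as F ≡ F_1 ∨ F_2 ∨ ... ∨ F_t with t ≤ 2^{εn}, where each F_i
        is a k-CNF over the same variables with at most c·n clauses; moreover, the F_i can be
        enumerated in time poly(n)·2^{εn}.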

    Relating the Time Complexity of Optimization Problems in Light of the Exponential-Time Hypothesis

    Obtaining lower bounds for NP-hard problems has for a long time been an active area of research. Recent algebraic techniques introduced by Jonsson et al. (SODA 2013) show that the time complexity of the parameterized SAT(·) problem correlates with the lattice of strong partial clones. With this ordering they isolated a relation R such that SAT(R) can be solved at least as fast as any other NP-hard SAT(·) problem. In this paper we extend this method and show that such languages also exist for the max ones problem (MaxOnes(Γ)) and the Boolean valued constraint satisfaction problem over finite-valued constraint languages (VCSP(Δ)). With the help of these languages we relate MaxOnes and VCSP to the exponential time hypothesis in several different ways. Comment: This is an extended version of "Relating the Time Complexity of Optimization Problems in Light of the Exponential-Time Hypothesis", appearing in Proceedings of the 39th International Symposium on Mathematical Foundations of Computer Science (MFCS 2014), Budapest, August 25-29, 2014.
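
    For reference, standard formulations of the two optimization problems mentioned above (paraphrased; the paper's exact conventions may differ slightly):

        MaxOnes(Γ): given Boolean variables x_1, ..., x_n and constraints R_j(x_{i_1}, ..., x_{i_r}) with each R_j ∈ Γ,
        maximize Σ_i x_i over all assignments x ∈ {0,1}^n that satisfy every constraint (or report that none exists).

        VCSP(Δ): given cost functions f_1, ..., f_m ∈ Δ applied to tuples of variables,
        find an assignment minimizing the total cost Σ_j f_j(x_{j_1}, ..., x_{j_r}).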

    Size-Treewidth Tradeoffs for Circuits Computing the Element Distinctness Function

    In this work we study the relationship between size and treewidth of circuits computing variants of the element distinctness function. First, we show that for each n, any circuit of treewidth t computing the element distinctness function delta_n:{0,1}^n -> {0,1} must have size at least Omega((n^2)/(2^{O(t)}*log(n))). This result provides a non-trivial generalization of a super-linear lower bound for the size of Boolean formulas (treewidth 1) due to Neciporuk. Subsequently, we turn our attention to read-once circuits, which are circuits where each variable labels at most one input vertex. For each n, we show that any read-once circuit of treewidth t and size s computing a variant tau_n:{0,1}^n -> {0,1} of the element distinctness function must satisfy the inequality t * log(s) >= Omega(n/log(n)). Using this inequality in conjunction with known results in structural graph theory, we show that for each fixed graph H, read-once circuits computing tau_n which exclude H as a minor must have size at least Omega(n^2/log^4(n)). For certain well-studied functions, such as the triangle-freeness function, this last lower bound can be improved to Omega(n^2/log^2(n)).
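
    To make the function concrete, here is a minimal Python sketch of one natural encoding of element distinctness; the block length and padding conventions used in the paper are assumptions here, not taken from the abstract.

        def element_distinctness(bits, block=None):
            """Return 1 if, after splitting `bits` into consecutive blocks of equal
            length, all blocks encode pairwise distinct values; return 0 otherwise.
            The default block length below is only illustrative; the paper's exact
            encoding may differ."""
            n = len(bits)
            if block is None:
                block = max(1, int(n ** 0.5))  # hypothetical default split
            assert n % block == 0, "input length must be a multiple of the block length"
            values = [tuple(bits[i:i + block]) for i in range(0, n, block)]
            return int(len(set(values)) == len(values))

        # Example: two blocks of length 4, (0,1,1,0) and (0,1,1,1), are distinct.
        print(element_distinctness([0, 1, 1, 0, 0, 1, 1, 1], block=4))  # prints 1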

    Why are CSPs Based on Partition Schemes Computationally Hard?

    Many computational problems arising in, for instance, artificial intelligence can be realized as infinite-domain constraint satisfaction problems (CSPs) based on partition schemes: a set of pairwise disjoint binary relations (containing the equality relation) whose union spans the underlying domain and which is closed under converse. We first consider partition schemes that contain a strict partial order and where the constraint language contains all unions of the basic relations; such CSPs occur frequently in, e.g., temporal and spatial reasoning. We identify three properties of such orders which, when combined, are sufficient to establish NP-hardness of the CSP. This result explains, in a uniform way, many existing hardness results from the literature. More importantly, this result enables us to prove that CSPs of this kind are not solvable in subexponential time unless the exponential-time hypothesis (ETH) fails. We continue by studying constraint languages based on partition schemes but where relations are built using disjunctions instead of unions; such CSPs appear naturally when analysing first-order definable constraint languages. We prove that such CSPs are NP-hard even in very restricted settings and that they are not solvable in subexponential time under the randomised ETH. In certain cases, we can additionally show that they cannot be solved in O(c^n) time for any c >= 0.
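
    A standard concrete example of a partition scheme (not specific to this paper): the point algebra. Over the domain D = ℚ, the basic relations B = {<, =, >} are pairwise disjoint, contain equality, their union is all of ℚ × ℚ, and B is closed under converse (the converse of < is >, and = is its own converse). Taking all unions of the basic relations yields ≤, ≥, ≠ and the trivial relation. The classical hard case of this kind is Allen's interval algebra, whose 13 basic relations between intervals form a partition scheme and whose CSP over all unions of basic relations is NP-complete.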

    Circuit Depth Reductions

    The best known size lower bounds against unrestricted circuits have remained around 3n for several decades. Moreover, the only known technique for proving lower bounds in this model, gate elimination, is inherently limited to proving lower bounds of less than 5n. In this work, we propose a non-gate-elimination approach for obtaining circuit lower bounds, via certain depth-three lower bounds. We prove that every (unbounded-depth) circuit of size s can be expressed as an OR of 2^{s/3.9} 16-CNFs. For DeMorgan formulas, the best known size lower bounds have been stuck at around n^{3-o(1)} for decades. Under a plausible hypothesis about probabilistic polynomials, we show that n^{4-ε}-size DeMorgan formulas have 2^{n^{1-Ω(ε)}}-size depth-3 circuits which are approximate sums of n^{1-Ω(ε)}-degree polynomials over F_2. While these structural results do not immediately lead to new lower bounds, they do suggest new avenues of attack on these longstanding lower bound problems. Our results complement the classical depth-3 reduction results of Valiant, which show that logarithmic-depth circuits of linear size can be computed by an OR of 2^{εn} n^δ-CNFs, and slightly stronger results for series-parallel circuits. It is known that no purely graph-theoretic reduction could yield interesting depth-3 circuits from circuits of super-logarithmic depth. We overcome this limitation (for small-size circuits) by taking into account both the graph-theoretic and functional properties of circuits and formulas. We show that improvements of the following pseudorandom constructions imply new circuit lower bounds: dispersers for varieties, correlation with constant degree polynomials, matrix rigidity, and hardness for depth-3 circuits with constant bottom fan-in.
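
    A one-line consequence of the OR-of-16-CNFs decomposition (a straightforward contrapositive, not a claim quoted from the abstract): if a function f on n bits cannot be written as an OR of 2^{δn} 16-CNFs for some δ > 0, then every circuit computing f has size greater than 3.9·δ·n. In this sense, sufficiently strong depth-3 lower bounds of this particular form would translate directly into lower bounds against unrestricted circuits.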

    Refining complexity analyses in planning by exploiting the exponential time hypothesis

    The use of computational complexity in planning, and in AI in general, has always been a disputed topic. A major problem with ordinary worst-case analyses is that they do not provide any quantitative information: they do not tell us much about the running time of concrete algorithms, nor do they tell us much about the running time of optimal algorithms. We address problems like this by presenting results based on the exponential time hypothesis (ETH), which is a widely accepted hypothesis concerning the time complexity of 3-SAT. By using this approach, we provide, for instance, almost matching upper and lower bounds on the time complexity of propositional planning. Funding agencies: National Graduate School in Computer Science (CUGS), Sweden; Swedish Research Council (VR) [621-2014-4086].
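
    For reference, one standard formulation of the exponential time hypothesis (not quoted from the paper): there is a constant δ > 0 such that no algorithm solves 3-SAT on n variables in time O(2^{δn}) (informally, 3-SAT has no subexponential-time algorithm).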

    On problems as hard as CNF-SAT

    The field of exact exponential time algorithms for non-deterministic polynomial-time hard problems has thrived since the mid-2000s. While exhaustive search remains asymptotically the fastest known algorithm for some basic problems, non-trivial exponential time algorithms have been found for a myriad of problems, including GRAPH COLORING, HAMILTONIAN PATH, DOMINATING SET, and 3-CNF-SAT. In some instances, improving these algorithms further seems to be out of reach. The CNF-SAT problem is the canonical example of a problem for which the trivial exhaustive search algorithm runs in time O(2^n), where n is the number of variables in the input formula. While there exist non-trivial algorithms for CNF-SAT that run in time o(2^n), no algorithm was able to improve the growth rate 2 to a smaller constant, and hence it is natural to conjecture that 2 is the optimal growth rate. The strong exponential time hypothesis (SETH) by Impagliazzo and Paturi [JCSS 2001] goes a little bit further and asserts that, for every ε < 1, there is a (large) integer k such that k-CNF-SAT cannot be computed in time 2^{εn}. In this article, we show that, for every ε < 1, the problems HITTING SET, SET SPLITTING, and NAE-SAT cannot be computed in time O(2^{εn}) unless SETH fails. Here n is the number of elements or variables in the input. For these problems, we actually get an equivalence to SETH in a certain sense. We conjecture that SETH implies a similar statement for SET COVER and prove that, under this assumption, the fastest known algorithms for STEINER TREE, CONNECTED VERTEX COVER, SET PARTITIONING, and the pseudo-polynomial time algorithm for SUBSET SUM cannot be significantly improved. Finally, we justify our assumption about the hardness of SET COVER by showing that the parity of the number of solutions to SET COVER cannot be computed in time O(2^{εn}) for any ε < 1 unless SETH fails.
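
    For reference, SETH in its usual quantitative form (standard statement, consistent with the phrasing above): writing s_k = inf{ δ : k-CNF-SAT on n variables is solvable in time O(2^{δn}) }, SETH asserts that lim_{k→∞} s_k = 1; equivalently, for every ε < 1 there is a k such that k-CNF-SAT cannot be solved in time O(2^{εn}).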

    More Consequences of Falsifying SETH and the Orthogonal Vectors Conjecture

    The Strong Exponential Time Hypothesis and the OV-conjecture are two popular hardness assumptions used to prove a plethora of lower bounds, especially in the realm of polynomial-time algorithms. The OV-conjecture in moderate dimension states there is no ε > 0 for which an O(N^{2-ε}) poly(D) time algorithm can decide whether there is a pair of orthogonal vectors in a given set of size N that contains D-dimensional binary vectors. We strengthen the evidence for these hardness assumptions. In particular, we show that if the OV-conjecture fails, then two problems for which we are far from obtaining even tiny improvements over exhaustive search would have surprisingly fast algorithms. If the OV-conjecture is false, then there is a fixed ε > 0 such that: (1) For all d and all large enough k, there is a randomized algorithm that takes O(n^{(1-ε)k}) time to solve the Zero-Weight-k-Clique and Min-Weight-k-Clique problems on d-hypergraphs with n vertices. As a consequence, the OV-conjecture is implied by the Weighted Clique conjecture. (2) For all c, the satisfiability of sparse TC^1 circuits on n inputs (that is, circuits with cn wires, depth c·log n, and negation, AND, OR, and threshold gates) can be computed in time O((2-ε)^n).
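
    To fix notation, a minimal Python sketch of the exhaustive search for Orthogonal Vectors; the OV-conjecture asserts that this O(N^2 · D) behaviour cannot be improved to O(N^{2-ε}) poly(D) (the function and example below are illustrative, not from the paper).

        from itertools import combinations

        def has_orthogonal_pair(vectors):
            """Return True iff some pair u, v of 0/1 vectors in `vectors` is
            orthogonal, i.e. u and v share no coordinate where both are 1.
            Exhaustive search over all pairs: O(N^2 * D) time."""
            for u, v in combinations(vectors, 2):
                if all(a * b == 0 for a, b in zip(u, v)):
                    return True
            return False

        # Example: the first and third vectors are orthogonal.
        print(has_orthogonal_pair([[1, 0, 1], [1, 1, 0], [0, 1, 0]]))  # prints True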