    Optimal Polynomial-Time Compression for Boolean Max CSP

    In the Boolean maximum constraint satisfaction problem - Max CSP(Γ) - one is given a collection of weighted applications of constraints from a finite constraint language Γ, over a common set of variables, and the goal is to assign Boolean values to the variables so that the total weight of satisfied constraints is maximized. There exists a concise dichotomy theorem providing a criterion on Γ for the problem to be polynomial-time solvable, and stating that otherwise it becomes NP-hard. We study the NP-hard cases through the lens of kernelization and provide a complete characterization of Max CSP(Γ) with respect to the optimal compression size. Namely, we prove that Max CSP(Γ) parameterized by the number of variables n is either polynomial-time solvable, or there exists an integer d ≥ 2 depending on Γ, such that: (1) an instance of Max CSP(Γ) can be compressed into an equivalent instance with O(n^d log n) bits in polynomial time; (2) Max CSP(Γ) does not admit such a compression to O(n^{d-ε}) bits unless NP ⊆ coNP/poly. Our reductions are based on interpreting constraints as multilinear polynomials combined with the framework of constraint implementations. As another application of our reductions, we reveal tight connections between optimal running times for solving Max CSP(Γ). More precisely, we show that obtaining a running time of the form O(2^{(1-ε)n}) for particular classes of Max CSPs is as hard as breaching this barrier for Max d-SAT for some d.
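
    To make the polynomial viewpoint concrete: every Boolean constraint on k variables equals a unique multilinear polynomial on {0,1}^k, and the degree of that polynomial is what determines the exponent d above. The sketch below (an illustration of the general idea, not the paper's construction) computes this polynomial by Möbius inversion; summing the weighted polynomials of all constraints yields one degree-d objective with O(n^d) monomials, which is where an O(n^d log n)-bit encoding comes from.

        from itertools import combinations

        def multilinear(constraint, k):
            # Coefficients of the unique multilinear polynomial agreeing with
            # `constraint` on {0,1}^k, via Moebius inversion:
            #   coeff(S) = sum over T subseteq S of (-1)^{|S|-|T|} f(1_T).
            coeff = {}
            for r in range(k + 1):
                for S in combinations(range(k), r):
                    c = 0
                    for r2 in range(len(S) + 1):
                        for T in combinations(S, r2):
                            x = [1 if i in T else 0 for i in range(k)]
                            c += (-1) ** (len(S) - len(T)) * constraint(*x)
                    if c:
                        coeff[frozenset(S)] = c
            return coeff

        # OR(x, y) = x + y - x*y, a degree-2 polynomial:
        print(multilinear(lambda x, y: x | y, 2))
        # {frozenset({0}): 1, frozenset({1}): 1, frozenset({0, 1}): -1}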

    Optimal Sparsification for Some Binary CSPs Using Low-degree Polynomials

    This paper analyzes to what extent it is possible to efficiently reduce the number of clauses in NP-hard satisfiability problems, without changing the answer. Upper and lower bounds are established using the concept of kernelization. Existing results show that if NP is not contained in coNP/poly, no efficient preprocessing algorithm can reduce n-variable instances of CNF-SAT with d literals per clause to equivalent instances with O(n^{d-ε}) bits for any ε > 0. For the Not-All-Equal SAT problem, a compression to size Õ(n^{d-1}) exists. We put these results in a common framework by analyzing the compressibility of binary CSPs. We characterize constraint types based on the minimum degree of multivariate polynomials whose roots correspond to the satisfying assignments, obtaining (nearly) matching upper and lower bounds in several settings. Our lower bounds show that not just the number of constraints, but also the encoding size of individual constraints plays an important role. For example, for Exact Satisfiability with unbounded clause length it is possible to efficiently reduce the number of constraints to n+1, yet no polynomial-time algorithm can reduce to an equivalent instance with O(n^{2-ε}) bits for any ε > 0, unless NP is a subset of coNP/poly. Comment: Updated the cross-composition in Lemma 18 (minor update), since the previous version did not satisfy requirement 4 of Lemma 18 (the proof of Claim 20 was incorrect).
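
    The Exact Satisfiability claim has a short linear-algebra reading: a clause on positive literals requiring exactly one true variable is the equation x_{i_1} + ... + x_{i_k} = 1, so any maximal linearly independent subset of the clauses preserves the solution set, and there are at most n + 1 such clauses because the coefficient vectors live in an (n+1)-dimensional space. A minimal sketch of this idea (an illustration, not code from the paper):

        from fractions import Fraction

        def independent_rows(rows):
            # Gaussian elimination over the rationals; returns the indices of
            # a maximal linearly independent subset of the given rows.
            basis, kept = [], []
            for idx, row in enumerate(rows):
                v = [Fraction(x) for x in row]
                for b in basis:
                    p = next(j for j, x in enumerate(b) if x)  # pivot of b
                    if v[p]:
                        f = v[p] / b[p]
                        v = [vi - f * bi for vi, bi in zip(v, b)]
                if any(v):
                    basis.append(v)
                    kept.append(idx)
            return kept

        # XSAT clauses over x0..x3, encoded as [coefficients..., rhs = 1]:
        clauses = [
            [1, 1, 0, 0, 1],  # exactly one of x0, x1
            [0, 0, 1, 1, 1],  # exactly one of x2, x3
            [1, 1, 1, 1, 2],  # their sum: redundant, so it is dropped
        ]
        print(independent_rows(clauses))  # [0, 1]

    Note how this matches the abstract's point about encoding size: the surviving n + 1 clauses can each still contain Θ(n) variables, so the instance still needs Θ(n^2) bits, consistent with the O(n^{2-ε}) lower bound.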

    On the Hardness of Compressing Weights

    We investigate computational problems involving large weights through the lens of kernelization, which is a framework of polynomial-time preprocessing aimed at compressing the instance size. Our main focus is the weighted Clique problem, where we are given an edge-weighted graph and the goal is to detect a clique of total weight equal to a prescribed value. We show that the weighted variant, parameterized by the number of vertices n, is significantly harder than the unweighted problem by presenting an O(n^{3-ε}) lower bound on the size of the kernel, under the assumption that NP ⊈ coNP/poly. This lower bound is essentially tight: we show that we can reduce the problem to the case with weights bounded by 2^{O(n)}, which yields a randomized kernel of O(n^3) bits. We generalize these results to the weighted d-Uniform Hyperclique problem, Subset Sum, and weighted variants of Boolean Constraint Satisfaction Problems (CSPs). We also study weighted minimization problems and show that weight compression is easier when we only want to preserve the collection of optimal solutions. Namely, we show that for node-weighted Vertex Cover on bipartite graphs it is possible to maintain the set of optimal solutions using integer weights from the range [1, n], but if we want to maintain the ordering of the weights of all inclusion-minimal solutions, then weights as large as 2^{Ω(n)} are necessary. Comment: To appear at MFCS'21.
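
    One generic way to bring weights down to 2^{O(n)}, in the spirit of the randomized kernel mentioned above (a standard hashing sketch, not necessarily the paper's exact construction): replace every weight by its residue modulo a random prime with Θ(n) bits. A subset whose total weight differs from the target stays different modulo p unless p divides the difference; each nonzero difference has few prime divisors, so a union bound over all 2^n subsets makes a bad choice of p unlikely.

        import random
        from sympy import randprime

        def compress_weights(weights, target, n, slack=30):
            # Hash all weights modulo a random prime with roughly
            # 2n + slack + loglog(max weight) bits; with high probability no
            # subset's total weight collides with the target modulo p unless
            # the two were already equal.
            bits = 2 * n + slack + max(w.bit_length() for w in weights).bit_length()
            p = randprime(2 ** bits, 2 ** (bits + 1))
            return [w % p for w in weights], target % p, p

        weights = [random.getrandbits(500) for _ in range(16)]  # 500-bit weights
        small, t, p = compress_weights(weights, sum(weights[:5]), n=16)
        print(p.bit_length(), max(small).bit_length())  # both now O(n)-ish bits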

    Linear-Time FPT Algorithms via Network Flow

    In the area of parameterized complexity, to cope with NP-hard problems, we introduce a parameter k besides the input size n, and we aim to design algorithms (called FPT algorithms) that run in O(f(k)n^d) time for some function f(k) and constant d. Though FPT algorithms have been successfully designed for many problems, they are typically not fast enough because f(k) and d are large. In this paper, we give FPT algorithms with small f(k) and d for many important problems, including Odd Cycle Transversal and Almost 2-SAT. More specifically, we can choose f(k) as a single exponential (4^k) and d as one, that is, linear in the input size. To the best of our knowledge, our algorithms achieve linear time complexity for the first time for these problems. To obtain our algorithms for these problems, we consider a large class of integer programs, called BIP2. We show that, in linear time, we can reduce BIP2 to Vertex Cover Above LP preserving the parameter k, and that we can compute an optimal LP solution for Vertex Cover Above LP using network flow. We then perform an exhaustive search by fixing half-integral values in the optimal LP solution for Vertex Cover Above LP. A bottleneck here is that we need to recompute an optimal LP solution after each branching step. To address this issue, we exploit network flow to update the optimal LP solution in linear time. Comment: 20 pages.
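
    The flow connection in the last step is the classical half-integrality of the Vertex Cover LP: an optimal LP solution with values in {0, 1/2, 1} can be found by solving vertex cover on a bipartite "double cover" of the graph, which reduces to matching / network flow by König's theorem. The sketch below shows this textbook reduction (using an off-the-shelf matching routine, not the paper's linear-time update scheme):

        import networkx as nx

        def half_integral_vc_lp(edges):
            # Nemhauser-Trotter: solve vertex cover on the bipartite double
            # cover via matching; x_v = (copies of v in the cover) / 2.
            H = nx.Graph()
            nodes = {v for e in edges for v in e}
            H.add_nodes_from((v, side) for v in nodes for side in (0, 1))
            for u, v in edges:
                H.add_edge((u, 0), (v, 1))
                H.add_edge((v, 0), (u, 1))
            left = {(v, 0) for v in nodes}
            matching = nx.bipartite.maximum_matching(H, top_nodes=left)
            cover = nx.bipartite.to_vertex_cover(H, matching, top_nodes=left)
            return {v: sum((v, s) in cover for s in (0, 1)) / 2 for v in nodes}

        # On a triangle the LP optimum is 1/2 everywhere, total 3/2:
        print(half_integral_vc_lp([(0, 1), (1, 2), (0, 2)]))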

    Fast counting with tensor networks

    We introduce tensor network contraction algorithms for counting satisfying assignments of constraint satisfaction problems (#CSPs). We represent each arbitrary #CSP formula as a tensor network, whose full contraction yields the number of satisfying assignments of that formula, and use graph-theoretical methods to determine favorable orders of contraction. We employ our heuristics for the solution of #P-hard counting Boolean satisfiability (#SAT) problems, namely monotone #1-in-3SAT and #Cubic-Vertex-Cover, and find that they outperform state-of-the-art solvers by a significant margin. Comment: v2: added results for monotone #1-in-3SAT; published version.
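
    The encoding is easy to see on a toy formula (a minimal illustration with plain einsum; the paper's contribution is the graph-theoretic choice of contraction order, which is what matters on large instances): each clause becomes a 0/1 tensor with one index per variable, shared indices identify shared variables, and contracting everything sums the product over all assignments.

        import numpy as np

        # F = (x OR y) AND (NOT y OR z); T[a, b] = 1 iff the clause is
        # satisfied by that assignment of its two variables.
        T1 = np.array([[0, 1], [1, 1]])  # x OR y
        T2 = np.array([[1, 1], [0, 1]])  # NOT y OR z
        # Contracting over all indices counts satisfying assignments of F.
        count = np.einsum('xy,yz->', T1, T2)
        print(count)  # 4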

    Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity-theoretic assumption. As a notable exception, we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems. Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556

    Tight parameterized preprocessing bounds: sparsification via low-degree polynomials
