
    Hardness Amplification of Optimization Problems

    In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products. We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π into one large instance of Π such that, given an optimal feasible solution to the larger instance, we can efficiently recover optimal feasible solutions to all k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows: if there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming certain relationships between α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'. As a consequence of the above theorem, we show hardness amplification for problems in various classes, such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing Nash equilibria.
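
    As a concrete (hypothetical, not from the paper) illustration of direct product feasibility, consider Max-SAT: aggregating k instances by taking their disjoint union over fresh variable blocks is efficient, and since the blocks share no variables, an optimal assignment to the large instance restricts to an optimal assignment of each small one. A minimal Python sketch:

```python
# Hypothetical sketch of direct product feasibility for Max-SAT.
# A formula is a list of clauses; a clause is a list of nonzero ints
# (positive literal i = x_i, negative literal -i = not x_i).

def aggregate(instances):
    """Combine k Max-SAT instances over disjoint variable blocks."""
    big, offsets, off = [], [], 0
    for clauses in instances:
        offsets.append(off)
        nvars = max(abs(lit) for c in clauses for lit in c)
        big += [[lit + off if lit > 0 else lit - off for lit in c]
                for c in clauses]
        off += nvars
    return big, offsets

def split_solution(assignment, instances, offsets):
    """Restrict an assignment of the big instance to each block.
    The satisfied-clause count is a sum over blocks (no shared
    variables), so an optimal assignment restricts to optima."""
    solutions = []
    for clauses, off in zip(instances, offsets):
        nvars = max(abs(lit) for c in clauses for lit in c)
        solutions.append({i: assignment[i + off]
                          for i in range(1, nvars + 1)})
    return solutions
```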

    Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems

    A frontier open problem in circuit complexity is to prove P^{NP} is not in SIZE[n^k] for all k; this is a necessary intermediate step towards proving NP is not in P_{/poly}. Previously, for several classes containing P^{NP}, including NP^{NP}, ZPP^{NP}, and S_2 P, such lower bounds were proved via Karp-Lipton-style theorems: to prove C is not in SIZE[n^k] for all k, one shows that C subset P_{/poly} implies a "collapse" D = C for some larger class D, where D is already known not to be in SIZE[n^k] for all k. It seems obvious that one could take a different approach and prove circuit lower bounds for P^{NP} without proving any Karp-Lipton-style theorem along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^{NP} are equivalent to fixed-polynomial size circuit lower bounds for P^{NP}. That is, P^{NP} is not in SIZE[n^k] for all k if and only if (NP subset P_{/poly} implies PH subset i.o.-P^{NP}_{/n}). Next, we present new consequences of the assumption NP subset P_{/poly}, towards proving similar results for NP circuit lower bounds. We show that under this assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses: if NP is not in SIZE[n^k] for all k, then for all C in {Parity-P, PP, PSPACE, EXP}, C subset P_{/poly} implies C subset i.o.-NP_{/n^epsilon} for all epsilon > 0. Note that unconditionally, the collapses are only to MA and not to NP. We also explore consequences of circuit lower bounds for a sparse language in NP. Among other results, we show that if a polynomially-sparse NP language does not have n^{1+epsilon}-size circuits, then MA subset i.o.-NP_{/O(log n)}, MA subset i.o.-P^{NP[O(log n)]}, and NEXP is not in SIZE[2^{o(m)}]. Finally, we observe connections between these results and the "hardness magnification" phenomena described in recent works.
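
    For readability, the central equivalence of the abstract, set in LaTeX (a direct restatement, nothing added):

```latex
% Fixed-polynomial circuit lower bounds for P^NP are equivalent to a
% (weak) Karp-Lipton-style collapse, as stated in the abstract:
\[
  \forall k:\; \mathrm{P}^{\mathrm{NP}} \not\subseteq \mathrm{SIZE}[n^k]
  \iff
  \Bigl( \mathrm{NP} \subseteq \mathrm{P}/\mathrm{poly}
         \;\Longrightarrow\;
         \mathrm{PH} \subseteq \text{i.o.-}\mathrm{P}^{\mathrm{NP}}\!/n \Bigr)
\]
```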

    On the Symmetries of and Equivalence Test for Design Polynomials

    In a Nisan-Wigderson design polynomial (in short, a design polynomial), every pair of monomials shares only a few common variables. A useful example of such a polynomial, introduced in [Neeraj Kayal et al., 2014], is the following: NW_{d,k}(x) = sum_{h in F_d[z], deg(h) <= k} prod_{i=0}^{d-1} x_{i, h(i)}, where d is a prime, F_d is the finite field with d elements, and k << d. The degree of the gcd of every pair of monomials in NW_{d,k} is at most k. For concreteness, we fix k = ceil(sqrt(d)). The family of polynomials NW := {NW_{d,k} : d is a prime} and close variants of it have been used as hard explicit polynomial families in several recent arithmetic circuit lower bound proofs. But, unlike the permanent, very little is known about the various structural and algorithmic/complexity aspects of NW beyond the fact that NW is in VNP. Is NW_{d,k} characterized by its symmetries? Is it circuit-testable, i.e., given a circuit C, can we check efficiently whether C computes NW_{d,k}? What is the complexity of equivalence testing for NW, i.e., given black-box access to an f in F[x], can we check efficiently whether there exists an invertible linear transformation A such that f = NW_{d,k}(A·x)? Characterization of polynomials by their symmetries plays a central role in the geometric complexity theory program. Here, we answer the first two questions and partially answer the third. We show that NW_{d,k} is characterized by its group of symmetries over C, but not over R. We also show that NW_{d,k} is characterized by circuit identities, which implies that NW_{d,k} is circuit-testable in randomized polynomial time. As another application of this characterization, we obtain the "flip theorem" for NW. We give an efficient equivalence test for NW in the case where the transformation A is a block-diagonal permutation-scaling matrix. The design of this algorithm is facilitated by an almost complete understanding of the group of symmetries of NW_{d,k}: we show that if A is in the group of symmetries of NW_{d,k}, then A = D·P, where D and P are diagonal and permutation matrices respectively. This is proved by completely characterizing the Lie algebra of NW_{d,k}, and using an interplay between the Hessian of NW_{d,k} and the evaluation dimension.
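
    The design property is easy to check computationally for small parameters. A minimal Python sketch (illustration only, with the assumed choice d = 5, k = ceil(sqrt(5)) = 3): enumerate the monomials of NW_{d,k} as variable sets and verify that every pair shares at most k variables, which holds because two distinct polynomials of degree at most k over F_d agree on at most k points.

```python
# Illustration: check the design property of NW_{d,k} for d = 5, k = 3
# (k = ceil(sqrt(d))): every pair of monomials shares at most k
# variables, since distinct degree-<=k polynomials over F_d agree on
# at most k points.
from itertools import product, combinations

d, k = 5, 3

# Each univariate h in F_d[z] with deg(h) <= k is a coefficient
# vector of length k + 1; its monomial is {x_{i, h(i)} : 0 <= i < d}.
monomials = []
for coeffs in product(range(d), repeat=k + 1):
    monomials.append(frozenset(
        (i, sum(c * i**j for j, c in enumerate(coeffs)) % d)
        for i in range(d)))

assert len(set(monomials)) == d ** (k + 1)  # all 625 h's are distinct
assert all(len(a & b) <= k for a, b in combinations(monomials, 2))
print("design property verified for", len(monomials), "monomials")
```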

    Finding Large Set Covers Faster via the Representation Method

    The worst-case fastest known algorithm for the Set Cover problem on universes with n elements is still essentially the simple O^*(2^n)-time dynamic programming algorithm, and no non-trivial consequences of an O^*(1.01^n)-time algorithm are known. Motivated by this chasm, we study the following natural question: which instances of Set Cover can we solve faster than the simple dynamic programming algorithm? Specifically, we give a Monte Carlo algorithm that determines the existence of a set cover of size sigma*n in O^*(2^{(1-Omega(sigma^4))n}) time. Our approach is also applicable to Set Cover instances with exponentially many sets: by reducing the task of finding the chromatic number chi(G) of a given n-vertex graph G to Set Cover in the natural way, we show there is an O^*(2^{(1-Omega(sigma^4))n})-time randomized algorithm that, given an integer s = sigma*n, outputs NO if chi(G) > s and YES with constant probability if chi(G) <= s - 1. On a high level, our results are inspired by the "representation method" of Howgrave-Graham and Joux [EUROCRYPT'10] and are obtained by evaluating only a randomly sampled subset of the table entries of a dynamic programming algorithm.
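
    For reference, the baseline the abstract measures against is the classical O^*(2^n) subset dynamic program. A minimal Python sketch of that baseline (not the paper's algorithm, which evaluates only a random sample of such table entries):

```python
# Minimal sketch of the classical O*(2^n) Set Cover dynamic program
# the abstract uses as its baseline (not the new algorithm itself).
def set_cover(n, sets):
    """n: universe size (elements 0..n-1); sets: list of bitmasks.
    Returns the minimum number of sets covering the full universe."""
    INF = float("inf")
    full = (1 << n) - 1
    f = [INF] * (1 << n)
    f[0] = 0
    for mask in range(1 << n):
        if f[mask] == INF:
            continue
        for s in sets:
            new = mask | s
            if f[mask] + 1 < f[new]:
                f[new] = f[mask] + 1
    return f[full]

# Example: universe {0,1,2}, sets {0,1}, {1,2}, {2} -> cover of size 2.
print(set_cover(3, [0b011, 0b110, 0b100]))
```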

    Descriptive Complexity of #AC^0 Functions

    We introduce a new framework for a descriptive complexity approach to arithmetic computations. We define a hierarchy of classes based on the idea of counting assignments to free function variables in first-order formulae. We completely determine the inclusion structure of this hierarchy and show that #P and #AC^0 appear as classes of it. In this way, we unconditionally place #AC^0 properly within a strict hierarchy of arithmetic classes inside #P. We compare our classes with a hierarchy within #P defined in a model-theoretic way by Saluja et al. We argue that our approach is better suited to the study of arithmetic circuit classes such as #AC^0, which can be descriptively characterized as a class in our framework.
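
    An illustrative (hypothetical, not taken from the paper) instance of the counting mechanism: a single free function variable in a first-order formula already expresses natural counting problems, e.g. counting proper 3-colorings of a graph (V, E):

```latex
% Count the assignments to the free function variable f : V -> {0,1,2}
% that satisfy a fixed first-order formula; the count below equals the
% number of proper 3-colorings of the graph (V, E).
\[
  \#\bigl\{\, f : V \to \{0,1,2\} \;\bigm|\;
     (V, E, f) \models \forall x\,\forall y\,
     \bigl( E(x,y) \rightarrow f(x) \neq f(y) \bigr) \,\bigr\}
\]
```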

    Counting Solutions to Polynomial Systems via Reductions

    This paper provides both positive and negative results for counting solutions to systems of polynomial equations over a finite field. The general idea is to try to reduce the problem to counting solutions to a single polynomial, where the task is easier. In both cases, simple methods are utilized that we expect will have wider applicability (far beyond algebra). First, we give an efficient deterministic reduction from approximate counting for a system of (arbitrary) polynomial equations to approximate counting for one equation, over any finite field. We apply this reduction to give a deterministic poly(n,s,log p)/eps^2 time algorithm for approximately counting the fraction of solutions to a system of s quadratic n-variate polynomials over F_p (the finite field of prime order p) to within additive error eps, for any prime p. Note that uniform random sampling would already require Omega(s/eps^2) time, so our algorithm behaves as a full derandomization of uniform sampling. The approximate-counting algorithm yields efficient approximate counting for other well-known problems, such as 2-SAT, NAE-3SAT, and 3-Coloring. As a corollary, there is a deterministic algorithm (with analogous running time) for producing solutions to such systems when they have at least eps*p^n solutions. Second, we consider the difficulty of exactly counting solutions to a single polynomial of constant degree over a finite field. (Note that finding a solution in this case is easy.) It has been known for over 20 years that this counting problem is already NP-hard for degree-three polynomials over F_2; however, all known reductions increased the number of variables by a considerable amount. We give a subexponential-time reduction from counting solutions to k-CNF formulas to counting solutions to a degree-k^{O(k)} polynomial (over any finite field of O(1) order) which exactly preserves the number of variables. As a corollary, the Strong Exponential Time Hypothesis (even its weak counting variant #SETH) implies that counting solutions to constant-degree polynomials (even over F_2) requires essentially 2^n time. Similar results hold for counting orthogonal pairs of vectors over F_p.
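
    As a point of reference for the derandomization claim, here is a minimal Python sketch of the naive Monte Carlo baseline the abstract mentions (uniform sampling with O(log(1/delta)/eps^2) samples, each costing s polynomial evaluations), not the paper's deterministic algorithm:

```python
# Naive Monte Carlo baseline (not the paper's deterministic algorithm):
# estimate the fraction of x in F_p^n satisfying every polynomial in
# the system, to within additive error eps, by uniform sampling.
import math
import random

def estimate_fraction(polys, n, p, eps, delta=0.01):
    """polys: callables mapping a point in F_p^n to an element of F_p.
    By a Hoeffding bound, ln(2/delta)/(2*eps^2) samples give additive
    error at most eps with probability at least 1 - delta."""
    samples = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(
        all(f(x) % p == 0 for f in polys)
        for x in ([random.randrange(p) for _ in range(n)]
                  for _ in range(samples)))
    return hits / samples

# Example: the single quadratic x0^2 + x1 over F_5; for every x0 there
# is exactly one valid x1, so the true fraction of solutions is 1/5.
print(estimate_fraction([lambda x: x[0] ** 2 + x[1]], n=2, p=5, eps=0.05))
```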

    Grid Recognition: Classical and Parameterized Computational Perspectives

    Grid graphs, and, more generally, k×r grid graphs, form one of the most basic classes of geometric graphs. Over the past few decades, a large body of work has studied the (in)tractability of various computational problems on grid graphs, which often admit substantially faster algorithms than general graphs. Unfortunately, the recognition of a grid graph is particularly hard: it was shown to be NP-hard even on trees of pathwidth 3 already in 1987. Yet, in this paper, we provide several positive results in this regard in the framework of parameterized complexity (additionally, we present new and complementary hardness results). Specifically, our contribution is threefold. First, we show that the problem is fixed-parameter tractable (FPT) parameterized by k + mcc, where mcc is the maximum size of a connected component of G. This also implies that the problem is FPT parameterized by td + k, where td is the treedepth of G (to be compared with the hardness for pathwidth 2 where k = 3). Further, we derive as a corollary that strip packing is FPT with respect to the height of the strip plus the maximum of the dimensions of the packed rectangles, which was previously only known to be in XP. Second, we present a new parameterization, denoted a_G, relating graph distance to geometric distance, which may be of independent interest. We show that the problem is para-NP-hard parameterized by a_G, but FPT parameterized by a_G on trees, as well as FPT parameterized by k + a_G. Third, we show that the recognition of k×r grid graphs is NP-hard on graphs of pathwidth 2 where k = 3. Moreover, when k and r are unrestricted, we show that the problem is NP-hard on trees of pathwidth 2, but trivially solvable in polynomial time on graphs of pathwidth 1.