
    The PCP Theorem for NP Over the Reals


    The Complexity of Rationalizing Network Formation

    We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex. We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [14]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem. Finally and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is the best possible assuming P ≠ NP. To do this we prove a tight (1/2 + δ)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative. This in turn follows from a tight hardness result for MAX-LIN_{R_+} (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [10], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN_{R_+} may be of independent interest, given the generality of these problems.
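
    As a rough, hedged sketch of the target of that reduction (the exact normalization and the form of the I-SAT constraints in the paper may differ), MAX-LIN over the non-negative reals asks for an assignment maximizing the fraction of exactly satisfied equations:
    \[
    \max_{x \in \mathbb{R}^n} \ \frac{1}{m}\,\Bigl|\bigl\{\, i \in [m] : \textstyle\sum_{j=1}^{n} a_{ij} x_j = b_i \,\bigr\}\Bigr| , \qquad a_{ij} \ge 0 .
    \]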

    NP-Hardness of Approximately Solving Linear Equations Over Reals

    In this paper, we consider the problem of approximately solving a system of homogeneous linear equations over the reals, where each equation contains at most three variables. Since the all-zero assignment always satisfies all the equations exactly, we restrict the assignments to be "non-trivial". Here is an informal statement of our result: it is NP-hard to distinguish whether there is a non-trivial assignment that satisfies a $1-\delta$ fraction of the equations, or every non-trivial assignment fails to satisfy a constant fraction of the equations with a "margin" of $\Omega(\sqrt{\delta})$. We develop linearity and dictatorship testing procedures for functions $f : \mathbb{R}^n \to \mathbb{R}$ over a Gaussian space, which could be of independent interest. We believe that studying the complexity of linear equations over reals, apart from being a natural pursuit, can lead to progress on the Unique Games Conjecture.
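
    A slightly more formal reading of the informal statement above (glossing over the exact way the paper normalizes "non-trivial" assignments): given homogeneous equations $a_i \cdot x = 0$, $i \in [m]$, each with at most three non-zero coefficients, it is NP-hard to distinguish
    \[
    \exists\, x \neq 0 \ \text{satisfying at least } (1-\delta)m \text{ equations exactly}
    \qquad\text{from}\qquad
    \forall\, x \neq 0:\ |a_i \cdot x| \ge \Omega(\sqrt{\delta}) \ \text{for a constant fraction of the } i ,
    \]
    where $x$ is scaled to some fixed normalization.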

    Satisfiability of cross product terms is complete for real nondeterministic polytime Blum-Shub-Smale machines

    Nondeterministic polynomial-time Blum-Shub-Smale machines over the reals give rise to a discrete complexity class between NP and PSPACE. Several problems, mostly from real algebraic geometry / polynomial systems, have been shown complete (under many-one reduction by polynomial-time Turing machines) for this class. We exhibit a new one based on questions about expressions built from cross products only.
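
    For reference, the expressions in question are built by nesting the cross product on $\mathbb{R}^3$,
    \[
    u \times v = (u_2 v_3 - u_3 v_2,\; u_3 v_1 - u_1 v_3,\; u_1 v_2 - u_2 v_1), \qquad u, v \in \mathbb{R}^3 ,
    \]
    over vector-valued variables; satisfiability then asks, roughly, whether the variables can be assigned real 3-vectors so that a prescribed equation between such terms holds (the precise class of terms and constraints is the paper's, not reproduced here).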

    The Power of Unentanglement

    The class QMA(k), introduced by Kobayashi et al., consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k) = QMA(2) for k ≥ 2? Can QMA(k) protocols be amplified to exponentially small error? In this paper, we make progress on all of the above questions. * We give a protocol by which a verifier can be convinced that a 3SAT formula of size m is satisfiable, with constant soundness, given Õ(√m) unentangled quantum witnesses with O(log m) qubits each. Our protocol relies on the existence of very short PCPs. * We show that, assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k) = QMA(2) for all k ≥ 2. * We prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
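
    As background (using the usual completeness/soundness constants 2/3 and 1/3, which the paper may choose differently), a language L is in QMA(k) if there is a polynomial-time quantum verifier V such that
    \[
    x \in L \;\Rightarrow\; \exists\, |\psi_1\rangle, \dots, |\psi_k\rangle :\ \Pr\bigl[V \text{ accepts } (x,\, |\psi_1\rangle \otimes \cdots \otimes |\psi_k\rangle)\bigr] \ge 2/3 ,
    \]
    \[
    x \notin L \;\Rightarrow\; \forall\, |\psi_1\rangle, \dots, |\psi_k\rangle :\ \Pr\bigl[V \text{ accepts } (x,\, |\psi_1\rangle \otimes \cdots \otimes |\psi_k\rangle)\bigr] \le 1/3 ,
    \]
    where each witness $|\psi_i\rangle$ has polynomially many qubits and the k witnesses are guaranteed to be unentangled with one another.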

    The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics, the global nature of entanglement and its topological properties, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience.

    Low-degree tests at large distances

    We define tests of boolean functions which distinguish between linear (or quadratic) polynomials, and functions which are very far, in an appropriate sense, from these polynomials. The tests have optimal or nearly optimal trade-offs between soundness and the number of queries. In particular, we show that functions with small Gowers uniformity norms behave "randomly" with respect to hypergraph linearity tests. A central step in our analysis of quadraticity tests is the proof of an inverse theorem for the third Gowers uniformity norm of boolean functions. The last result also has a coding-theory application: it is possible to estimate efficiently the distance from the second-order Reed-Muller code on inputs lying far beyond its list-decoding radius.
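
    For orientation, the k-th Gowers uniformity norm of $f : \mathbb{F}_2^n \to \{-1, 1\}$ referred to above is, in its standard form,
    \[
    \|f\|_{U^k}^{2^k} \;=\; \mathbb{E}_{x,\, y_1, \dots, y_k \in \mathbb{F}_2^n} \ \prod_{S \subseteq [k]} f\Bigl(x + \sum_{i \in S} y_i\Bigr) ,
    \]
    so the inverse theorem mentioned above concerns the case k = 3; this textbook definition is included only as background and may differ in normalization from the one used in the paper.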

    Independent Set, Induced Matching, and Pricing: Connections and Tight (Subexponential Time) Approximation Hardnesses

    We present a series of almost settled inapproximability results for three fundamental problems. The first in our series is the subexponential-time inapproximability of the maximum independent set problem, a question studied in the area of parameterized complexity. The second is the hardness of approximating the maximum induced matching problem on bounded-degree bipartite graphs. The last in our series is the tight hardness of approximating the k-hypergraph pricing problem, a fundamental problem arising from the area of algorithmic game theory. In particular, assuming the Exponential Time Hypothesis, our two main results are: - For any r larger than some constant, any r-approximation algorithm for the maximum independent set problem must run in at least 2^{n^{1-\epsilon}/r^{1+\epsilon}} time. This nearly matches the upper bound of 2^{n/r} (Cygan et al., 2008). It also improves some hardness results in the domain of parameterized complexity (e.g., Escoffier et al., 2012 and Chitnis et al., 2013). - For any k larger than some constant, there is no polynomial-time min(k^{1-\epsilon}, n^{1/2-\epsilon})-approximation algorithm for the k-hypergraph pricing problem, where n is the number of vertices in an input graph. This almost matches the upper bound of min(O(k), \tilde{O}(\sqrt{n})) (by Balcan and Blum, 2007 and an algorithm in this paper). We note the interesting fact that, in contrast to the n^{1/2-\epsilon} hardness for polynomial-time algorithms, the k-hypergraph pricing problem admits an n^{\delta}-approximation for any \delta > 0 in quasi-polynomial time. This puts this problem in a rare approximability class in which approximability thresholds can be improved significantly by allowing algorithms to run in quasi-polynomial time.

    Integer Vector Addition Systems with States

    This paper studies reachability, coverability and inclusion problems for Integer Vector Addition Systems with States (Z-VASS) and extensions and restrictions thereof. A Z-VASS comprises a finite-state controller with a finite number of counters ranging over the integers. Although it is folklore that reachability in Z-VASS is NP-complete, it turns out that despite their naturalness, from a complexity point of view this class has received little attention in the literature. We fill this gap by providing an in-depth analysis of the computational complexity of the aforementioned decision problems. Most interestingly, it turns out that while the addition of reset operations to ordinary VASS leads to undecidability and Ackermann-hardness of reachability and coverability, respectively, they can be added to Z-VASS while retaining NP-completeness of both coverability and reachability.
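
    One common way to formalize the model described above (the paper's exact notation may differ) is as a tuple $(Q, d, \Delta)$ with a finite state set $Q$ and transitions $\Delta \subseteq Q \times \mathbb{Z}^d \times Q$. A configuration is a pair $(q, v) \in Q \times \mathbb{Z}^d$, and a transition $(q, z, q') \in \Delta$ induces steps
    \[
    (q, v) \longrightarrow (q', v + z) \qquad \text{for every } v \in \mathbb{Z}^d ,
    \]
    with reachability asking whether a target configuration $(q_f, v_f)$ is reachable from an initial one $(q_0, v_0)$, and coverability relaxing the target to any $(q_f, v)$ with $v \ge v_f$ componentwise.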