
    On Approximability, Convergence, and Limits of CSP Problems

    This thesis studies dense constraint satisfaction problems (CSPs) and other related optimization and decision problems that can be phrased as questions about parameters or properties of combinatorial objects such as uniform hypergraphs. We concentrate on the information that can be derived from a very small substructure selected uniformly at random. We present a unified framework for the limits of CSPs, in the sense of the convergence notion of Lovász and Szegedy, that relies only on the remarkable connection between graph sequences and exchangeable arrays established by Diaconis and Janson. In particular, we formulate and prove a representation theorem for compact colored r-uniform directed hypergraphs and apply it to rCSPs. We investigate the sample complexity of testable r-graph parameters, and we discuss a generalized version of ground state energies (GSE) and demonstrate that they are efficiently testable. The term GSE is borrowed from statistical physics and denotes a generalization of maximal multiway cut problems from complexity theory; it was studied in the dense graph setting by Borgs et al. Nondeterministic property testing, a notion related to testing CSPs defined on graphs, was introduced by Lovász and Vesztergombi; it extends the graph property testing framework of Goldreich, Goldwasser, and Ron in the dense graph model. In this thesis, we study the sample complexity of nondeterministically testable graph parameters and properties and improve existing bounds by several orders of magnitude. Further, we prove the equivalence of the notions of nondeterministic and deterministic parameter and property testing for uniform dense hypergraphs of arbitrary rank, and we provide the first effective upper bound on the sample complexity in this general case.
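    The sampling paradigm underlying these results can be made concrete. Below is a minimal sketch, assuming an adjacency-matrix representation; the function names and the choice of max-cut density (the simplest instance of a ground state energy) are illustrative, not the thesis's notation. A parameter of a large dense graph is estimated by brute force on a single small induced subgraph chosen uniformly at random.

```python
import itertools
import random

def sample_induced_subgraph(adj, q):
    """Pick q vertices uniformly at random and return the induced adjacency matrix."""
    S = random.sample(range(len(adj)), q)
    return [[adj[u][v] for v in S] for u in S]

def maxcut_density(adj):
    """Exact max-cut density of a small graph by brute force (feasible for q <= ~20)."""
    q = len(adj)
    best = 0
    for mask in range(1 << q):
        cut = sum(1 for u, v in itertools.combinations(range(q), 2)
                  if adj[u][v] and ((mask >> u) & 1) != ((mask >> v) & 1))
        best = max(best, cut)
    return best / (q * q)  # normalized by q^2, as in the dense model

def estimate_maxcut_density(adj, q=12):
    """Estimate the max-cut density of a dense graph from one random q-sample."""
    return maxcut_density(sample_induced_subgraph(adj, q))
```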

    On the Complexity of Nondeterministically Testable Hypergraph Parameters

    The paper proves the equivalence of the notions of nondeterministic and deterministic parameter testing for uniform dense hypergraphs of arbitrary order. It generalizes the result previously known only for the case of simple graphs. By a similar method, we also establish the equivalence between nondeterministic and deterministic hypergraph property testing, answering an open problem in the area. We introduce a new notion of a cut norm for hypergraphs of higher order, and employ regularity techniques combined with the ultralimit method.
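    The abstract does not define the higher-order cut norm, but for orientation, here is the classical rank-2 cut norm of a kernel that it generalizes; this background formula is standard and is not quoted from the paper.

```latex
% Cut norm of a kernel W : [0,1]^2 -> [-1,1]; the paper's notion
% generalizes this to kernels of higher-order (r-uniform) hypergraphs.
\[
  \|W\|_{\square} \;=\; \sup_{S,\,T \subseteq [0,1]}
    \left|\, \int_{S \times T} W(x,y)\, dx \, dy \,\right|
\]
```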

    FPT is Characterized by Useful Obstruction Sets

    Many graph problems were first shown to be fixed-parameter tractable using the results of Robertson and Seymour on graph minors. We show that the combination of finite, computable obstruction sets and efficient order tests is not just one way of obtaining strongly uniform FPT algorithms, but that all of FPT may be captured in this way. Our new characterization of FPT has a strong connection to the theory of kernelization, as we prove that problems with polynomial kernels can be characterized by obstruction sets whose elements have polynomial size. Consequently, we investigate the interplay between the sizes of problem kernels and the sizes of the elements of such obstruction sets, obtaining several examples of how results in one area yield new insights in the other. We show how exponential-size minor-minimal obstructions for pathwidth k form the crucial ingredient in a novel OR-cross-composition for k-Pathwidth, complementing the trivial AND-composition that is known for this problem. In the other direction, we show that OR-cross-compositions into a parameterized problem can be used to rule out the existence of efficiently generated quasi-orders on its instances that characterize the NO-instances by polynomial-size obstructions.
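    As a minimal sketch of the obstruction-set paradigm the paper builds on: membership is decided by running an efficient order test against each element of a finite, precomputed obstruction set. The names `obstructions` and `order_test` are placeholders, not from the paper.

```python
def in_language(instance, obstructions, order_test):
    """
    Obstruction-set membership paradigm (Robertson-Seymour style):
    `instance` is a YES-instance iff no forbidden structure lies below it
    in the quasi-order. `obstructions` is a finite set precomputed for the
    given parameter value, and `order_test(obs, instance)` decides
    obs <= instance (e.g., a minor test running in FPT time).
    """
    return not any(order_test(obs, instance) for obs in obstructions)
```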

    Towards a complexity theory for the congested clique

    The congested clique model of distributed computing has been receiving attention as a model for densely connected distributed systems. While there has been significant progress on the side of upper bounds, we have very little in terms of lower bounds for the congested clique; indeed, it is now known that proving explicit congested clique lower bounds is as difficult as proving circuit lower bounds. In this work, we use various more traditional complexity-theoretic tools to build a clearer picture of the complexity landscape of the congested clique:

    -- Nondeterminism and beyond: We introduce the nondeterministic congested clique model (analogous to NP) and show that there is a natural canonical problem family that captures all problems solvable in constant time with nondeterministic algorithms. We further generalise these notions by introducing the constant-round decision hierarchy (analogous to the polynomial hierarchy).

    -- Non-constructive lower bounds: We lift the prior non-uniform counting arguments to a general technique for proving non-constructive uniform lower bounds for the congested clique. In particular, we prove a time hierarchy theorem for the congested clique, showing that there are decision problems of essentially all complexities, both in the deterministic and nondeterministic settings.

    -- Fine-grained complexity: We map out relationships between various natural problems in the congested clique model, arguing that a reduction-based complexity theory currently gives us a fairly good picture of the complexity landscape of the congested clique.
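    For readers unfamiliar with the model, a minimal round sketch follows; the bandwidth check and the callback names are illustrative assumptions, not definitions from the paper. In each synchronous round, every node may send a separate short message to every other node.

```python
def congested_clique_round(states, message_fn, update_fn, bandwidth_bits):
    """
    One synchronous round of the congested clique (illustrative sketch):
    node i sends a message of at most `bandwidth_bits` (typically O(log n))
    bits to every other node j, then each node updates its local state.
    `message_fn(state, i, j)` and `update_fn(state, inbox)` are placeholders
    supplied by the algorithm being simulated.
    """
    n = len(states)
    messages = {}
    for i in range(n):
        for j in range(n):
            if i != j:
                m = message_fn(states[i], i, j)
                assert m.bit_length() <= bandwidth_bits  # enforce the bandwidth bound
                messages[(i, j)] = m
    return [update_fn(states[j],
                      {i: messages[(i, j)] for i in range(n) if i != j})
            for j in range(n)]
```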

    Strong ETH Breaks With Merlin and Arthur: Short Non-Interactive Proofs of Batch Evaluation

    We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit $C(x_1,\ldots,x_n)$ of size $s$ and degree $d$ over a field $\mathbb{F}$, and any inputs $a_1,\ldots,a_K \in \mathbb{F}^n$,

    • the Prover sends the Verifier the values $C(a_1), \ldots, C(a_K) \in \mathbb{F}$ and a proof of $\tilde{O}(K \cdot d)$ length, and
    • the Verifier tosses $\mathrm{poly}(\log(dK|\mathbb{F}|/\varepsilon))$ coins and can check the proof in about $\tilde{O}(K \cdot (n + d) + s)$ time, with probability of error less than $\varepsilon$.

    For small degree $d$, this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time and has many applications. For example, we obtain MA-proof systems that run in $c^n$ time (for various $c < 2$) for the Permanent, #Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of 0-1 linear programs. In general, the value of any polynomial in Valiant's class $\mathsf{VP}$ can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and an Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others. We also give a three-round (AMA) proof system for quantified Boolean formulas running in $2^{2n/3+o(n)}$ time, nearly-linear-time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and an MA-proof system running in $n^{k/2+O(1)}$ time for counting $k$-cliques in graphs. We point to some potential future directions for refuting the Nondeterministic Strong ETH.
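    The batching idea at the heart of such proof systems can be illustrated concretely. Below is a minimal sketch, assuming a toy prime field and a small polynomial standing in for a general arithmetic circuit; the function names and parameters are illustrative, not the paper's. The prover interpolates a low-degree curve through the $K$ input points and sends the univariate polynomial obtained by composing the circuit with the curve; the verifier checks the claimed values on it, plus one random spot-check against the circuit itself, which is sound by polynomial identity testing.

```python
import random

P = 2**31 - 1  # toy prime field; illustrative, not a parameter from the paper

def interpolate(xs, ys):
    """Lagrange interpolation mod P; returns low-to-high coefficients."""
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num, denom = [1], 1               # numerator poly: prod_{j != i} (X - xs[j])
        for j in range(n):
            if j != i:
                num = [(b - xs[j] * a) % P for a, b in zip(num + [0], [0] + num)]
                denom = denom * (xs[i] - xs[j]) % P
        scale = ys[i] * pow(denom, P - 2, P) % P
        for k in range(n):
            coeffs[k] = (coeffs[k] + scale * num[k]) % P
    return coeffs

def poly_eval(coeffs, t):
    acc = 0
    for c in reversed(coeffs):            # Horner's rule
        acc = (acc * t + c) % P
    return acc

def curve_through(points):
    """Coordinate-wise polynomials of a curve alpha with alpha(i) = points[i-1]."""
    ts = list(range(1, len(points) + 1))
    return [interpolate(ts, [pt[c] for pt in points]) for c in range(len(points[0]))]

def prove(C, deg_C, points):
    """Prover: send q(t) = C(alpha(t)), univariate of degree <= deg_C * (K-1)."""
    alpha = curve_through(points)
    D = deg_C * (len(points) - 1)
    sample = list(range(D + 1))
    values = [C([poly_eval(a, t) for a in alpha]) for t in sample]
    return interpolate(sample, values)

def verify(C, deg_C, points, claimed, q):
    """Verifier: check the claimed values on q, then one random spot-check of C."""
    if len(q) > deg_C * (len(points) - 1) + 1:
        return False
    if any(poly_eval(q, i) != v % P for i, v in enumerate(claimed, start=1)):
        return False
    alpha = curve_through(points)
    r = random.randrange(P)               # two distinct low-degree polys rarely agree
    return poly_eval(q, r) == C([poly_eval(a, r) for a in alpha])

# Usage: batch-verify C on three points with a single evaluation of C.
C = lambda x: (x[0] * x[1] + x[2]) % P    # degree-2 stand-in for a circuit
points = [[3, 5, 7], [2, 4, 6], [10, 20, 30]]
claimed = [C(a) for a in points]
assert verify(C, 2, points, claimed, prove(C, 2, points))
```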

    Two-Way Automata Making Choices Only at the Endmarkers

    The question of the state-size cost of simulating two-way nondeterministic automata (2NFAs) by two-way deterministic automata (2DFAs) was raised in 1978 and, despite many attempts, it is still open. Subsequently, the problem was attacked by restricting the power of 2DFAs (e.g., using a restricted input head movement) to the degree for which it was already possible to derive some exponential gaps between the weaker model and the standard 2NFAs. Here we use the opposite approach, increasing the power of 2DFAs to the degree for which it is still possible to obtain a subexponential conversion from the stronger model to the standard 2DFAs. In particular, it turns out that subexponential conversion is possible for two-way automata that make nondeterministic choices only when the input head scans one of the input tape endmarkers; there is no restriction on the input head movement. This implies that an exponential gap between 2NFAs and 2DFAs can be obtained only for unrestricted 2NFAs using capabilities beyond the proposed new model. As an additional bonus, conversion into a machine for the complement of the original language is polynomial in this model; the same holds for making such machines self-verifying, halting, or unambiguous. Finally, any superpolynomial lower bound for the simulation of such machines by standard 2DFAs would imply L ≠ NL. In the same way, the alternating version of these machines is related to the classical computational complexity questions L =? NL =? P.
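    A minimal simulator sketch of the proposed model follows: transitions are deterministic on inner symbols and may branch only at the endmarkers. Acceptance is decided by reachability over configurations; the names and the acceptance convention (reaching an accepting state anywhere) are illustrative assumptions, not the paper's definitions.

```python
def accepts(word, delta_det, delta_end, start, accept):
    """
    Simulate a two-way automaton whose only nondeterministic choices occur
    while the head scans an endmarker (a sketch of the restricted model).
    delta_det(q, a)  -> (q', move)        for inner symbols a, move in {-1, +1}
    delta_end(q, e)  -> set of (q', move) for endmarkers e in {'|-', '-|'}
    Both transition functions are assumed total on reachable configurations.
    """
    tape = ['|-'] + list(word) + ['-|']
    frontier = [(start, 0)]
    seen = set(frontier)                   # visited (state, position) pairs
    while frontier:
        q, pos = frontier.pop()
        if q == accept:
            return True
        if tape[pos] in ('|-', '-|'):
            succs = delta_end(q, tape[pos])    # branching allowed only here
        else:
            succs = [delta_det(q, tape[pos])]  # deterministic elsewhere
        for q2, move in succs:
            pos2 = pos + move
            if 0 <= pos2 < len(tape) and (q2, pos2) not in seen:
                seen.add((q2, pos2))
                frontier.append((q2, pos2))
    return False
```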