104 research outputs found

    Probabilistic existence of regular combinatorial structures

    We show the existence of regular combinatorial objects which previously were not known to exist. Specifically, for a wide range of the underlying parameters, we show the existence of non-trivial orthogonal arrays, t-designs, and t-wise permutations. In all cases, the sizes of the objects are optimal up to polynomial overhead. The proof of existence is probabilistic: we show that a randomly chosen structure has the required properties with positive yet tiny probability. Our method also allows us to give rather precise estimates of the number of objects of a given size, which we apply to count the number of orthogonal arrays, t-designs, and regular hypergraphs. The main technical ingredient is a special local central limit theorem for suitable lattice random walks with finitely many steps. Comment: An extended abstract of this work [arXiv:1111.0492] appeared in STOC 2012. This version expands the literature discussion.
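    As a concrete illustration of the kind of object involved (this example and the function names are ours, not from the paper): an orthogonal array of strength t over an alphabet of size v is an N x k array in which every choice of t columns contains each of the v^t possible t-tuples equally often. The sketch below merely checks this defining property for a candidate array; the paper's contribution is showing that, for a wide parameter range, a random candidate of near-optimal size satisfies it with positive probability.

```python
from collections import Counter
from itertools import combinations, product

def is_orthogonal_array(rows, v, t):
    """Check the defining property of an orthogonal array of strength t:
    for every choice of t columns, each of the v**t possible t-tuples
    occurs in those columns equally often (namely N / v**t times)."""
    n, k = len(rows), len(rows[0])
    if n % (v ** t) != 0:
        return False                          # equal counts are impossible
    expected = n // (v ** t)                  # the multiplicity "lambda"
    for cols in combinations(range(k), t):
        counts = Counter(tuple(row[c] for c in cols) for row in rows)
        if any(counts[pattern] != expected
               for pattern in product(range(v), repeat=t)):
            return False
    return True

# Toy check: the full factorial {0,1}^3 is an orthogonal array of strength 2.
rows = [list(p) for p in product(range(2), repeat=3)]
print(is_orthogonal_array(rows, v=2, t=2))    # True
```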

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses incurred in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (a goal that had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correction algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.
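    To make local decodability concrete (an illustration of ours, not drawn from the survey; the function names are invented): the Hadamard code encodes a message x in {0,1}^n as the list of all parities <x, a> mod 2. Any single message bit x_i can be recovered from a corrupted codeword with just two queries, at a random position a and at a XOR e_i, because both queries are unlikely to land on corrupted positions.

```python
import random

def hadamard_encode(x):
    """Encode a binary message x as the list of all parities <x, a> mod 2,
    one for every a in {0,1}^n (a is represented by an integer index)."""
    n = len(x)
    return [sum(x[i] & ((a >> i) & 1) for i in range(n)) % 2
            for a in range(1 << n)]

def local_decode_bit(codeword, n, i, rng=random):
    """Recover message bit i with only two queries to a possibly corrupted
    codeword: read positions a and a XOR e_i for a random a, and XOR the
    two answers.  If a delta fraction of positions is corrupted, the result
    is correct with probability at least 1 - 2*delta."""
    a = rng.randrange(1 << n)
    return (codeword[a] + codeword[a ^ (1 << i)]) % 2

# Toy run: encode a 4-bit message, corrupt a few positions, decode locally.
x = [1, 0, 1, 1]
cw = hadamard_encode(x)
for pos in (3, 7, 12):
    cw[pos] ^= 1                       # flip 3 of the 16 codeword bits
# Usually prints [1, 0, 1, 1]; each bit is correct with prob. >= 1 - 2*delta.
print([local_decode_bit(cw, len(x), i) for i in range(len(x))])
```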

    Quasirandomness in quantum information theory

    On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts

    The article presents a new interpretation of Zipf's law in natural language which relies on two areas of information theory. We reformulate the problem of grammar-based compression and investigate properties of strongly nonergodic stationary processes. The motivation for the joint discussion is to prove a proposition with a simple informal statement: if an $n$-letter long text describes $n^\beta$ independent facts in a random but consistent way, then the text contains at least $n^\beta/\log n$ different words. In the formal statement, two specific postulates are adopted. Firstly, the words are understood as the nonterminal symbols of the shortest grammar-based encoding of the text. Secondly, the texts are assumed to be emitted by a nonergodic source, with the described facts being binary IID variables that are asymptotically predictable in a shift-invariant way. The proof of the formal proposition applies several new tools: a construction of universal grammar-based codes for which the differences of code lengths can be bounded easily, ergodic decomposition theorems for mutual information between the past and future of a stationary process, and a lemma that bounds differences of a sublinear function. The linguistic relevance of the presented modeling assumptions, theorems, definitions, and examples is discussed in parallel. While searching for concrete processes to which our proposition can be applied, we introduce several instances of strongly nonergodic processes. In particular, we define the subclass of accessible description processes, which formalizes the notion of texts that describe facts in a self-contained way.
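    As a minimal illustration of grammar-based compression (our own sketch, not the specific universal grammar-based codes constructed in the article): a Re-Pair-style compressor repeatedly replaces the most frequent pair of adjacent symbols with a fresh nonterminal, and the nonterminals it introduces play the role of the "words" counted in the proposition above.

```python
from collections import Counter

def repair_grammar(text):
    """Minimal Re-Pair-style grammar compressor: repeatedly replace the most
    frequent pair of adjacent symbols with a fresh nonterminal.  Returns the
    compressed sequence and the grammar rules (nonterminal -> replaced pair);
    the nonterminals are the "vocabulary" of this grammar-based encoding."""
    seq = list(text)
    rules = {}
    while True:
        pair_counts = Counter(zip(seq, seq[1:]))
        if not pair_counts:
            break
        pair, freq = pair_counts.most_common(1)[0]
        if freq < 2:
            break                              # nothing left worth replacing
        nonterminal = f"N{len(rules)}"
        rules[nonterminal] = pair
        out, i = [], 0
        while i < len(seq):                    # replace non-overlapping occurrences
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nonterminal)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair_grammar("abracadabra abracadabra")
print(len(rules), "nonterminals:", rules)
```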

    Non-acyclicity of coset lattices and generation of finite groups
