Probabilistic existence of regular combinatorial structures
We show the existence of regular combinatorial objects which previously were
not known to exist. Specifically, for a wide range of the underlying
parameters, we show the existence of non-trivial orthogonal arrays, t-designs,
and t-wise permutations. In all cases, the sizes of the objects are optimal up
to polynomial overhead. The proof of existence is probabilistic. We show that a
randomly chosen structure has the required properties with positive yet tiny
probability. Our method also yields rather precise estimates of the number of
objects of a given size, which we apply to count orthogonal arrays, t-designs,
and regular hypergraphs. The main technical ingredient is a special local
central limit theorem for suitable lattice random walks with finitely many
steps.

Comment: An extended abstract of this work [arXiv:1111.0492] appeared in STOC
2012. This version expands the literature discussion.
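To make the first of these objects concrete: an orthogonal array of strength t over an alphabet of size q is an array in which every choice of t columns contains each of the q^t possible tuples equally often. The following Python sketch (an illustration of the definition only; the function name and toy data are my own, not from the paper) checks this property by brute force.

```python
from itertools import combinations, product
from collections import Counter

def is_orthogonal_array(rows, q, t):
    """Check the defining property of an OA of strength t over {0, ..., q-1}:
    in every set of t columns, each of the q**t tuples appears equally often."""
    n, k = len(rows), len(rows[0])
    if n % q**t != 0:          # each tuple must appear exactly n / q**t times
        return False
    lam = n // q**t
    for cols in combinations(range(k), t):
        counts = Counter(tuple(r[c] for c in cols) for r in rows)
        if any(counts[tup] != lam for tup in product(range(q), repeat=t)):
            return False
    return True

# Example: the four even-weight binary triples form an OA of strength 2.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_orthogonal_array(rows, q=2, t=2))  # True
```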
Algebraic and Combinatorial Methods in Computational Complexity
At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup, a goal that had been the focus of significant research effort up to that point. We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.
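As a hedged illustration of the algebraic flavor behind results like AKS (this is the classical binomial-coefficient characterization that motivates AKS, not the polynomial-time algorithm itself): n > 1 is prime exactly when (X + 1)^n ≡ X^n + 1 (mod n), i.e. when every interior binomial coefficient C(n, i) vanishes modulo n. The Python sketch below tests this identity coefficient by coefficient.

```python
from math import comb

def binomial_identity_prime(n):
    """n > 1 is prime iff (X + 1)**n == X**n + 1 (mod n), i.e. iff every
    interior binomial coefficient C(n, i), 0 < i < n, is divisible by n.
    Naive expansion, so this is exponential time -- illustration only."""
    return n > 1 and all(comb(n, i) % n == 0 for i in range(1, n))

print([m for m in range(2, 30) if binomial_identity_prime(m)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```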
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct "hard-core predicates" for one-way permutations.
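As a concrete example of local decodability (a standard textbook construction, sketched here by us rather than quoted from the survey), the Hadamard code admits a two-query local decoder: to recover message bit x_i from a corrupted codeword, query a uniformly random position r and the position r XOR e_i, and XOR the two answers; if a delta fraction of the codeword is corrupted, each trial is correct with probability at least 1 - 2*delta.

```python
import random

def hadamard_encode(x_bits):
    """Hadamard codeword: inner product of the message x with every r in {0,1}^k."""
    k = len(x_bits)
    return [sum(x_bits[j] * ((r >> j) & 1) for j in range(k)) % 2
            for r in range(2 ** k)]

def local_decode_bit(codeword, k, i):
    """Two-query local decoder for bit i: query random r and r ^ (1 << i),
    then XOR the two answers. Correct w.p. >= 1 - 2*delta under delta corruption."""
    r = random.randrange(2 ** k)
    return codeword[r] ^ codeword[r ^ (1 << i)]

x = [1, 0, 1]
cw = hadamard_encode(x)
cw[5] ^= 1                       # corrupt one position
votes = [local_decode_bit(cw, 3, 0) for _ in range(101)]
print(max(set(votes), key=votes.count))  # 1, matching x[0] with high probability
```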
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
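The local-testing side can be illustrated in the same hedged spirit by the Blum-Luby-Rubinfeld linearity test, the prototypical local test behind PCP constructions (again my sketch, not taken from the paper): sample random x and y and check f(x) + f(y) = f(x + y) over GF(2); linear functions always pass, while functions far from every linear function fail a random trial with probability related to their distance.

```python
import random

def blr_linearity_test(f, k, trials=200):
    """BLR test for f: {0,1}^k -> {0,1}, given as a lookup table indexed by
    integers. Accepts linear functions always; rejects functions far from
    linear with probability growing with their distance from linearity."""
    for _ in range(trials):
        x = random.randrange(2 ** k)
        y = random.randrange(2 ** k)
        if f[x] ^ f[y] != f[x ^ y]:   # over GF(2), x + y is bitwise XOR
            return False
    return True

k = 4
linear = [bin(z & 0b1011).count("1") % 2 for z in range(2 ** k)]  # parity of a bit subset
print(blr_linearity_test(linear, k))                  # True: passes every trial
print(blr_linearity_test([1 - b for b in linear], k)) # False: affine, not linear
```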
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of Texts
The article presents a new interpretation for Zipf's law in
natural language which relies on two areas of information
theory. We reformulate the problem of grammar-based compression
and investigate properties of strongly nonergodic stationary
processes. The motivation for the joint discussion is to prove a
proposition with a simple informal statement: If an $n$-letter
long text describes $n^{\beta}$ independent facts in a random but
consistent way, then the text contains at least $n^{\beta}/\log n$
different words.
In the formal statement, two specific postulates are
adopted. Firstly, the words are understood as the nonterminal
symbols of the shortest grammar-based encoding of the
text. Secondly, the texts are assumed to be emitted by a
nonergodic source, with the described facts being binary IID
variables that are asymptotically predictable in a
shift-invariant way.
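Since the first postulate identifies the "words" with nonterminals of a shortest grammar-based encoding, a toy grammar-based compressor may help fix intuitions. The sketch below implements a naive Re-Pair-style heuristic (a standard scheme used here purely for illustration; it is not the class of codes constructed in the article): the most frequent adjacent pair of symbols is repeatedly replaced by a fresh nonterminal, and the resulting nonterminals play the role of words.

```python
from collections import Counter

def repair_grammar(text):
    """Naive Re-Pair-style compression: repeatedly replace the most frequent
    adjacent symbol pair with a new nonterminal. Returns the final sequence
    and the grammar rules; the nonterminals play the role of 'words'."""
    seq = list(text)
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:                      # no pair repeats: grammar is final
            break
        nt = f"N{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):               # left-to-right replacement of the pair
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

seq, rules = repair_grammar("abababab")
print(seq)    # ['N1', 'N1'] after N0 -> ('a','b'), N1 -> ('N0','N0')
print(rules)
```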
The proof of the formal proposition applies several new tools.
These are: a construction of universal grammar-based codes for
which the differences of code lengths can be bounded easily,
ergodic decomposition theorems for mutual information between the
past and future of a stationary process, and a lemma that bounds
differences of a sublinear function.
The linguistic relevance of the presented modeling assumptions,
theorems, definitions, and examples is discussed in
parallel. While searching for concrete processes to which our
proposition can be applied, we introduce several instances of
strongly nonergodic processes. In particular, we define the
subclass of accessible description processes, which formalizes
the notion of texts that describe facts in a self-contained way
- …