Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.
Derandomization with Minimal Memory Footprint
Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1.
Our results are constructive: Given a candidate hard function (and a candidate cryptographic PRG) we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is a family of hardness amplification reductions in logspace-uniform TC^0 that were not known before.
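As a rough illustration (not this paper's construction), the generic template behind statements of this kind is to enumerate all seeds of a candidate PRG, run the randomized algorithm on each pseudorandom string, and take a majority vote; the space for the seed counter and for evaluating the PRG is the source of the constant overhead c. In the Python sketch below, randomized_alg and prg are hypothetical placeholders.

from collections import Counter

def derandomize(randomized_alg, prg, seed_length, x):
    # Run randomized_alg(x, r) on the pseudorandom coins prg(s) for every
    # seed s of length seed_length, and output the majority answer.
    # The space needed is (space of randomized_alg) + (space to evaluate prg)
    # + O(seed_length) for the seed counter; keeping this total close to the
    # original space bound S is what "minimal footprint" refers to.
    votes = Counter()
    for s in range(2 ** seed_length):
        seed_bits = [(s >> i) & 1 for i in range(seed_length)]
        votes[randomized_alg(x, prg(seed_bits))] += 1
    return votes.most_common(1)[0][0]

# Toy usage with a trivial "PRG" (repetition padding) and an algorithm that
# ignores its coins; real instantiations would use a cryptographic PRG.
print(derandomize(lambda x, r: x % 2, lambda seed: seed * 4, 3, x=10))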
List Decoding with Double Samplers
We develop the notion of "double samplers", first introduced by Dinur and
Kaufman [Proc. 58th FOCS, 2017], which are samplers with additional
combinatorial properties, and whose existence we prove using high dimensional
expanders.
We show how double samplers give a generic way of amplifying distance in a
way that enables efficient list-decoding. There are many error correcting code
constructions that achieve large distance by starting with a base code with
moderate distance, and then amplifying the distance using a sampler, e.g., the
ABNNR code construction [IEEE Trans. Inform. Theory, 38(2):509--516, 1992.]. We
show that if the sampler is part of a larger double sampler then the
construction has an efficient list-decoding algorithm and the list decoding
algorithm is oblivious to the base code (i.e., it runs the unique decoder for
the base code in a black box way).
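As a rough illustration of the sampler-based distance amplification mentioned above (the ABNNR template, not this paper's double-sampler construction), the following Python sketch shows only the encoding direction: each output symbol bundles the base-code symbols indexed by one neighborhood of a bipartite sampler graph. The function name and the toy graph are illustrative assumptions.

def amplify_distance(base_codeword, neighborhoods):
    # Each output symbol bundles the base-code symbols indexed by one
    # neighborhood of the sampler graph. If the graph samples well, two
    # base codewords that differ on a moderate fraction of positions give
    # outputs that differ on almost every bundled symbol.
    return [tuple(base_codeword[i] for i in nbhd) for nbhd in neighborhoods]

# Toy usage: a 3-regular bipartite "sampler" over 4 base positions.
neighborhoods = [(0, 1, 2), (1, 2, 3), (2, 3, 0), (3, 0, 1)]
print(amplify_distance([1, 0, 1, 1], neighborhoods))
print(amplify_distance([1, 1, 1, 1], neighborhoods))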
Our list-decoding algorithm works as follows: it uses a local voting scheme
from which it constructs a unique games constraint graph. The constraint graph
is an expander, so we can solve unique games efficiently. These solutions are
the output of the list decoder. This is a novel use of a unique games algorithm
as a subroutine in a decoding procedure, as opposed to the more common
situation in which unique games are used for demonstrating hardness results.
Double samplers and high dimensional expanders are akin to pseudorandom
objects in their utility, but they greatly exceed random objects in their
combinatorial properties. We believe that these objects hold significant
potential for coding theoretic constructions and view this work as
demonstrating the power of double samplers in this context.
Simple extractors via constructions of cryptographic pseudo-random generators
Trevisan has shown that constructions of pseudo-random generators from hard
functions (the Nisan-Wigderson approach) also produce extractors. We show that
constructions of pseudo-random generators from one-way permutations (the
Blum-Micali-Yao approach) can be used for building extractors as well. Using
this new technique we build extractors that do not use designs and
polynomial-based error-correcting codes and that are very simple and efficient.
For example, one extractor produces each output bit separately in
time. These extractors work for weak sources with min entropy , for
arbitrary constant , have seed length , and their
output length is .
Comment: 21 pages, an extended abstract will appear in Proc. ICALP 2005; small
corrections, some comments and references added.
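For illustration only, here is a toy Python sketch of the Blum-Micali template the abstract builds on: iterate a permutation and output a hard-core bit of the current state at each step. The permutation below is not one-way and the paper's extractor construction is more involved; the sketch only shows why each output bit can be recomputed separately from the seed.

def inner_product_bit(x, r):
    # Goldreich-Levin style hard-core predicate: <x, r> mod 2.
    return bin(x & r).count("1") % 2

def blum_micali_bit(perm, x, r, i):
    # The i-th output bit is a hard-core bit of the i-th iterate of the
    # permutation, so it can be computed on its own from the seed (x, r)
    # without storing any of the other output bits.
    for _ in range(i):
        x = perm(x)
    return inner_product_bit(x, r)

# Toy usage with a (non-one-way!) affine permutation on 16-bit integers.
toy_perm = lambda v: (v * 5 + 3) % (2 ** 16)
print([blum_micali_bit(toy_perm, x=12345, r=40503, i=i) for i in range(8)])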
The Quantum PCP Conjecture
The classical PCP theorem is arguably the most important achievement of
classical complexity theory in the past quarter century. In recent years,
researchers in quantum computational complexity have tried to identify
approaches and develop tools that address the question: does a quantum version
of the PCP theorem hold? The story of this study starts with classical
complexity and takes unexpected turns providing fascinating vistas on the
foundations of quantum mechanics, the global nature of entanglement and its
topological properties, quantum error correction, information theory, and much
more; it raises questions that touch upon some of the most fundamental issues
at the heart of our understanding of quantum mechanics. At this point, the jury
is still out as to whether or not such a theorem holds. This survey aims to
provide a snapshot of the status in this ongoing story, tailored to a general
theory-of-CS audience.
Comment: 45 pages, 4 figures, an enhanced version of the SIGACT guest column
from Volume 44 Issue 2, June 2013.
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Reed-Muller codes for random erasures and errors
This paper studies the parameters for which Reed-Muller (RM) codes over GF(2)
can correct random erasures and random errors with high probability,
and in particular when can they achieve capacity for these two classical
channels. Necessarily, the paper also studies properties of evaluations of
multi-variate polynomials on random sets of inputs.
For erasures, we prove that RM codes achieve capacity both for very high rate
and very low rate regimes. For errors, we prove that RM codes achieve capacity
for very low rate regimes, and for very high rates, we show that they can
uniquely decode at about square root of the number of errors at capacity.
The proofs of these four results are based on different techniques, which we
find interesting in their own right. In particular, we study the following
questions about E(m, r), the matrix whose rows are truth tables of all
monomials of degree at most r in m variables. What is the most (resp. least)
number of random columns in E(m, r) that define a submatrix having full column
rank (resp. full row rank) with high probability? We obtain tight bounds for
very small (resp. very large) degrees r, which we use to show that RM codes
achieve capacity for erasures in these regimes.
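The rank question above can be explored numerically. The Python sketch below builds the matrix whose rows are truth tables of all monomials of degree at most r in m variables over GF(2), samples random columns, and checks full column rank; the parameter values are arbitrary toy choices, not the regimes studied in the paper.

import itertools, random

def monomial_matrix(m, r):
    # Rows: truth tables of all monomials of degree <= r in m variables;
    # columns: the 2^m points of GF(2)^m.
    points = list(itertools.product([0, 1], repeat=m))
    monomials = [S for d in range(r + 1)
                 for S in itertools.combinations(range(m), d)]
    return [[int(all(p[i] for i in S)) for p in points] for S in monomials]

def gf2_rank(rows_as_ints):
    # Rank over GF(2): maintain a basis indexed by leading bit.
    basis = {}
    for v in rows_as_ints:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]
    return len(basis)

def full_column_rank_rate(m, r, num_cols, trials=50):
    # Fraction of trials in which num_cols random columns span a
    # submatrix of full column rank.
    M = monomial_matrix(m, r)
    hits = 0
    for _ in range(trials):
        cols = random.sample(range(len(M[0])), num_cols)
        sub_rows = [sum(row[c] << j for j, c in enumerate(cols)) for row in M]
        if gf2_rank(sub_rows) == num_cols:
            hits += 1
    return hits / trials

print(full_column_rank_rate(m=6, r=2, num_cols=15))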
Our decoding from random errors follows from the following novel reduction.
For every linear code C of sufficiently high rate we construct a new code C',
also of very high rate, such that for every subset S of coordinates, if C
can recover from erasures in S, then C' can recover from errors in S.
Specializing this to RM codes and using our results for erasures imply our
result on unique decoding of RM codes at high rate.
Finally, two of our capacity achieving results require tight bounds on the
weight distribution of RM codes. We obtain such bounds by extending the recent
bounds of Kaufman, Lovett and Porat [KLP] from constant-degree to linear-degree
polynomials.
Quantum learning algorithms imply circuit lower bounds
We establish the first general connection between the design of quantum
algorithms and circuit lower bounds. Specifically, let C be a
class of polynomial-size concepts, and suppose that C can be
PAC-learned with membership queries under the uniform distribution with error
1/2 - γ by a time-T quantum algorithm. We prove that if γ^2 · T ≪ 2^n / n,
then BQE ⊄ C, where BQE = BQTIME[2^{O(n)}]
is an exponential-time analogue of BQP. This result is optimal in both γ and
T, since it is not hard to learn any class of functions in (classical) time
O(2^n) (with no error), or in quantum time poly(n) with error at
most 1/2 - Ω(2^{-n/2}) via Fourier sampling. In other words, even a
marginal improvement on these generic learning algorithms would lead to major
consequences in complexity theory.
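For concreteness, the classical zero-error baseline mentioned above is just brute force: query the membership oracle on all 2^n inputs and output the truth table. A minimal Python sketch, with names that are illustrative only:

import itertools

def brute_force_learn(membership_oracle, n):
    # Exactly learn any n-bit Boolean concept with 2^n membership queries:
    # record the whole truth table and return it as the hypothesis.
    table = {x: membership_oracle(x)
             for x in itertools.product([0, 1], repeat=n)}
    return lambda x: table[tuple(x)]

# Toy usage: learn 3-bit majority and check one point.
majority = lambda x: int(sum(x) >= 2)
h = brute_force_learn(majority, 3)
print(h((1, 0, 1)) == majority((1, 0, 1)))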
Our proof builds on several works in learning theory, pseudorandomness, and
computational complexity, and crucially, on a connection between non-trivial
classical learning algorithms and circuit lower bounds established by Oliveira
and Santhanam (CCC 2017). Extending their approach to quantum learning
algorithms turns out to create significant challenges. To achieve that, we show
among other results how pseudorandom generators imply learning-to-lower-bound
connections in a generic fashion, construct the first conditional pseudorandom
generator secure against uniform quantum computations, and extend the local
list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets and Wigderson (SICOMP
2010) to quantum circuits via a delicate analysis. We believe that these
contributions are of independent interest and might find other applications.