Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations
Ideas from Fourier analysis have been used in cryptography for the last three
decades. Akavia, Goldwasser and Safra unified some of these ideas to give a
complete algorithm that finds significant Fourier coefficients of functions on
any finite abelian group. Their algorithm stimulated a lot of interest in the
cryptography community, especially in the context of `bit security'. This
manuscript attempts to be a friendly and comprehensive guide to the tools and
results in this field. The intended readership is cryptographers who have heard
about these tools and seek an understanding of their mechanics and their
usefulness and limitations. A compact overview of the algorithm is presented
with emphasis on the ideas behind it. We show how these ideas can be extended
to a `modulus-switching' variant of the algorithm. We survey some applications
of this algorithm, and explain that several results should be taken in the
right context. In particular, we point out that some of the most important bit
security problems are still open. Our original contributions include: a
discussion of the limitations on the usefulness of these tools; an answer to an
open question about the modular inversion hidden number problem.
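To make the central object concrete, here is a minimal brute-force sketch of what a `significant' Fourier coefficient is, for functions on the cyclic group Z_N (a special case of a finite abelian group). The names `f_values` and `tau` are illustrative; the Akavia-Goldwasser-Safra algorithm finds these coefficients while querying the function at far fewer than N points, which this sketch does not attempt.

```python
import numpy as np

def significant_coefficients(f_values, tau):
    """Return the characters alpha of Z_N whose Fourier coefficient
    hat_f(alpha) = (1/N) * sum_x f(x) * exp(-2*pi*i*alpha*x/N)
    has squared magnitude at least tau (the 'tau-significant' coefficients)."""
    N = len(f_values)
    coeffs = np.fft.fft(f_values) / N          # all N coefficients at once (brute force)
    return [(alpha, coeffs[alpha]) for alpha in range(N)
            if abs(coeffs[alpha]) ** 2 >= tau]

# Toy example: a single dominant character plus a little noise.
N, alpha0 = 64, 5
x = np.arange(N)
f = np.exp(2j * np.pi * alpha0 * x / N) + 0.1 * np.random.randn(N)
print(significant_coefficients(f, tau=0.5))    # expect alpha0 to be reported
```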
On the hardness of learning sparse parities
This work investigates the hardness of computing sparse solutions to systems
of linear equations over F_2. Consider the k-EvenSet problem: given a
homogeneous system of linear equations over F_2 on n variables, decide if there
exists a nonzero solution of Hamming weight at most k (i.e. a k-sparse
solution). While there is a simple O(n^{k/2})-time algorithm for it,
establishing fixed parameter intractability for k-EvenSet has been a notorious
open problem. Towards this goal, we show that unless k-Clique can be solved in
n^{o(k)} time, k-EvenSet has no poly(n)2^{o(sqrt{k})} time algorithm and no
polynomial time algorithm when k = (log n)^{2+eta} for any eta > 0.
Our work also shows that the non-homogeneous generalization of the problem --
which we call k-VectorSum -- is W[1]-hard on instances where the number of
equations is O(k log n), improving on previous reductions which produced
Omega(n) equations. We also show that for any constant eps > 0, given a system
of O(exp(O(k))log n) linear equations, it is W[1]-hard to decide if there is a
k-sparse linear form satisfying all the equations or if every function on at
most k variables (a k-junta) satisfies at most a (1/2 + eps)-fraction of the
equations. In the setting of computational learning, this shows hardness of
approximate non-proper learning of k-parities. In a similar vein, we use the
hardness of k-EvenSet to show that, for any constant d, unless k-Clique can
be solved in n^{o(k)} time, there is no poly(m, n)2^{o(sqrt{k})} time algorithm
to decide whether a given set of m points in F_2^n satisfies: (i) there exists
a non-trivial k-sparse homogeneous linear form evaluating to 0 on all the
points, or (ii) any non-trivial degree d polynomial P supported on at most k
variables evaluates to zero on approximately a Pr_{z in F_2^n}[P(z) = 0] fraction of the
points, i.e., P is fooled by the set of points.
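For concreteness, the simple O(n^{k/2})-time algorithm mentioned above can be realized by a folklore meet-in-the-middle search: hash the F_2-sums (syndromes) of all small column subsets and look for a collision. The sketch below is my own illustration under that assumption, not code from the paper, and the names are mine.

```python
from itertools import combinations

def k_even_set(A, k):
    """Meet-in-the-middle sketch for k-EvenSet: return the support of a nonzero
    solution of Hamming weight <= k to the homogeneous system A x = 0 over F_2,
    or None if no such solution exists.  A is a list of 0/1 rows with n columns."""
    n = len(A[0])
    # Encode column j as a bitmask; XORing masks adds columns over F_2.
    cols = [sum(A[i][j] << i for i in range(len(A))) for j in range(n)]
    lo, hi = k // 2, (k + 1) // 2
    seen = {}                                      # syndrome -> column subset producing it
    for size in range(hi + 1):
        for subset in combinations(range(n), size):
            syndrome = 0
            for j in subset:
                syndrome ^= cols[j]
            s = frozenset(subset)
            if syndrome in seen:                   # two subsets with equal column sums:
                return sorted(seen[syndrome] ^ s)  # their symmetric difference solves A x = 0
            if size <= lo:                         # store only small subsets, so the answer has weight <= k
                seen[syndrome] = s
    return None

# x1 + x3 = 0 and x2 + x3 = 0 have the weight-3 solution {x1, x2, x3}.
A = [[1, 0, 1], [0, 1, 1]]
print(k_even_set(A, 3))                            # -> [0, 1, 2]
```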
The Computational Complexity of Linear Optics
We give new evidence that quantum computers -- moreover, rudimentary quantum
computers built entirely out of linear-optical elements -- cannot be
efficiently simulated by classical computers. In particular, we define a model
of computation in which identical photons are generated, sent through a
linear-optical network, then nonadaptively measured to count the number of
photons in each mode. This model is not known or believed to be universal for
quantum computation, and indeed, we discuss the prospects for realizing the
model using current technology. On the other hand, we prove that the model is
able to solve sampling problems and search problems that are classically
intractable under plausible assumptions. Our first result says that, if there
exists a polynomial-time classical algorithm that samples from the same
probability distribution as a linear-optical network, then P^#P=BPP^NP, and
hence the polynomial hierarchy collapses to the third level. Unfortunately,
this result assumes an extremely accurate simulation. Our main result suggests
that even an approximate or noisy classical simulation would already imply a
collapse of the polynomial hierarchy. For this, we need two unproven
conjectures: the "Permanent-of-Gaussians Conjecture", which says that it is
#P-hard to approximate the permanent of a matrix A of independent N(0,1)
Gaussian entries, with high probability over A; and the "Permanent
Anti-Concentration Conjecture", which says that |Per(A)|>=sqrt(n!)/poly(n) with
high probability over A. We present evidence for these conjectures, both of
which seem interesting even apart from our application. This paper does not
assume knowledge of quantum optics. Indeed, part of its goal is to develop the
beautiful theory of noninteracting bosons underlying our model, and its
connection to the permanent function, in a self-contained way accessible to
theoretical computer scientists.
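As a small numerical companion to the two conjectures (my own illustration, not code from the paper), the sketch below computes the permanent of an n x n matrix of i.i.d. N(0,1) entries using Ryser's inclusion-exclusion formula and compares |Per(A)| against sqrt(n!), the scale appearing in the Permanent Anti-Concentration Conjecture. Real Gaussian entries, the sample size, and the function names are illustrative choices.

```python
import itertools, math
import numpy as np

def permanent(A):
    """Permanent via Ryser's inclusion-exclusion formula (exponential time, fine for small n)."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)      # sum of the chosen columns, per row
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Permanent Anti-Concentration Conjecture, informally: |Per(A)| >= sqrt(n!)/poly(n)
# with high probability when A has i.i.d. N(0,1) entries.  A quick empirical look:
rng = np.random.default_rng(0)
n = 8
samples = sorted(abs(permanent(rng.standard_normal((n, n)))) for _ in range(20))
print("sqrt(n!)              =", math.sqrt(math.factorial(n)))
print("min / median |Per(A)| =", samples[0], samples[len(samples) // 2])
```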
Some Applications of Coding Theory in Computational Complexity
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
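As a concrete toy instance of local decodability (a standard textbook example, not necessarily one of the constructions surveyed here), the Hadamard code encodes a message m in F_2^k as the list of inner products <m, x> over all x in F_2^k, and any single message bit can be recovered from a corrupted codeword with only two queries per trial. The function names below are illustrative.

```python
import random

def hadamard_encode(message_bits):
    """Hadamard code: the codeword lists <m, x> mod 2 for every x in {0,1}^k."""
    k = len(message_bits)
    return [sum(message_bits[i] & ((x >> i) & 1) for i in range(k)) % 2
            for x in range(1 << k)]

def locally_decode_bit(received, k, i, trials=15):
    """2-query local decoder for message bit i: for random r, f(r) XOR f(r XOR e_i)
    equals m_i whenever both queried positions are uncorrupted; take a majority vote."""
    votes = 0
    for _ in range(trials):
        r = random.randrange(1 << k)
        votes += received[r] ^ received[r ^ (1 << i)]
    return int(votes > trials // 2)

# Encode a 6-bit message, flip a few codeword positions, and recover bit 2
# through the 2-query local decoder alone.
msg = [1, 0, 1, 1, 0, 1]
word = hadamard_encode(msg)
for pos in (5, 9, 17):
    word[pos] ^= 1
print(locally_decode_bit(word, k=6, i=2), "== expected", msg[2])
```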
Reed-Muller codes for random erasures and errors
This paper studies the parameters for which Reed-Muller (RM) codes over GF(2)
can correct random erasures and random errors with high probability, and in
particular when they can achieve capacity for these two classical channels.
Necessarily, the paper also studies properties of evaluations of multivariate
polynomials over GF(2) on random sets of inputs.
For erasures, we prove that RM codes achieve capacity both for very high rate
and very low rate regimes. For errors, we prove that RM codes achieve capacity
for very low rate regimes, and for very high rates, we show that they can
uniquely decode at about the square root of the number of errors at capacity.
The proofs of these four results are based on different techniques, which we
find interesting in their own right. In particular, we study the following
questions about E(m, r), the matrix whose rows are the truth tables of all
monomials of degree at most r in m variables. What is the largest (resp. smallest)
number of random columns in E(m, r) that define a submatrix having full column
rank (resp. full row rank) with high probability? We obtain tight bounds for
very small (resp. very large) degrees r, which we use to show that RM codes
achieve capacity for erasures in these regimes.
Our decoding from random errors follows from the following novel reduction.
For every linear code C of sufficiently high rate we construct a new code C',
also of very high rate, such that for every subset S of coordinates, if C can
recover from erasures in S, then C' can recover from errors in S. Specializing
this to RM codes and using our results for erasures implies our result on
unique decoding of RM codes at high rate.
Finally, two of our capacity-achieving results require tight bounds on the
weight distribution of RM codes. We obtain such bounds by extending the recent
\cite{KLP} bounds from constant degree to linear degree polynomials.
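The matrix E(m, r) and the full-rank question above are easy to experiment with. The following sketch (my own illustration, not the paper's proof technique; function names are assumptions) builds E(m, r) as a generator matrix of RM(m, r) and estimates how often a random set of kept columns has full row rank over F_2, which is exactly the event that the code recovers from erasure of the remaining columns.

```python
from itertools import combinations
import random

def build_E(m, r):
    """E(m, r): rows are the length-2^m truth tables of all monomials of degree <= r
    in m Boolean variables, i.e. a generator matrix of the Reed-Muller code RM(m, r)."""
    monomials = [S for d in range(r + 1) for S in combinations(range(m), d)]
    return [[int(all((x >> i) & 1 for i in S)) for x in range(1 << m)] for S in monomials]

def f2_rank(rows):
    """Rank over F_2 via Gaussian elimination on rows packed as integer bitmasks."""
    masks = [sum(bit << j for j, bit in enumerate(row)) for row in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(masks)) if (masks[i] >> col) & 1), None)
        if pivot is None:
            continue
        masks[rank], masks[pivot] = masks[pivot], masks[rank]
        for i in range(len(masks)):
            if i != rank and (masks[i] >> col) & 1:
                masks[i] ^= masks[rank]
        rank += 1
    return rank

def full_row_rank_probability(m, r, kept, trials=200):
    """Estimate Pr[a uniformly random set of `kept` columns of E(m, r) has full row rank],
    i.e. the probability that RM(m, r) recovers from erasure of the other columns."""
    E = build_E(m, r)
    dim, n = len(E), len(E[0])
    hits = 0
    for _ in range(trials):
        cols = random.sample(range(n), kept)
        sub = [[row[c] for c in cols] for row in E]
        hits += f2_rank(sub) == dim
    return hits / trials

# RM(m=7, r=2): dimension 1 + 7 + 21 = 29, length 128.  Watch how the estimated
# probability of full row rank grows with the number of kept columns.
for kept in (29, 35, 45, 60):
    print(kept, full_row_rank_probability(7, 2, kept))
```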
- …