Locally Testable Codes and Cayley Graphs
We give two new characterizations of (\F_2-linear) locally testable
error-correcting codes in terms of Cayley graphs over \F_2^h:
\begin{enumerate} \item A locally testable code is equivalent to a Cayley
graph over \F_2^h whose set of generators is significantly larger than h
and has no short linear dependencies, but yields a shortest-path metric that
embeds into \ell_1 with constant distortion. This extends and gives a
converse to a result of Khot and Naor (2006), which showed that codes with
large dual distance imply Cayley graphs that have no low-distortion embeddings
into \ell_1.
\item A locally testable code is equivalent to a Cayley graph over \F_2^h
that has significantly more than h eigenvalues near 1, which have no short
linear dependencies among them and which "explain" all of the large
eigenvalues. This extends and gives a converse to a recent construction of
Barak et al. (2012), which showed that locally testable codes imply Cayley
graphs that are small-set expanders but have many large eigenvalues.
\end{enumerate}
Comment: 22 pages
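As background for the two characterizations above, one standard formalization of local testability reads as follows (a sketch of the usual textbook definition; the paper's exact parameterization may differ):

```latex
% A code C \subseteq \F_2^n is (q,\epsilon)-locally testable if there is a
% randomized tester that reads at most q bits of a word w \in \F_2^n,
% always accepts when w \in C, and rejects words far from C with
% probability proportional to their relative distance:
\Pr[\text{tester rejects } w] \;\ge\; \epsilon \cdot \frac{\mathrm{dist}(w, C)}{n},
\qquad \mathrm{dist}(w, C) = \min_{c \in C}\, |\{\, i : w_i \neq c_i \,\}|.
```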
Learning with Errors is easy with quantum samples
Learning with Errors is one of the fundamental problems in computational
learning theory and has in recent years become a cornerstone of
post-quantum cryptography. In this work, we study the quantum sample complexity
of Learning with Errors and show that there exists an efficient quantum
learning algorithm (with polynomial sample and time complexity) for the
Learning with Errors problem where the error distribution is the one used in
cryptography. While our quantum learning algorithm does not break the LWE-based
encryption schemes proposed in the cryptography literature, it does have some
interesting implications for cryptography: first, when building an LWE-based
scheme, one needs to be careful about the access to the public-key generation
algorithm that is given to the adversary; second, our algorithm suggests a
possible way of attacking LWE-based encryption: if classical samples could be
used to approximate the quantum sample state, then applying our quantum
learning algorithm would solve LWE.
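The classical sample distribution underlying the problem can be sketched as follows. This is a minimal toy illustration of LWE samples (a, b = &lt;a, s&gt; + e mod q); the parameter names, toy sizes, and the rounded-Gaussian error model are illustrative assumptions, not the paper's exact setup:

```python
import random

def lwe_sample(s, q, sigma):
    """Draw one classical LWE sample (a, b) with b = <a, s> + e mod q.
    The discrete-Gaussian error used in cryptography is approximated
    here by a rounded continuous Gaussian, purely for illustration."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, sigma))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Toy parameters: far too small for any security, chosen only to run fast.
random.seed(0)
q, n, sigma = 97, 8, 1.0
s = [random.randrange(q) for _ in range(n)]       # secret vector
samples = [lwe_sample(s, q, sigma) for _ in range(10)]
```

A quantum sample, by contrast, is a superposition over all such (a, b) pairs, which is what the learning algorithm above exploits.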
Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 207-214). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
We study the minimum distance of binary error correcting codes from the following perspectives:
* The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder.
* The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords.
* The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations.
Highlights of our results include:
* A general theorem asserting that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing.
* New upper bounds on the minimum distance of various types of Turbo-like codes.
* The first ensemble of asymptotically good Turbo-like codes: we prove that depth-three serially concatenated Turbo codes can be asymptotically good.
* The first ensemble of asymptotically good codes that are ideals in the group algebra of a group: we argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is an asymptotically good rate-half code with high probability.
* An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order, where the number of zeros of a codeword corresponds to the number of rational points.
* A sharp O(k^{-1/2}) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight.
* An assertion that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas.
* An elaborate study of the problem of derandomizability of AC^0 by any sufficiently polylog-wise independent probability measure.
* An elaborate study of the problem of approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristic.
by Louay M.J. Bazzi. Ph.D.
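The central quantity of the thesis, the minimum distance of a binary linear code, can be computed by brute force for toy codes. The sketch below uses one standard generator matrix for the [7,4,3] Hamming code (an illustrative example, not a code from the thesis):

```python
from itertools import product

# Generator matrix of the [7,4,3] Hamming code in systematic form [I | A]
# (one standard choice of A).
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def min_distance(G):
    """For a linear code, the minimum distance equals the minimum Hamming
    weight over all nonzero codewords; enumerate all 2^k messages."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

d = min_distance(G)  # expected: 3 for the Hamming code above
```

Brute-force enumeration is exponential in the dimension k, which is exactly why the thesis studies bounds on minimum distance rather than direct computation for long codes.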
The Planted k-SUM Problem: Algorithms, Lower Bounds, Hardness Amplification, and Cryptography
In the average-case k-SUM problem, given r integers chosen uniformly at random from {0, ..., M-1}, the objective is to find a set of k numbers that sum to 0 modulo M (this set is called a solution). In the related k-XOR problem, given r uniformly random Boolean vectors of length log M, the objective is to find a set of k of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis.
The feasibility and complexity of these problems depend on the relative values of k, r, and M. The dense regime of M <= r^k, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of M >> r^k, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.
We study the planted k-SUM and k-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As M increases past r^k, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.
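The planting operation described above can be sketched concretely for k-XOR. The toy construction and brute-force solver below are illustrative assumptions (the paper's instance distribution plants a solution in a random instance; here the last planted vector is simply overwritten to force the XOR to zero):

```python
import random
from itertools import combinations

def planted_kxor_instance(r, m, k, seed=0):
    """Generate r random m-bit vectors (as ints) with a planted k-subset
    whose bitwise-XOR is zero: the last planted vector is set to the XOR
    of the other k-1, so the planted set is a solution by construction."""
    rng = random.Random(seed)
    vecs = [rng.getrandbits(m) for _ in range(r)]
    planted = rng.sample(range(r), k)
    acc = 0
    for i in planted[:-1]:
        acc ^= vecs[i]
    vecs[planted[-1]] = acc
    return vecs, sorted(planted)

def brute_force_kxor(vecs, k):
    """Exhaustive search over all k-subsets; runs in time O(r^k)."""
    for subset in combinations(range(len(vecs)), k):
        acc = 0
        for i in subset:
            acc ^= vecs[i]
        if acc == 0:
            return list(subset)
    return None

# Sparse-regime toy: with m = 24 bits and only C(16, 3) = 560 triples,
# the planted solution is very likely the unique one.
vecs, planted = planted_kxor_instance(r=16, m=24, k=3)
found = brute_force_kxor(vecs, 3)
```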
Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) k-SUM when M <= r^k, we show non-trivial lower bounds on the running time of algorithms for planted k-SUM when M >> r^k. We show the same for k-XOR as well.
Search-to-Decision Reduction. Suppose that, for some M, there is an algorithm running in time T that can distinguish between a random k-SUM instance and a random instance with a planted solution, with non-trivial success probability. Then, for the same M, there is an algorithm with running time polynomial in T that solves planted k-SUM with constant probability. The same holds for k-XOR as well.
Hardness Amplification. For any M, if an algorithm running in time T solves planted k-XOR with small but non-trivial success probability, then there is an algorithm with comparable running time that solves it with probability close to 1. We show this by constructing a rapidly mixing random walk over k-XOR instances that preserves the planted solution.
Cryptography. For some M, the hardness of the planted k-XOR problem can be used to construct Public-Key Encryption (PKE), assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for sub-exponential-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of O(1/sqrt(n)), or hardness against algorithms with substantially larger running-time bounds.
Algorithms. For large enough M, there is a constant c (independent of k) and an algorithm running in time O(r^c) that, for any k, solves planted k-SUM with constant success probability. We get this by showing an average-case reduction from planted k-SUM to the Subset Sum problem. For smaller M, the best known algorithms are still the worst-case k-SUM algorithms, which run in time r^{ceil(k/2)}.
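The r^{ceil(k/2)}-time worst-case algorithm mentioned above is the classic meet-in-the-middle approach; for k = 4 it runs in roughly r^2 time. The sketch below is an illustrative toy (the planting construction and parameter choices are assumptions, not the paper's exact distribution):

```python
import random
from itertools import combinations

def planted_4sum_instance(r, M, seed=1):
    """r uniform integers mod M with a planted 4-subset summing to 0 mod M:
    the last planted value is overwritten to complete the sum."""
    rng = random.Random(seed)
    vals = [rng.randrange(M) for _ in range(r)]
    planted = rng.sample(range(r), 4)
    vals[planted[-1]] = (-sum(vals[i] for i in planted[:-1])) % M
    return vals, sorted(planted)

def meet_in_the_middle_4sum(vals, M):
    """Classic O(r^2)-time meet-in-the-middle for k = 4: index all pair
    sums, then look for a complementary pair on disjoint indices."""
    pair_sums = {}
    for i, j in combinations(range(len(vals)), 2):
        pair_sums.setdefault((vals[i] + vals[j]) % M, []).append((i, j))
    for i, j in combinations(range(len(vals)), 2):
        need = (-(vals[i] + vals[j])) % M
        for a, b in pair_sums.get(need, []):
            if len({i, j, a, b}) == 4:
                return sorted({i, j, a, b})
    return None

M = 1 << 20
vals, planted = planted_4sum_instance(r=32, M=M)
sol = meet_in_the_middle_4sum(vals, M)
```

The same splitting idea generalizes to any k by hashing all ceil(k/2)-subsets, giving the r^{ceil(k/2)} bound.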