35 research outputs found

    Locally Testable Codes and Cayley Graphs

    We give two new characterizations of (F_2-linear) locally testable error-correcting codes in terms of Cayley graphs over F_2^h:

    1. A locally testable code is equivalent to a Cayley graph over F_2^h whose set of generators is significantly larger than h and has no short linear dependencies, but yields a shortest-path metric that embeds into ℓ_1 with constant distortion. This extends and gives a converse to a result of Khot and Naor (2006), which showed that codes with large dual distance imply Cayley graphs that have no low-distortion embeddings into ℓ_1.

    2. A locally testable code is equivalent to a Cayley graph over F_2^h that has significantly more than h eigenvalues near 1, which have no short linear dependencies among them and which "explain" all of the large eigenvalues. This extends and gives a converse to a recent construction of Barak et al. (2012), which showed that locally testable codes imply Cayley graphs that are small-set expanders but have many large eigenvalues.

    Comment: 22 pages.
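
    The prototypical example of a locally testable F_2-linear code is the Hadamard code with the 3-query Blum-Luby-Rubinfeld (BLR) linearity test. The following is a minimal sketch of that classical example, not of the paper's constructions; function names are illustrative.

```python
import itertools
import random

def hadamard_encode(msg_bits):
    """Hadamard codeword over F_2^h: the evaluations of x -> <msg, x> mod 2
    at every x in F_2^h, in lexicographic order."""
    h = len(msg_bits)
    return [sum(m & b for m, b in zip(msg_bits, x)) % 2
            for x in itertools.product([0, 1], repeat=h)]

def blr_test(word, h, trials=100, rng=random):
    """BLR linearity test: pick random x, y and check
    word(x) + word(y) = word(x XOR y), querying only 3 positions per trial."""
    for _ in range(trials):
        x, y = rng.randrange(2 ** h), rng.randrange(2 ** h)
        if (word[x] + word[y]) % 2 != word[x ^ y]:
            return False  # caught a violated linearity constraint
    return True
```

    Every Hadamard codeword passes all trials, while a word far from every linear function is rejected with constant probability per trial; this is exactly the "constant queries, distance-sensitive rejection" behavior that local testability formalizes.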

    Learning with Errors is easy with quantum samples

    Learning with Errors (LWE) is one of the fundamental problems in computational learning theory and has in recent years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm suggests a possible way of attacking LWE-based encryption: use classical samples to approximate the quantum sample state, then apply our quantum learning algorithm to solve LWE.
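
    For context, a classical LWE sample is a pair (a, <a, s> + e mod q) for a fixed secret s; the algorithm above instead assumes quantum superposition access to such samples. A toy classical sampler is sketched below; the modulus, dimension, and error distribution here are illustrative, not the cryptographic choices discussed in the paper.

```python
import random

def lwe_samples(s, q, m, rng=random):
    """Generate m classical LWE samples (a, <a, s> + e mod q).
    Toy error: uniform over {-1, 0, 1}; real schemes use a discrete Gaussian."""
    n = len(s)
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]    # uniform vector in Z_q^n
        e = rng.choice([-1, 0, 1])                  # small noise term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples
```

    Recovering s from such noisy pairs is the (search) LWE problem; without the noise term e, plain Gaussian elimination would recover s immediately.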

    Minimum distance of error correcting codes versus encoding complexity, symmetry, and pseudorandomness

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaves 207-214). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

    We study the minimum distance of binary error correcting codes from the following perspectives:
    * The problem of deriving bounds on the minimum distance of a code given constraints on the computational complexity of its encoder.
    * The minimum distance of linear codes that are symmetric in the sense of being invariant under the action of a group on the bits of the codewords.
    * The derandomization capabilities of probability measures on the Hamming cube based on binary linear codes with good distance properties, and their variations.

    Highlights of our results include:
    * A general theorem asserting that if the encoder uses linear time and sub-linear memory in the general binary branching program model, then the minimum distance of the code cannot grow linearly with the block length when the rate is nonvanishing.
    * New upper bounds on the minimum distance of various types of Turbo-like codes.
    * The first ensemble of asymptotically good Turbo-like codes: we prove that depth-three serially concatenated Turbo codes can be asymptotically good.
    * The first ensemble of asymptotically good codes that are ideals in the group algebra of a group: we argue that, for infinitely many block lengths, a random ideal in the group algebra of the dihedral group is, with high probability, an asymptotically good rate-half code.
    * An explicit rate-half code whose codewords are in one-to-one correspondence with special hyperelliptic curves over a finite field of prime order, where the number of zeros of a codeword corresponds to the number of rational points.
    * A sharp O(k^{-1/2}) upper bound on the probability that a random binary string generated according to a k-wise independent probability measure has any given weight.
    * An assertion that any sufficiently log-wise independent probability measure looks random to all polynomially small read-once DNF formulas.
    * An elaborate study of the derandomizability of AC⁰ by sufficiently polylog-wise independent probability measures.
    * An elaborate study of the approximability of high-degree parity functions on binary linear codes by low-degree polynomials with coefficients in fields of odd characteristic.

    by Louay M.J. Bazzi. Ph.D.
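
    The central quantity throughout the thesis is the minimum distance of a binary linear code: the minimum Hamming weight of a nonzero codeword. For small codes it can be computed by brute force over all messages, as in this sketch (names are illustrative):

```python
import itertools

def min_distance(generator_rows):
    """Minimum distance of a binary linear code given its generator matrix:
    the minimum Hamming weight over all 2^k - 1 nonzero codewords."""
    k = len(generator_rows)
    n = len(generator_rows[0])
    best = n
    for msg in itertools.product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero message; its codeword has weight 0 trivially
        # codeword = msg * G over F_2, computed column by column
        cw = [sum(m * g for m, g in zip(msg, col)) % 2
              for col in zip(*generator_rows)]
        best = min(best, sum(cw))
    return best
```

    The exponential cost in k is exactly why the thesis studies indirect routes: bounds via encoder complexity, symmetry, and probabilistic arguments rather than exhaustive search.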

    35th Symposium on Theoretical Aspects of Computer Science: STACS 2018, February 28-March 3, 2018, Caen, France


    The Planted k-SUM Problem: Algorithms, Lower Bounds, Hardness Amplification, and Cryptography

    In the average-case k-SUM problem, given r integers chosen uniformly at random from {0, …, M−1}, the objective is to find a set of k numbers that sum to 0 modulo M (such a set is called a solution). In the related k-XOR problem, given r uniformly random Boolean vectors of length log M, the objective is to find a set of k of them whose bitwise XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis.

    The feasibility and complexity of these problems depend on the relative values of k, r, and M. The dense regime of M ≤ r^k, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of M ≫ r^k, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings.

    We study the planted k-SUM and k-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As M increases past r^k, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems.

    * Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) k-SUM when M = r^k, we show non-trivial lower bounds on the running time of algorithms for planted k-SUM when r^k ≤ M ≤ r^{2k}. We show the same for k-XOR as well.
    * Search-to-Decision Reduction. For any M > r^k, suppose there is an algorithm running in time T that can distinguish between a random k-SUM instance and a random instance with a planted solution, with success probability 1 − o(1). Then, for the same M, there is an algorithm running in time Õ(T) that solves planted k-SUM with constant probability. The same holds for k-XOR as well.
    * Hardness Amplification. For any M ≥ r^k, if an algorithm running in time T solves planted k-XOR with success probability Ω(1/polylog(r)), then there is an algorithm running in time Õ(T) that solves it with probability 1 − o(1). We show this by constructing a rapidly mixing random walk over k-XOR instances that preserves the planted solution.
    * Cryptography. For some M ≤ 2^{polylog(r)}, the hardness of the k-XOR problem can be used to construct public-key encryption (PKE), assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for 2^{n^{0.01}}-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of O(1/√n) or hardness for 2^{n^{0.5}}-time algorithms.
    * Algorithms. For any M ≥ 2^{r^2}, there is a constant c (independent of k) and an algorithm running in time r^c that, for any k, solves planted k-SUM with success probability Ω(1/8^k). We get this by showing an average-case reduction from planted k-SUM to the Subset Sum problem. For r^k ≤ M ≪ 2^{r^2}, the best known algorithms are still the worst-case k-SUM algorithms, which run in time r^{⌈k/2⌉−o(1)}.
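
    As a concrete illustration of the planted setting, the sketch below plants a k-subset with zero XOR among r random (log M)-bit vectors and recovers a zero-XOR subset by trivial exhaustive search in O(r^k) time. The function names are hypothetical, not from the paper, and exhaustive search is only the baseline the paper's results compare against.

```python
import itertools
import random

def planted_kxor_instance(k, r, n_bits, rng=random):
    """Sample r random n_bits-bit vectors (as ints), then overwrite one of a
    chosen k-subset so that the subset's bitwise XOR is zero (the plant)."""
    vecs = [rng.randrange(2 ** n_bits) for _ in range(r)]
    planted = rng.sample(range(r), k)
    acc = 0
    for i in planted[:-1]:
        acc ^= vecs[i]
    vecs[planted[-1]] = acc  # forces XOR over the k planted indices to 0
    return vecs, sorted(planted)

def brute_force_kxor(vecs, k):
    """Find any k-subset with XOR zero by exhaustive search over all subsets."""
    for subset in itertools.combinations(range(len(vecs)), k):
        acc = 0
        for i in subset:
            acc ^= vecs[i]
        if acc == 0:
            return list(subset)
    return None
```

    With n_bits much larger than k·log r (the sparse regime M ≫ r^k), the planted subset is, with high probability, the only solution the search can find.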