105 research outputs found

    Three Query Locally Decodable Codes with Higher Correctness Require Exponential Length

    Locally decodable codes are error correcting codes with the extra property that, in order to retrieve the correct value of just one position of the input with high probability, it is sufficient to read a small number of positions of the corresponding, possibly corrupted codeword. A breakthrough result by Yekhanin showed that 3-query linear locally decodable codes may have subexponential length. The construction of Yekhanin, and the three query constructions that followed, achieve correctness only up to a certain limit, which is $1 - 3\delta$ for nonbinary codes, where an adversary is allowed to corrupt up to a $\delta$ fraction of the codeword. The largest correctness for a subexponential length 3-query binary code is achieved in a construction by Woodruff, and it is below $1 - 3\delta$. We show that achieving slightly larger correctness (as a function of $\delta$) requires exponential codeword length for 3-query codes. Previously, no lower bounds larger than quadratic were known for locally decodable codes with more than 2 queries, even in the case of 3-query linear codes. Our results hold for linear codes over arbitrary finite fields and for binary nonlinear codes. Considering a larger number of queries, we obtain lower bounds for $q$-query codes for $q > 3$, under certain assumptions on the decoding algorithm that have been commonly used in previous constructions. We also prove bounds on the largest correctness achievable by these decoding algorithms, regardless of the length of the code. Our results explain the limitations on correctness in previous constructions using such decoding algorithms. In addition, our results imply tradeoffs on the parameters of error correcting data structures.
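
    For readers new to the area, the classical 2-query Hadamard code illustrates the local decoding guarantee discussed above (standard background, not the 3-query constructions or the lower-bound techniques of this paper). Each of the two queries is uniformly distributed, so a $\delta$ fraction of corruptions is hit with probability at most $2\delta$, giving correctness $1 - 2\delta$; a minimal Python sketch:

        # Toy 2-query locally decodable code (Hadamard): position a of the
        # codeword stores the parity <x, a> mod 2. To decode x_i, query a
        # uniformly random point a and its shift a xor e_i, then XOR the answers.
        import random

        def hadamard_encode(x):
            k = len(x)
            return [sum(xi & ((a >> j) & 1) for j, xi in enumerate(x)) % 2
                    for a in range(2 ** k)]

        def decode_bit(word, k, i):
            a = random.randrange(2 ** k)   # uniformly random query point
            b = a ^ (1 << i)               # shifted by the i-th unit vector
            return word[a] ^ word[b]       # <x,a> XOR <x, a+e_i> = x_i

        x = [1, 0, 1, 1]
        codeword = hadamard_encode(x)
        codeword[3] ^= 1                   # corrupt a single position
        print(decode_bit(codeword, len(x), 2))  # x_2 = 1 with high probability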

    Some Applications of Coding Theory in Computational Complexity

    Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography. Locally decodable codes are error-correcting codes with sub-linear time error-correcting algorithms. They are related to private information retrieval (a type of cryptographic protocol), and they are used in average-case complexity and to construct "hard-core predicates" for one-way permutations. Locally testable codes are error-correcting codes with sub-linear time error-detection algorithms, and they are the combinatorial core of probabilistically checkable proofs.

    2-Server PIR with sub-polynomial communication

    A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the $i$th bit of an $n$-bit database replicated among two servers (which do not communicate) while not revealing any information about $i$ to either server. In this work we construct a 1-round 2-server PIR with total communication cost $n^{O(\sqrt{\log\log n/\log n})}$. This improves over the currently known 2-server protocols, which require $O(n^{1/3})$ communication, and matches the communication cost of known 3-server PIR schemes. Our improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives.
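
    As background for the model (and deliberately much weaker than this paper's result), the folklore 2-server scheme sketched below achieves information-theoretic privacy with linear communication: the user sends a random subset of indices to one server and the same subset with index $i$ flipped to the other, and XORs the two one-bit answers. This is a minimal illustration, not the Matching Vector Code protocol of the paper.

        # Folklore 2-server PIR with O(n) communication (illustration only):
        # each server sees a uniformly random subset of indices, so neither
        # learns anything about i; the two one-bit answers XOR to db[i].
        import random

        def server_answer(db, subset):
            ans = 0
            for j in subset:
                ans ^= db[j]
            return ans

        def pir_retrieve(db, i):
            n = len(db)
            s1 = {j for j in range(n) if random.random() < 0.5}  # random subset
            s2 = s1 ^ {i}                     # symmetric difference with {i}
            return server_answer(db, s1) ^ server_answer(db, s2)

        db = [random.randint(0, 1) for _ in range(16)]
        assert pir_retrieve(db, 5) == db[5]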

    Lower bounds for constant query affine-invariant LCCs and LTCs

    Affine-invariant codes are codes whose coordinates form a vector space over a finite field and which are invariant under affine transformations of the coordinate space. They form a natural, well-studied class of codes; they include popular codes such as Reed-Muller and Reed-Solomon. A particularly appealing feature of affine-invariant codes is that they seem well-suited to admit local correctors and testers. In this work, we give lower bounds on the length of locally correctable and locally testable affine-invariant codes with constant query complexity. We show that if a code $\mathcal{C} \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally correctable code (LCC), where $\mathbb{K}$ is a finite field and $\Sigma$ is a finite alphabet, then the number of codewords in $\mathcal{C}$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-1}))$. Also, we show that if $\mathcal{C} \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally testable code (LTC), then the number of codewords in $\mathcal{C}$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-2}))$. The dependence on $n$ in these bounds is tight for constant-query LCCs/LTCs, since Guo, Kopparty and Sudan (ITCS '13) construct affine-invariant codes via lifting that have the same asymptotic tradeoffs. Note that our result holds for non-linear codes, whereas previously, Ben-Sasson and Sudan (RANDOM '11) assumed linearity to derive similar results. Our analysis uses higher-order Fourier analysis. In particular, we show that the codewords corresponding to an affine-invariant LCC/LTC must be far from each other with respect to the Gowers norm of an appropriate order. This then allows us to bound the number of codewords, using known decomposition theorems which approximate any bounded function in terms of a finite number of low-degree non-classical polynomials, up to a small error in the Gowers norm.
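
    For concreteness, the toy check below (a background illustration under simplifying assumptions, with $n = 1$ so the coordinate space is just $\mathbb{F}_p$) verifies that the Reed-Solomon code, i.e. the evaluation tables of low-degree univariate polynomials, is closed under the affine permutations $x \mapsto ax + b$ of its domain, which is the invariance property the abstract refers to:

        # Toy affine-invariance check: evaluation tables of degree-<=2
        # polynomials over F_5 are mapped to evaluation tables of
        # degree-<=2 polynomials by any affine permutation of the domain.
        from itertools import product

        p, d = 5, 2
        points = range(p)

        def evaluate(coeffs):
            return tuple(sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
                         for x in points)

        code = {evaluate(c) for c in product(range(p), repeat=d + 1)}

        a, b = 3, 4                                    # affine map x -> 3x + 4
        for word in code:
            permuted = tuple(word[(a * x + b) % p] for x in points)
            assert permuted in code                    # still a codeword
        print("affine invariance holds on this toy instance")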

    Locally Decodable/Correctable Codes for Insertions and Deletions

    Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms. In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions. Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. Specifically, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length. Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.
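
    A small illustration of why the insdel setting is harder than the Hamming setting (standard intuition, not taken from the paper): a single deletion shifts every later symbol, so a received word that is very close in edit distance can disagree with the codeword in almost every aligned position.

        # One deletion vs. Hamming-style mismatches: after deleting the first
        # symbol of an alternating string, nearly every aligned position
        # differs, even though the edit distance is only 1.
        def aligned_mismatches(u, v):
            return sum(a != b for a, b in zip(u, v)) + abs(len(u) - len(v))

        w = "0101010101010101"
        received = w[1:]                         # a single deletion at the front
        print(aligned_mismatches(w, received))   # prints 16: almost every position is off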

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors

    Locally Decodable Codes (LDCs) are error-correcting codes $C:\Sigma^n\rightarrow \Sigma^m$ with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length $m$ that is super-polynomial in $n$, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson et al. showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Insdel LDCs were first studied by Ostrovsky and Paskin-Cherniavsky, and are further motivated by recent advances in DNA random access bio-technologies, in which the goal is to retrieve individual files from a DNA storage database. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries, over the binary alphabet. This answers a question explicitly raised by Gur and Lachish. Our result exhibits a "phase-transition"-type behavior on the codeword length for constant-query Hamming RLDCs. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with parameters matching those of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
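
    For orientation, one common way to state the relaxed decoding guarantee (paraphrased background; the paper's exact definition and parameters may differ) is the following, for a decoder $D$ making at most $q$ queries to the received word $w$:

        \begin{itemize}
          \item If $w = C(x)$, then for every $i$, $\Pr[D^{w}(i) = x_i] \ge 2/3$.
          \item If $w$ is $\delta$-close to $C(x)$ in relative Hamming distance, then for
                every $i$, $\Pr[D^{w}(i) \in \{x_i, \bot\}] \ge 2/3$: the decoder may output
                a rejection symbol $\bot$, but it rarely outputs a wrong value.
        \end{itemize}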

    Delegating computation reliably : paradigms and constructions

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 285-297).
    In an emerging computing paradigm, computational capabilities, from processing power to storage capacities, are offered to users over communication networks as a service. This new paradigm holds enormous promise for increasing the utility of computationally weak devices. A natural approach is for weak devices to delegate expensive tasks, such as storing a large file or running a complex computation, to more powerful entities (say servers) connected to the same network. While the delegation approach seems promising, it raises an immediate concern: when and how can a weak device verify that a computational task was completed correctly? This practically motivated question touches on foundational questions in cryptography and complexity theory. The focus of this thesis is verifying the correctness of delegated computations. We construct efficient protocols (interactive proofs) for delegating computational tasks. In particular, we present:
    * A protocol for delegating any computation, where the work needed to verify the correctness of the output is linear in the input length, polynomial in the computation's depth, and only poly-logarithmic in the computation's size. The space needed for verification is only logarithmic in the computation size. Thus, for any computation of polynomial size and poly-logarithmic depth (the rich complexity class NC), the work required to verify the correctness of the output is only quasi-linear in the input length. The work required to prove the output's correctness is only polynomial in the original computation's size. This protocol also has applications to constructing one-round arguments for delegating computation, and efficient zero-knowledge proofs.
    * A general transformation, reducing the parallel running time (or computation depth) of the verifier in protocols for delegating computation (interactive proofs) to be constant.
    Next, we explore the power of the delegation paradigm in settings where mutually distrustful parties interact. In particular, we consider the settings of checking the correctness of computer programs and of designing error-correcting codes. We show:
    * A new methodology for checking the correctness of programs (program checking), in which work is delegated from the program checker to the untrusted program being checked. Using this methodology we obtain program checkers for an entire complexity class (the class of NC¹-computations that are WNC-hard), and for a slew of specific functions such as matrix multiplication, inversion, determinant and rank, as well as graph functions such as connectivity, perfect matching and bounded-degree graph isomorphism.
    * A methodology for designing error-correcting codes with efficient decoding procedures, in which work is delegated from the decoder to the encoder. We use this methodology to obtain constant-depth (AC⁰) locally decodable and locally list-decodable codes. We also show that the parameters of these codes are optimal (up to polynomial factors) for constant-depth decoding.
    by Guy N. Rothblum. Ph.D.

    Lower Bounds on Time-Space Trade-Offs for Approximate Near Neighbors

    We show tight lower bounds for the entire trade-off between space and query time for the Approximate Near Neighbor search problem. Our lower bounds hold in a restricted model of computation, which captures all hashing-based approaches. In particular, our lower bound matches the upper bound recently shown in [Laarhoven 2015] for the random instance on a Euclidean sphere (which we show in fact extends to the entire space $\mathbb{R}^d$ using the techniques from [Andoni, Razenshteyn 2015]). We also show tight, unconditional cell-probe lower bounds for one and two probes, improving upon the best known bounds from [Panigrahy, Talwar, Wieder 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than for one probe. To show the result for two probes, we establish and exploit a connection to locally-decodable codes. Comment: 47 pages, 2 figures; v2: substantially revised introduction, lots of small corrections; subsumed by arXiv:1608.03580 [cs.DS] (along with arXiv:1511.07527 [cs.DS]).

    Optimal Hashing-based Time-Space Trade-offs for Approximate Near Neighbors

    [See the paper for the full abstract.] We show tight upper and lower bounds for time-space trade-offs for the $c$-Approximate Near Neighbor Search problem. For the $d$-dimensional Euclidean space and $n$-point datasets, we develop a data structure with space $n^{1 + \rho_u + o(1)} + O(dn)$ and query time $n^{\rho_q + o(1)} + d n^{o(1)}$ for every $\rho_u, \rho_q \geq 0$ such that $c^2 \sqrt{\rho_q} + (c^2 - 1) \sqrt{\rho_u} = \sqrt{2c^2 - 1}$. This is the first data structure that achieves sublinear query time and near-linear space for every approximation factor $c > 1$, improving upon [Kapralov, PODS 2015]. The data structure is a culmination of a long line of work on the problem for all space regimes; it builds on Spherical Locality-Sensitive Filtering [Becker, Ducas, Gama, Laarhoven, SODA 2016] and data-dependent hashing [Andoni, Indyk, Nguyen, Razenshteyn, SODA 2014] [Andoni, Razenshteyn, STOC 2015]. Our matching lower bounds are of two types: conditional and unconditional. First, we prove tightness of the whole above trade-off in a restricted model of computation, which captures all known hashing-based approaches. We then show unconditional cell-probe lower bounds for one and two probes that match the above trade-off for $\rho_q = 0$, improving upon the best known lower bounds from [Panigrahy, Talwar, Wieder, FOCS 2010]. In particular, this is the first space lower bound (for any static data structure) for two probes which is not polynomially smaller than the one-probe bound. To show the result for two probes, we establish and exploit a connection to locally-decodable codes. Comment: 62 pages, 5 figures; a merger of arXiv:1511.07527 [cs.DS] and arXiv:1605.02701 [cs.DS], which subsumes both of the preprints. New version contains more elaborated proofs and fixes some typos.
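
    To make the stated trade-off concrete, the short calculation below (our own numeric reading of the displayed equation, not code from the paper) solves for the query exponent $\rho_q$ given the space exponent $\rho_u$; the balanced point $\rho_u = \rho_q$ works out to $1/(2c^2 - 1)$.

        # Numeric reading of c^2*sqrt(rho_q) + (c^2-1)*sqrt(rho_u) = sqrt(2c^2-1):
        # given the space exponent rho_u (space n^{1+rho_u}), solve for the
        # query exponent rho_q (query time n^{rho_q}).
        from math import sqrt

        def rho_q(c, rho_u):
            rhs = sqrt(2 * c * c - 1) - (c * c - 1) * sqrt(rho_u)
            return (max(rhs, 0.0) / (c * c)) ** 2

        c = 2.0
        print(rho_q(c, 0.0))                    # near-linear space: (2c^2-1)/c^4
        print(rho_q(c, 1 / (2 * c * c - 1)))    # balanced point: 1/(2c^2-1) = 1/7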