    Relaxed Locally Correctable Codes

    Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only a few bits of a noisy codeword. These codes have found numerous applications both in theory and in practice. A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) when it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear block length, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime. We consider an analogous relaxation for local correction: a relaxed local corrector reads only a few bits from a (possibly) corrupt codeword and either recovers the desired bit of the codeword or rejects upon detecting a corruption. We give two constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate:
    1. Constant query complexity: a relaxed LCC with polynomial block length whose corrector reads only a constant number of bits of the codeword. This is a sub-exponential improvement over the best known constant-query (full-fledged) LCCs.
    2. Constant rate: a relaxed LCC with constant rate (i.e., linear block length) and quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of the recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
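
    To make the relaxed decoding semantics concrete, here is a minimal Python sketch, not taken from any of the papers listed here, built around the classical 2-query Hadamard-code decoder: each trial recovers a message bit from two queries, and the decoder rejects (returns None) rather than guess when independent trials disagree. All names are illustrative.

        import random

        def hadamard_encode(x):
            # Position a of the codeword holds the inner product <x, a> mod 2.
            return [sum(xi & ((a >> i) & 1) for i, xi in enumerate(x)) % 2
                    for a in range(2 ** len(x))]

        def relaxed_decode(word, i, trials=3):
            # Each trial queries positions a and a ^ (1 << i); their XOR equals
            # x_i on an uncorrupted codeword.  If the trials disagree, reject
            # (return None) instead of guessing: with few corruptions, all
            # trials are fooled simultaneously only with small probability.
            estimates = {word[a] ^ word[a ^ (1 << i)]
                         for a in random.choices(range(len(word)), k=trials)}
            return estimates.pop() if len(estimates) == 1 else None

        x = [1, 0, 1]
        word = hadamard_encode(x)       # 2^3 = 8 symbols
        word[5] ^= 1                    # corrupt a single position
        print(relaxed_decode(word, 0))  # typically 1 (= x_0); None means "reject"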

    Relaxed locally correctable codes with improved parameters

    Locally decodable codes (LDCs) are error-correcting codes $C: \Sigma^k \to \Sigma^n$ that admit a local decoding algorithm that recovers each individual bit of the message by querying only a few bits from a noisy codeword. An important question in this line of research is to understand the optimal trade-off between the query complexity of LDCs and their block length. Despite the importance of these objects, the best known constructions of constant-query LDCs have super-polynomial length, and there is a significant gap between the best constructions and the known lower bounds in terms of the block length. For many applications it suffices to consider the weaker notion of relaxed LDCs (RLDCs), which allows the local decoding algorithm to abort if, by querying a few bits, it detects that the input is not a codeword. This relaxation turned out to allow decoding algorithms with constant query complexity for codes with almost linear length. Specifically, [Ben-Sasson et al. (2006)] constructed a $q$-query RLDC that encodes a message of length $k$ using a codeword of block length $n = O_q(k^{1+O(1/\sqrt{q})})$ for any sufficiently large $q$, where $O_q(\cdot)$ hides a constant that depends only on $q$. In this work we improve the parameters of [Ben-Sasson et al. (2006)] by constructing a $q$-query RLDC that encodes a message of length $k$ using a codeword of block length $O_q(k^{1+O(1/q)})$ for any sufficiently large $q$. This construction matches (up to a multiplicative constant factor) the lower bounds of [Katz and Trevisan (2000), Woodruff (2007)] for constant-query LDCs, thus making progress toward understanding the gap between LDCs and RLDCs in the constant-query regime. In fact, our construction extends to the stronger notion of relaxed locally correctable codes (RLCCs), introduced in [Gur et al. (2018)], where, given a noisy codeword, the correcting algorithm either recovers each individual bit of the codeword by reading only a small part of the input, or aborts if the input is detected to be corrupt.
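
    As a rough sanity check on what improving the exponent from $1 + O(1/\sqrt{q})$ to $1 + O(1/q)$ buys, the snippet below compares the two block-length bounds while pretending the hidden constants are 1 (an illustrative simplification; the true constants depend on $q$):

        from math import sqrt

        k = 10 ** 6
        for q in (16, 64, 256):
            n_old = k ** (1 + 1 / sqrt(q))  # Ben-Sasson et al.: k^{1 + O(1/sqrt(q))}
            n_new = k ** (1 + 1 / q)        # this work:         k^{1 + O(1/q)}
            print(f"q = {q:3d}:  n_old ~ {n_old:.2e},  n_new ~ {n_new:.2e}")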

    Brief Announcement: Relaxed Locally Correctable Codes in Computationally Bounded Channels

    We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, under the assumption that collision-resistant hash functions exist, and with no public-key or private-key cryptographic setup. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate, and poly-logarithmic locality. Our constructions compare favorably with existing schemes built under much stronger cryptographic assumptions, and with their classical analogues in the computationally unbounded, Hamming channel. Our constructions crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.
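
    The construction itself is more involved, but the basic reason collision resistance enables local verification can be sketched in a few lines: committing to a string with a Merkle tree lets a verifier check any single symbol against a short digest by reading O(log n) hashes, and making a modified symbol pass the check requires finding a hash collision. A minimal sketch with sha256 standing in for the collision-resistant hash (illustrative only; this is not the paper's scheme):

        import hashlib

        def H(data):  # stand-in for a collision-resistant hash
            return hashlib.sha256(data).digest()

        def merkle_tree(symbols):
            # Bottom-up levels of the tree; leaf count padded to a power of two.
            level = [H(s) for s in symbols]
            while len(level) & (len(level) - 1):
                level.append(H(b""))
            levels = [level]
            while len(level) > 1:
                level = [H(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
                levels.append(level)
            return levels

        def open_leaf(levels, i):
            # Authentication path: one sibling hash per level, O(log n) in total.
            path = []
            for level in levels[:-1]:
                path.append(level[i ^ 1])
                i //= 2
            return path

        def verify_leaf(root, i, symbol, path):
            h = H(symbol)
            for sibling in path:
                h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
                i //= 2
            return h == root

        word = [bytes([b]) for b in b"codeword"]
        levels = merkle_tree(word)
        root = levels[-1][0]
        proof = open_leaf(levels, 3)
        print(verify_leaf(root, 3, word[3], proof))  # True
        print(verify_leaf(root, 3, b"?", proof))     # False: tampering detected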

    Relaxed locally correctable codes with nearly-linear block length and constant query complexity

    Locally correctable codes (LCCs) are codes $C: \Sigma^k \to \Sigma^n$ which admit local algorithms that can correct any individual symbol of a corrupted codeword via a minuscule number of queries. One of the central problems in algorithmic coding theory is to construct $O(1)$-query LCCs with minimal block length. Alas, the state of the art for such codes requires exponential block length to admit $O(1)$-query algorithms for local correction, despite much attention over the last two decades. This lack of progress prompted the study of relaxed LCCs, which allow the correction algorithm to abort (but not err) on a small fraction of the locations. This relaxation turned out to allow constant-query correction algorithms for codes with polynomial block length. Specifically, prior work showed that there exist $O(1)$-query relaxed LCCs that achieve nearly-quartic block length $n = k^{4+\alpha}$, for an arbitrarily small constant $\alpha > 0$. We construct an $O(1)$-query relaxed LCC with nearly-linear block length $n = k^{1+\alpha}$, for an arbitrarily small constant $\alpha > 0$. This significantly narrows the gap to the lower bound, which states that there are no $O(1)$-query relaxed LCCs with block length $n = k^{1+o(1)}$. In particular, this resolves an open problem raised by Gur, Ramnarayan, and Rothblum (ITCS 2018).

    Relaxed Local Correctability from Local Testing

    We cement the intuitive connection between relaxed local correctability and local testing by presenting a concrete framework for building a relaxed locally correctable code from any family of linear locally testable codes with sufficiently high rate. When instantiated using the locally testable codes of Dinur et al. (STOC 2022), this framework yields the first asymptotically good relaxed locally correctable and decodable codes with polylogarithmic query complexity, which finally closes the superpolynomial gap between query lower and upper bounds. Our construction combines high-rate locally testable codes of various sizes to produce a code that is locally testable at every scale: we can gradually "zoom in" to any desired codeword index, and a local tester at each step certifies that the next, smaller restriction of the input has low error. Our codes asymptotically inherit the rate and distance of any locally testable code used in the final step of the construction. Therefore, our technique also yields nonexplicit relaxed locally correctable codes with polylogarithmic query complexity that have rate and distance approaching the Gilbert-Varshamov bound.
    Comment: 18 pages.
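
    In schematic form, with every interface here assumed rather than taken from the paper, the "zoom in" corrector described above has the following control flow: at each scale a local tester certifies that the current restriction is close to a codeword, and the corrector then recurses into the smaller restriction containing the target index.

        def relaxed_correct(word, i, testers, restrict):
            # Return the symbol at index i, or None to reject.  testers[t] is
            # assumed to certify, with few queries, that the current view has
            # low error at scale t; restrict(view, i) is assumed to return the
            # next, smaller restriction containing index i.  Both are abstract
            # stand-ins, not constructions from the paper.
            view = word
            for tester in testers:              # largest scale first
                if not tester(view, i):
                    return None                 # corruption detected: reject
                view, i = restrict(view, i)     # "zoom in" one level
            return view[i]                      # constant-size view: read directly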

    On Relaxed Locally Decodable Codes for Hamming and Insertion-Deletion Errors

    Locally Decodable Codes (LDCs) are error-correcting codes $C: \Sigma^n \rightarrow \Sigma^m$ with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length $m$ that is super-polynomial in $n$, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson et al. showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Insdel LDCs were first studied by Ostrovsky and Paskin-Cherniavsky, and are further motivated by recent advances in DNA random-access bio-technologies, in which the goal is to retrieve individual files from a DNA storage database. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries over the binary alphabet. This answers a question explicitly raised by Gur and Lachish. Our result exhibits a "phase-transition"-type behavior on the codeword length for constant-query Hamming RLDCs. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with parameters matching those of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.

    On Explicit Constructions of Extremely Depth Robust Graphs

    A directed acyclic graph $G=(V,E)$ is said to be $(e,d)$-depth-robust if for every subset $S \subseteq V$ of $|S| \leq e$ nodes the graph $G - S$ still contains a directed path of length $d$. If the graph is $(e,d)$-depth-robust for any $e, d$ such that $e + d \leq (1-\epsilon)|V|$, then the graph is said to be $\epsilon$-extreme depth-robust. In the field of cryptography, (extremely) depth-robust graphs with low indegree have found numerous applications, including the design of side-channel resistant Memory-Hard Functions, Proofs of Space and Replication, and Computationally Relaxed Locally Correctable Codes. In these applications, it is desirable to ensure that the graphs are locally navigable, i.e., there is an efficient algorithm $\mathsf{GetParents}$, running in time $\mathrm{polylog}|V|$, which takes as input a node $v \in V$ and returns the set of $v$'s parents. We give the first explicit construction of locally navigable $\epsilon$-extreme depth-robust graphs with indegree $O(\log |V|)$. Previous constructions of $\epsilon$-extreme depth-robust graphs either had indegree $\tilde{\omega}(\log^2 |V|)$ or were not explicit.
    Comment: 12 pages, 1 figure. This is the full version of the paper published at STACS 2022. We noticed a mistake in the references for the computational intractability of the depth robustness of the graphs and fixed it.
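
    As a toy illustration of local navigability only (this skip-list-style DAG is not the paper's construction and is not claimed to be depth-robust), here is a graph on nodes 0, 1, 2, ... whose $\mathsf{GetParents}$ runs in $O(\log |V|)$ time using nothing but the node's index, with indegree $O(\log |V|)$:

        def get_parents(v):
            # Parents of node v are v - 1, v - 2, v - 4, v - 8, ...
            # Computed locally from v alone, in O(log v) time.
            parents, step = [], 1
            while step <= v:
                parents.append(v - step)
                step *= 2
            return parents

        print(get_parents(11))  # [10, 9, 7, 3]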

    Approximating Cumulative Pebbling Cost Is Unique Games Hard

    The cumulative pebbling complexity of a directed acyclic graph $G$ is defined as $\mathsf{cc}(G) = \min_P \sum_i |P_i|$, where the minimum is taken over all legal (parallel) black pebblings of $G$ and $|P_i|$ denotes the number of pebbles on the graph during round $i$. Intuitively, $\mathsf{cc}(G)$ captures the amortized Space-Time complexity of pebbling $m$ copies of $G$ in parallel. The cumulative pebbling complexity of a graph $G$ is of particular interest in the field of cryptography, as $\mathsf{cc}(G)$ is tightly related to the amortized Area-Time complexity of the Data-Independent Memory-Hard Function (iMHF) $f_{G,H}$ [AS15] defined using a constant-indegree directed acyclic graph (DAG) $G$ and a random oracle $H(\cdot)$. A secure iMHF should have amortized Space-Time complexity as high as possible, e.g., to deter a brute-force password attacker who wants to find $x$ such that $f_{G,H}(x) = h$. Thus, to analyze the (in)security of a candidate iMHF $f_{G,H}$, it is crucial to estimate the value $\mathsf{cc}(G)$, but currently upper and lower bounds for leading iMHF candidates differ by several orders of magnitude. Blocki and Zhou recently showed that it is $\mathsf{NP}$-hard to compute $\mathsf{cc}(G)$, but their techniques do not even rule out an efficient $(1+\varepsilon)$-approximation algorithm for any constant $\varepsilon > 0$. We show that for any constant $c > 0$, it is Unique Games hard to approximate $\mathsf{cc}(G)$ to within a factor of $c$. (See the paper for the full abstract.)
    Comment: 28 pages, updated figures and corrected typos.
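
    To make the quantity $\mathsf{cc}(G)$ concrete (this illustrates only the definition, not the hardness result), the toy code below evaluates the cumulative cost of two legal pebbling strategies on a directed path 0 -> 1 -> ... -> n-1. Each strategy yields an upper bound on $\mathsf{cc}(G)$; brute-force minimization over all pebblings is exponential, and the gap between the two bounds shows why the minimization matters.

        def cumulative_cost(rounds):
            # Cost of one fixed pebbling: sum over rounds i of |P_i|.
            return sum(len(P) for P in rounds)

        def pebble_path_keep_all(n):
            # Round i places a pebble on node i and never removes anything.
            return [set(range(i + 1)) for i in range(n)]

        def pebble_path_frugal(n):
            # Round i places a pebble on node i (its parent i - 1 is pebbled)
            # and drops the pebble on node i - 2 when it exists, so at most
            # two pebbles are ever kept on the graph.
            return [{i - 1, i} if i > 0 else {0} for i in range(n)]

        n = 100
        print(cumulative_cost(pebble_path_keep_all(n)))  # 5050, about n^2 / 2
        print(cumulative_cost(pebble_path_frugal(n)))    # 199, about 2n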