
    Computationally Relaxed Locally Decodable Codes, Revisited

    We revisit computationally relaxed locally decodable codes (crLDCs) (Blocki et al., Trans. Inf. Theory '21) and give two new constructions. Our first construction is a Hamming crLDC that is conceptually simpler than prior constructions, leveraging digital signature schemes and an appropriately chosen Hamming code. Our second construction is an extension of our Hamming crLDC to handle insertion-deletion (InsDel) errors, yielding an InsDel crLDC. This extension crucially relies on the noisy binary search techniques of Block et al. (FSTTCS '20) to handle InsDel errors. Both crLDC constructions have binary codeword alphabets, are resilient to a constant fraction of Hamming and InsDel errors, respectively, and under suitable parameter choices have poly-logarithmic locality and encoding length linear in the message length and polynomial in the security parameter. These parameters compare favorably to prior constructions in the poly-logarithmic locality regime.

    Honest Majority Multi-Prover Interactive Arguments

    Interactive arguments, and their (succinct) non-interactive and zero-knowledge counterparts, have seen growing deployment in real-world applications in recent years. Unfortunately, for large and complex statements, concrete proof generation costs can still be quite expensive. While recent work has sought to solve this problem by outsourcing proof computation to a group of workers in a privacy-preserving manner, current solutions still require each worker to do work on roughly the same order as a single-prover solution. We introduce the Honest Majority Multi-Prover (HMMP) model for interactive arguments. In these arguments, we distribute prover computation among $M$ collaborating, but mutually distrusting, provers. All provers receive the same inputs and have no private inputs; any $t < M/2$ provers may be statically corrupted before generation of public parameters, and all communication is done via an authenticated broadcast channel. In contrast with the recent works of Ozdemir and Boneh (USENIX '22) and Dayama et al. (PETS '22), we target prover efficiency over privacy. We show that: (1) any interactive argument where the prover computation is suitably divisible into $M$ sub-computations can be transformed into an interactive argument in the HMMP model; and (2) arguments that are obtained by compiling polynomial interactive oracle proofs with polynomial commitment schemes admit HMMP-model constructions that experience a (roughly) $1/M$ speedup over a single-prover solution. The transformation of (1) preserves computational (knowledge) soundness and zero-knowledge, and can be made non-interactive via the Fiat-Shamir transformation. The constructions of (2) showcase that there are efficiency gains in proof distribution when privacy is not a concern.

    Locally Decodable/Correctable Codes for Insertions and Deletions

    Recent efforts in coding theory have focused on building codes for insertions and deletions, called insdel codes, with optimal trade-offs between their redundancy and their error-correction capabilities, as well as efficient encoding and decoding algorithms. In many applications, polynomial running time may still be prohibitively expensive, which has motivated the study of codes with super-efficient decoding algorithms. These considerations have led to the well-studied notions of Locally Decodable Codes (LDCs) and Locally Correctable Codes (LCCs). Inspired by these notions, Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015) generalized Hamming LDCs to insertions and deletions. To the best of our knowledge, these are the only known results that study the analogues of Hamming LDCs in channels performing insertions and deletions. Here we continue the study of insdel codes that admit local algorithms. Specifically, we reprove the results of Ostrovsky and Paskin-Cherniavsky for insdel LDCs using a different set of techniques. We also observe that the techniques extend to constructions of LCCs. Specifically, we obtain insdel LDCs and LCCs from their Hamming LDC and LCC analogues, respectively. The rate and error-correction capability blow up only by a constant factor, while the query complexity blows up by a polylogarithmic factor in the block length. Since insdel locally decodable/correctable codes are scarcely studied in the literature, we believe our results and techniques may lead to further research. In particular, we conjecture that constant-query insdel LDCs/LCCs do not exist.
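
    A compact way to state the parameter translation described above, using ad-hoc notation that is not taken from the paper: if the starting point is a Hamming LDC with rate $r$, error tolerance $\delta$, and query complexity $q(n)$ on block length $n$, then the resulting insdel LDC achieves, up to unspecified constants,
    \[
      \text{rate } \Theta(r), \qquad \text{error tolerance } \Theta(\delta), \qquad \text{query complexity } q(n)\cdot \operatorname{polylog}(n),
    \]
    and analogously for LCCs.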

    Memory-Hard Puzzles in the Standard Model with Applications to Memory-Hard Functions and Resource-Bounded Locally Decodable Codes

    We formally introduce, define, and construct memory-hard puzzles. Intuitively, for a difficulty parameter $t$, a cryptographic puzzle is memory-hard if any parallel random access machine (PRAM) algorithm with small cumulative memory complexity ($\ll t^2$) cannot solve the puzzle; moreover, such puzzles should be both easy to generate and be solvable by a sequential RAM algorithm running in time $t$. Our definitions and constructions of memory-hard puzzles are in the standard model, assuming the existence of indistinguishability obfuscation ($i\mathcal{O}$) and one-way functions (OWFs), and additionally assuming the existence of a memory-hard language. Intuitively, a language is memory-hard if it is undecidable by any PRAM algorithm with small cumulative memory complexity, while a sequential RAM algorithm running in time $t$ can decide the language. Our definitions and constructions of memory-hard objects are the first such definitions and constructions in the standard model without relying on idealized assumptions (such as random oracles). We give two applications which highlight the utility of memory-hard puzzles. For our first application, we give a construction of a (one-time) memory-hard function (MHF) in the standard model, using memory-hard puzzles and additionally assuming $i\mathcal{O}$ and OWFs. For our second application, we show any cryptographic puzzle (e.g., memory-hard, time-lock) can be used to construct resource-bounded locally decodable codes (LDCs) in the standard model, answering an open question of Blocki, Kulkarni, and Zhou (ITC 2020). Resource-bounded LDCs achieve better rate and locality than their classical counterparts under the assumption that the adversarial channel is resource bounded (e.g., a low-depth circuit). Prior constructions of MHFs and resource-bounded LDCs required idealized primitives like random oracles.
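
    Stated a bit more formally (this paraphrase and its notation are our own, not the paper's; in particular, writing cmc for cumulative memory complexity and reading "cannot solve" as negligible success probability are assumptions), the two requirements on a memory-hard puzzle with difficulty parameter $t$ are roughly
    \[
      \exists \text{ sequential RAM } \mathcal{B}: \ \mathcal{B} \text{ solves the puzzle in time } t,
    \]
    \[
      \forall \text{ PRAM } \mathcal{A} \text{ with } \operatorname{cmc}(\mathcal{A}) \ll t^{2}: \ \Pr\big[\mathcal{A} \text{ solves the puzzle}\big] \le \operatorname{negl}(\lambda),
    \]
    in addition to the puzzle instance itself being easy to generate.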

    On Soundness Notions for Interactive Oracle Proofs

    Interactive oracle proofs (IOPs) (Ben-Sasson et al., TCC 2016) have emerged as a powerful model for proof systems that generalizes both Interactive Proofs (IPs) and Probabilistically Checkable Proofs (PCPs). While IOPs are not any more powerful than PCPs from a complexity theory perspective, their potential to create succinct proofs and arguments has been demonstrated by many recent constructions achieving better parameters such as total proof length, alphabet size, and query complexity. In this work, we establish new results on the relationship between various notions of soundness for IOPs. First, we formally generalize the notion of round-by-round soundness (Canetti et al., STOC 2019) and round-by-round knowledge soundness (Chiesa et al., TCC 2019). Given this generalization, we then examine its relationship to the notions of generalized special soundness (Attema et al., CRYPTO 2021) and generalized special unsoundness (Attema et al., TCC 2022). We show that: 1. generalized special soundness implies generalized round-by-round soundness; 2. generalized round-by-round knowledge soundness implies generalized special soundness; 3. generalized special soundness does not imply generalized round-by-round knowledge soundness; 4. generalized round-by-round soundness (resp., special unsoundness) is an upper bound (resp., a lower bound) on standard soundness, and that this relationship is tight when the round-by-round soundness and special unsoundness errors are equal; and 5. any special sound IOP can be transformed via (a variant of) the Fiat-Shamir transformation into a non-interactive proof that is adaptively sound in the Quantum Random Oracle Model.
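
    Using shorthand not taken from the paper (SS for generalized special soundness, RBR for generalized round-by-round soundness, RBR-K for generalized round-by-round knowledge soundness, SU for generalized special unsoundness, and $\epsilon$ for the standard soundness error), items 1-4 above can be summarized as
    \[
      \text{RBR-K} \;\Longrightarrow\; \text{SS} \;\Longrightarrow\; \text{RBR}, \qquad \text{SS} \;\not\Longrightarrow\; \text{RBR-K},
    \]
    \[
      \epsilon_{\mathrm{SU}} \;\le\; \epsilon \;\le\; \epsilon_{\mathrm{RBR}},
    \]
    with both bounds on $\epsilon$ becoming tight when $\epsilon_{\mathrm{SU}} = \epsilon_{\mathrm{RBR}}$.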

    Public-Coin Zero-Knowledge Arguments with (almost) Minimal Time and Space Overheads

    Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and have recently become more and more practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space efficiency of the protocol. For every $\mathsf{NP}$ relation that can be verified in time $T$ and space $S$, we construct a public-coin zero-knowledge argument in which the prover runs in time $T \cdot \mathrm{polylog}(T)$ and space $S \cdot \mathrm{polylog}(T)$. Our proofs have length $\mathrm{polylog}(T)$ and the verifier runs in time $T \cdot \mathrm{polylog}(T)$ (and space $\mathrm{polylog}(T)$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups. Our main technical contribution is a new space-efficient polynomial commitment scheme for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $P \colon \mathbb{F}^n \rightarrow \mathbb{F}$ so that later on it can prove to a receiver statements of the form $P(x) = y$. In our scheme, which builds on the commitment schemes of Bootle et al. (Eurocrypt 2016) and Bünz et al. (S&P 2018), we assume that the sender is given multi-pass streaming access to the evaluations of $P$ on the Boolean hypercube, and we show how to implement both the sender and receiver in roughly time $2^n$ and space $n$ and with communication complexity roughly $n$.
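
    To make the streaming setting above concrete, here is a minimal Python sketch (not the paper's commitment scheme; the modulus and the example polynomial are arbitrary illustrative choices) of the standard fact that a multilinear polynomial can be evaluated at a point from a single pass over its $2^n$ evaluations on the Boolean hypercube while keeping only $O(n)$ field elements in memory: each streamed value is weighted by the corresponding multilinear Lagrange basis factor and added to an accumulator.

# Illustrative sketch only: streaming evaluation of a multilinear polynomial P
# at a point x in F^n, given P's evaluations on {0,1}^n in lexicographic order.
P_MOD = 2**61 - 1  # arbitrary prime modulus standing in for the field F


def eval_multilinear_streaming(stream, x, p=P_MOD):
    """P(x) = sum_{b in {0,1}^n} P(b) * prod_i (b_i*x_i + (1-b_i)*(1-x_i))."""
    n = len(x)
    acc = 0
    for index, value in enumerate(stream):
        # Recover the bits of b from the stream position; O(n) working memory.
        weight = 1
        for i in range(n):
            b_i = (index >> (n - 1 - i)) & 1
            factor = (x[i] if b_i else 1 - x[i]) % p
            weight = weight * factor % p
        acc = (acc + value * weight) % p
    return acc


if __name__ == "__main__":
    # Example: P is the multilinear extension of f(b1, b2) = b1 + 2*b2.
    evals = [0, 2, 1, 3]   # f(00), f(01), f(10), f(11)
    point = [5, 7]         # evaluate the extension at x = (5, 7)
    print(eval_multilinear_streaming(iter(evals), point))  # prints 19

    The paper's sender and receiver do considerably more than this (they commit to $P$ and prove evaluation statements), but the sketch shows why multi-pass streaming access to the hypercube evaluations is enough to work in time roughly $2^n$ and space roughly $n$.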

    Fiat-Shamir Security of FRI and Related SNARKs

    We establish new results on the Fiat-Shamir (FS) security of several protocols that are widely used in practice, and we provide general tools for establishing similar results for others. More precisely, we: (1) prove the FS security of the FRI and batched FRI protocols; (2) analyze a general class of protocols, which we call $\delta$-correlated, that use low-degree proximity testing as a subroutine (this includes many Plonk-like protocols (e.g., Plonky2 and Redshift), ethSTARK, RISC Zero, etc.); and (3) prove FS security of the aforementioned Plonk-like protocols, and sketch how to prove the same for the others. We obtain our first result by analyzing the round-by-round (RBR) soundness and RBR knowledge soundness of FRI. For the second result, we prove that if a $\delta$-correlated protocol is RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials, then it is RBR (knowledge) sound in general. Equipped with this tool, we prove our third result by formally showing that Plonk-like protocols are RBR (knowledge) sound under the assumption that adversaries always send low-degree polynomials. We then outline analogous arguments for the remainder of the aforementioned protocols. To the best of our knowledge, ours is the first formal analysis of the Fiat-Shamir security of FRI and widely deployed protocols that invoke it.
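
    For context, the Fiat-Shamir transformation replaces the verifier's public-coin challenges with hashes of the transcript so far. The following is a minimal generic Python sketch of that challenge derivation, not tied to FRI or any specific protocol; the hash choice, labels, and encoding are illustrative assumptions.

import hashlib

# Generic Fiat-Shamir challenge derivation (illustrative sketch only): each
# public-coin verifier challenge is derived by hashing the transcript of
# prover messages so far, so the prover can produce the proof non-interactively.
class FiatShamirTranscript:
    def __init__(self, protocol_label: bytes):
        self._state = hashlib.sha256(protocol_label)

    def absorb(self, message: bytes) -> None:
        """Append a prover message to the transcript."""
        # Length-prefix each message so concatenation of distinct transcripts
        # cannot collide.
        self._state.update(len(message).to_bytes(8, "big"))
        self._state.update(message)

    def challenge(self, num_bytes: int = 32) -> int:
        """Derive the next verifier challenge from the current transcript."""
        digest = self._state.copy().digest()
        # Feed the derived challenge back in so later challenges depend on it.
        self.absorb(digest)
        return int.from_bytes(digest[:num_bytes], "big")


if __name__ == "__main__":
    t = FiatShamirTranscript(b"example-protocol-v1")
    t.absorb(b"first prover message (e.g., a polynomial commitment)")
    alpha = t.challenge()   # plays the role of a random verifier challenge
    t.absorb(b"second prover message")
    beta = t.challenge()
    print(hex(alpha), hex(beta))

    The paper's contribution is the soundness analysis of this transformation for FRI and protocols built on it (via round-by-round soundness), not the transformation itself.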

    $P_4$-free Partition and Cover Numbers and Applications

    $P_4$-free graphs -- also known as cographs, complement-reducible graphs, or hereditary Dacey graphs -- have been well studied in graph theory. Motivated by computer science and information theory applications, our work encodes (flat) joint probability distributions and Boolean functions as bipartite graphs and studies bipartite $P_4$-free graphs. For these applications, the graph properties of edge partitioning and covering a bipartite graph using the minimum number of these graphs are particularly relevant. Previously, such graph properties have appeared in leakage-resilient cryptography and (variants of) coloring problems. Interestingly, our covering problem is closely related to the well-studied problem of product/Prague dimension of loopless undirected graphs, which allows us to employ algebraic lower-bounding techniques for the product/Prague dimension. We prove that computing these numbers is $\mathsf{NP}$-complete, even for bipartite graphs. We establish a connection to the (unsolved) Zarankiewicz problem to show that there are bipartite graphs with size-$N$ partite sets such that these numbers are at least $\epsilon\cdot N^{1-2\epsilon}$, for $\epsilon\in\{1/3,1/4,1/5,\dotsc\}$. Finally, we accurately estimate these numbers for bipartite graphs encoding well-studied Boolean functions from circuit complexity, such as set intersection, set disjointness, and inequality.

    For applications in information theory and communication & cryptographic complexity, we consider a system where a setup samples from a (flat) joint distribution and gives the participants, Alice and Bob, their portion from this joint sample. Alice and Bob's objective is to non-interactively establish a shared key and extract the left-over entropy from their portion of the samples as independent private randomness. A genie, who observes the joint sample, provides appropriate assistance to help Alice and Bob with their objective. Lower bounds on the minimum size of the genie's assistance translate into communication and cryptographic lower bounds. We show that (the $\log_2$ of) the $P_4$-free partition number of a graph encoding the joint distribution that the setup uses is equivalent to the size of the genie's assistance. Consequently, the joint distributions corresponding to the bipartite graphs constructed above with high $P_4$-free partition numbers require more assistance from the genie.

    As a representative application in non-deterministic communication complexity, we study the communication complexity of nondeterministic protocols augmented by access to the equality oracle at the output. We show that (the $\log_2$ of) the $P_4$-free cover number of the bipartite graph encoding a Boolean function $f$ is equivalent to the minimum size of the nondeterministic input required by the parties (referred to as the communication complexity of $f$ in this model). Consequently, the functions corresponding to the bipartite graphs with high $P_4$-free cover numbers have high communication complexity. Furthermore, there are functions with communication complexity close to the naïve protocol where the nondeterministic input reveals a party's input. Finally, access to the equality oracle reduces the communication complexity of computing set disjointness by a constant factor, in contrast to the model where parties do not have access to the equality oracle. To compute the inequality function, we show an exponential reduction in the communication complexity, and this bound is optimal. On the other hand, access to the equality oracle is (nearly) useless for computing set intersection.
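
    As background for the terminology above, a graph is $P_4$-free (equivalently, a cograph) exactly when no four of its vertices induce a path on four vertices. The following brute-force $O(n^4)$ Python sketch checks this property; it is illustrative only, and the adjacency-set representation is an assumption, not anything from the paper.

from itertools import permutations

# Illustrative brute-force check that a graph is P4-free (i.e., a cograph):
# a graph is P4-free iff no ordered quadruple a-b-c-d induces a path, meaning
# the three path edges are present and the three chords are absent.
def is_p4_free(adj):
    """adj: dict mapping each vertex to the set of its neighbours."""
    vertices = list(adj)
    for a, b, c, d in permutations(vertices, 4):
        induces_path = (
            b in adj[a] and c in adj[b] and d in adj[c]                  # path edges
            and c not in adj[a] and d not in adj[a] and d not in adj[b]  # no chords
        )
        if induces_path:
            return False
    return True


if __name__ == "__main__":
    # P4 itself: 1-2-3-4 is an induced path, so the graph is not P4-free.
    p4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    # C4 (a 4-cycle) is a cograph, hence P4-free.
    c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(is_p4_free(p4), is_p4_free(c4))   # expected: False True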

    Visual onset expands subjective time

    We report a distortion of subjective time perception in which the duration of a first interval is perceived to be longer than that of a succeeding interval of the same duration. The amount of time expansion depends on the onset type defining the first interval. When a stimulus appears abruptly, its duration is perceived to be longer than when it appears following a stationary array. The difference in the processing time for the stimulus onset and motion onset, measured as reaction times, agrees with the difference in time expansion. Our results suggest that initial transient responses for a visual onset serve as a temporal marker for time estimation, and a systematic change in the processing time for onsets affects perceived time.