Erasures vs. Errors in Local Decoding and Property Testing
We initiate the study of the role of erasures in local decoding and use our understanding to prove a separation between erasure-resilient and tolerant property testing. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures.
Motivated by applications in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of a famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. We use this result to exhibit a property which is testable with a number of queries independent of the length of the input in the presence of erasures, but requires a number of queries that depends on the input length, n, for tolerant testing. We further study approximate locally list decodable codes that work against erasures and use them to strengthen our separation by constructing a property which is testable with a constant number of queries in the presence of erasures, but requires n^{Omega(1)} queries for tolerant testing.
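As a concrete illustration of the local decoding underlying these results, the sketch below runs the classical two-query Hadamard decoder over a channel with erasures (this is a minimal illustration, not the paper's list-decoding algorithm; all parameters are chosen for demonstration only). The decoder uses the identity ⟨x, a⟩ + ⟨x, a + e_i⟩ = x_i and simply resamples a whenever it hits an erased position:

```python
import itertools, random

def hadamard_encode(x):
    """Hadamard codeword of x in F_2^n: the inner product <x, a> mod 2
    for every point a in F_2^n, in lexicographic order."""
    return [sum(xi * ai for xi, ai in zip(x, a)) % 2
            for a in itertools.product([0, 1], repeat=len(x))]

def idx(a):
    """Position of point a in the codeword (a read as a binary number)."""
    return int("".join(map(str, a)), 2)

def local_decode_bit(word, n, i, tries=50):
    """Recover x_i with 2 queries per attempt, using <x,a> + <x,a+e_i> = x_i.
    Erased positions hold None; on hitting an erasure, resample a."""
    for _ in range(tries):
        a = [random.randint(0, 1) for _ in range(n)]
        b = a.copy()
        b[i] ^= 1                         # b = a + e_i
        qa, qb = word[idx(a)], word[idx(b)]
        if qa is None or qb is None:      # erasure: retry with fresh randomness
            continue
        return (qa + qb) % 2
    return None                           # too many erasures hit

random.seed(0)
n = 6
x = [1, 0, 1, 1, 0, 0]
word = hadamard_encode(x)
for j in random.sample(range(len(word)), k=len(word) // 3):
    word[j] = None                        # erase a third of the positions
decoded = [local_decode_bit(word, n, i) for i in range(n)]
print(decoded)  # recovers x: with erasures only, any answered query is correct
```

Note the contrast with errors: an erasure is visible to the decoder, so every successfully answered attempt is guaranteed correct, whereas an error silently corrupts the answer.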
Next, we study the general relationship between local decoding in the presence of errors and in the presence of erasures. We observe that every locally (uniquely or list) decodable code that works in the presence of errors also works in the presence of twice as many erasures (with the same parameters up to constant factors). We show that there is also an implication in the other direction for locally decodable codes (with unique decoding): specifically, that the existence of a locally decodable code that works in the presence of erasures implies the existence of a locally decodable code that works in the presence of errors and has related parameters. However, it remains open whether there is an implication in the other direction for locally list decodable codes. We relate this question to other open questions in local decoding.
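The first observation above has a simple quantitative intuition: filling each erased position with a uniformly random bit converts erasures into errors, and in expectation only half of the filled-in positions end up wrong, so a 2δ fraction of erasures becomes roughly a δ fraction of errors. A quick simulation of this reduction (the random "codeword" here is just a placeholder, not an actual code):

```python
import random

def fill_erasures(word):
    """Replace each erased position (None) with a uniform random bit."""
    return [random.randint(0, 1) if c is None else c for c in word]

random.seed(1)
N = 100_000
codeword = [random.randint(0, 1) for _ in range(N)]   # placeholder codeword
received = codeword.copy()
for j in random.sample(range(N), k=N // 5):           # erase a 0.2 fraction
    received[j] = None

filled = fill_erasures(received)
error_rate = sum(f != c for f, c in zip(filled, codeword)) / N
print(error_rate)  # about 0.10: half of the filled-in erasures become errors
```

This is exactly why a decoder tolerating a δ fraction of errors handles a 2δ fraction of erasures with essentially the same parameters.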
Proving as Fast as Computing: Succinct Arguments with Constant Prover Overhead
Succinct arguments are proof systems that allow a powerful, but untrusted, prover to convince a weak verifier that an input belongs to a language, with communication that is much shorter than the witness. Such arguments, which grew out of the theory literature, are now drawing immense interest in practice as well, where a key bottleneck that has arisen is the high computational cost of \emph{proving} correctness.
In this work we address this problem by constructing succinct arguments for general computations, expressed as Boolean circuits (of bounded fan-in), with a \emph{strictly linear} size prover. The soundness error of the protocol is an arbitrarily small constant. Prior to this work, succinct arguments were known with a \emph{quasi-}linear size prover for general Boolean circuits, or with a linear-size prover only for arithmetic circuits defined over large finite fields.
In more detail, for every Boolean circuit C, we construct an O(log|C|)-round argument-system in which the prover can be implemented by a size O(|C|) Boolean circuit (given as input both the instance x and the witness w), with arbitrarily small constant soundness error and using poly(λ, log|C|) communication, where λ denotes the security parameter. The verifier can be implemented by a size O(|x|) + poly(λ, log|C|) circuit following a size O(|C|) private pre-processing step, or, alternatively, by using a purely public-coin protocol (with no pre-processing) with a verifier of size quasi-linear in |C|. The protocol can be made zero-knowledge using standard techniques (and with similar parameters). The soundness of our protocol is computational and relies on the existence of collision-resistant hash functions that can be computed by linear-size circuits, such as those proposed by Applebaum et al. (ITCS, 2017).
At the heart of our construction is a new information-theoretic \emph{interactive oracle proof} (IOP), an interactive analog of a PCP, for circuit satisfiability, with constant prover overhead. The improved efficiency of our IOP is obtained by bypassing a barrier faced by prior IOP constructions, which needed to (either explicitly or implicitly) encode the entire computation using a multiplication code.
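For context, the standard way such an IOP is compiled into a succinct argument (following Kilian's paradigm) is to commit to each long prover message with a Merkle tree built from a collision-resistant hash, and later open only the few positions the verifier queries. A minimal sketch of that commitment, using SHA-256 as a stand-in for the linear-size hash functions mentioned above:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_commit(leaves):
    """Build a Merkle tree over the leaves (count must be a power of two);
    returns all layers, root layer last."""
    layer = [H(leaf) for leaf in leaves]
    layers = [layer]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        layers.append(layer)
    return layers

def merkle_open(layers, i):
    """Authentication path for leaf i: one sibling hash per level."""
    path = []
    for layer in layers[:-1]:
        path.append(layer[i ^ 1])
        i //= 2
    return path

def merkle_verify(root, leaf, i, path):
    """Recompute the root from the leaf and its authentication path."""
    h = H(leaf)
    for sibling in path:
        h = H(h + sibling) if i % 2 == 0 else H(sibling + h)
        i //= 2
    return h == root

# the prover commits to a long oracle message with a single short root,
# then opens only the positions the verifier actually queries
message = [bytes([i]) for i in range(32)]
layers = merkle_commit(message)
root = layers[-1][0]
q = 13
proof = merkle_open(layers, q)
print(merkle_verify(root, message[q], q, proof))  # True
```

The commitment is binding under collision resistance, and each opening costs only a logarithmic number of hashes, which is why the hash function's circuit size directly drives the prover overhead of the compiled argument.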
Sampling-based proofs of almost-periodicity results and algorithmic applications
We give new combinatorial proofs of known almost-periodicity results for
sumsets of sets with small doubling in the spirit of Croot and Sisask, whose
almost-periodicity lemma has had far-reaching implications in additive
combinatorics. We provide an alternative (and L^p-norm free) point of view,
which allows for proofs to easily be converted to probabilistic algorithms that
decide membership in almost-periodic sumsets of dense subsets of F_2^n.
As an application, we give a new algorithmic version of the quasipolynomial
Bogolyubov-Ruzsa lemma recently proved by Sanders. Together with the results by
the last two authors, this implies an algorithmic version of the quadratic
Goldreich-Levin theorem in which the number of terms in the quadratic Fourier
decomposition of a given function is quasipolynomial in the error parameter,
compared with the exponential dependence previously obtained by the authors. It
also improves the running time of the algorithm from an exponential to a
quasipolynomial dependence on the error parameter.
We also give an application to the problem of finding large subspaces in
sumsets of dense sets. Green showed that the sumset of a dense subset of F_2^n
contains a large subspace. Using Fourier analytic methods, Sanders proved that
such a subspace must have dimension bounded below by a constant times the
density times n. We provide an alternative (and L^p norm-free) proof of a
comparable bound, which is analogous to a recent result of Croot, Laba and
Sisask in the integers.
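The flavor of the sampling-based algorithms can be illustrated on the simplest case: deciding whether a point x lies in the sumset A + A of a dense set A ⊆ F_2^n by sampling random elements of A and looking for a witness. This toy one-sided test is not the paper's algorithm, and its parameters are illustrative only:

```python
import random

def sample_membership(x, A, samples=200):
    """One-sided randomized test for membership of x in the sumset A + A
    over F_2^n (addition is XOR): x is in A + A iff x ^ a lies in A for
    some a in A, so sampling random a in A finds a witness with high
    probability whenever x has many representations."""
    A_list = list(A)
    for _ in range(samples):
        a = random.choice(A_list)
        if x ^ a in A:
            return True        # witness: x = a + (x ^ a)
    return False               # no witness seen; likely few or no representations

random.seed(2)
n = 12
universe = range(1 << n)
A = set(random.sample(universe, (1 << n) // 4))   # density 1/4
# in a dense random set, a typical x has about |A|^2 / 2^n representations,
# so a handful of samples suffices to find one
hits = sum(sample_membership(x, A) for x in random.sample(universe, 50))
print(hits)  # typically 50: every tested point is found to lie in A + A
```

The interesting regime, addressed by the almost-periodicity results above, is certifying membership for structured (almost-periodic) parts of the sumset where representation counts are provably large, so that a small sample suffices.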
Simple Constructions of Unique Neighbor Expanders from Error-correcting Codes
In this note, we give very simple constructions of unique neighbor expander
graphs starting from spectral or combinatorial expander graphs of mild
expansion. These constructions and their analysis are simple variants of the
constructions of LDPC error-correcting codes from expanders given by
Sipser-Spielman\cite{SS96} (and Tanner\cite{Tanner81}), and of their analysis. We
also show how to obtain expanders with many unique neighbors using similar
ideas.
There were many exciting results on this topic recently, starting with
Asherov-Dinur\cite{AD23} and Hsieh-McKenzie-Mohanty-Paredes\cite{HMMP23}, who
gave a similar construction of unique neighbor expander graphs, but using more
sophisticated ingredients (such as almost-Ramanujan graphs) and a more involved
analysis. Subsequent beautiful works of Cohen-Roth-TaShma\cite{CRT23} and
Golowich\cite{Golowich23} gave even stronger objects (lossless expanders), but
also using sophisticated ingredients.
The main contribution of this work is that we obtain much more elementary
constructions of unique neighbor expanders, with a simpler analysis.
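For reference, a bipartite graph is a unique neighbor expander if every small set S of left vertices has some right vertex adjacent to exactly one element of S. A brute-force check of this property on a small random left-regular graph (a stand-in for the expanders used above; all parameters below are illustrative):

```python
import itertools, random
from collections import Counter

def has_unique_neighbor(S, nbrs):
    """True if some right vertex is adjacent to exactly one vertex of S."""
    counts = Counter(v for u in S for v in nbrs[u])
    return any(c == 1 for c in counts.values())

random.seed(3)
L, R, d = 30, 40, 5
# random left-d-regular bipartite graph (a stand-in for a mild expander)
nbrs = [random.sample(range(R), d) for _ in range(L)]

# check the unique-neighbor property for all left sets of size at most 3
sets = [S for s in range(1, 4) for S in itertools.combinations(range(L), s)]
frac = sum(has_unique_neighbor(S, nbrs) for S in sets) / len(sets)
print(frac)  # close to 1 for a random sparse graph
```

Unique neighbors are exactly what makes the Sipser-Spielman decoding argument work: a right check node seeing a single corrupted variable detects it, which is why these graphs and LDPC codes from expanders are such close relatives.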
Local Proofs Approaching the Witness Length
Interactive oracle proofs (IOPs) are a hybrid between interactive proofs and PCPs. In an IOP the prover is allowed to interact with a verifier (like in an interactive proof) by sending relatively long messages to the verifier, who in turn is only allowed to query a few of the bits that were sent (like in a PCP).
In this work we construct, for a large class of NP relations, IOPs in which the communication complexity approaches the witness length. More precisely, for any NP relation for which membership can be decided in polynomial time and bounded polynomial space (e.g., SAT, Hamiltonicity, Clique, Vertex-Cover, etc.) and for any constant γ > 0, we construct an IOP with communication complexity (1 + γ)·m, where m is the original witness length. The number of rounds, as well as the number of queries made by the IOP verifier, are constant.
This result improves over prior works on short IOPs/PCPs in two ways. First, the communication complexity in these short IOPs is proportional to the complexity of verifying the NP witness, which can be polynomially larger than the witness size. Second, even ignoring the difference between witness length and non-deterministic verification time, prior works incur (at the very least) a large constant multiplicative overhead to the communication complexity.
In particular, as a special case, we also obtain an IOP for Circuit-SAT with rate approaching 1: the communication complexity is (1 + γ)·t, for circuits of size t and any constant γ > 0. This improves upon the prior state-of-the-art work of Ben-Sasson et al. (ICALP, 2017), who construct an IOP for Circuit-SAT with communication length c·t for a large (unspecified) constant c.
Our proof leverages recent constructions of high-rate locally testable tensor codes. In particular, we bypass the barrier imposed by the low rate of multiplication codes (e.g., Reed-Solomon, Reed-Muller, or AG codes), a core component in all known short PCP/IOP constructions.
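To see why tensor codes help with rate: the tensor product of two linear codes encodes a message matrix by encoding its columns with one code and its rows with the other, so the rates multiply, and every row and column of a tensor codeword is itself a codeword of the base code, which is the structure exploited for local testing. A small sketch over F_2, using the single-parity-check code as a toy high-rate base code (not one of the codes used in the paper):

```python
import numpy as np

def parity_code_gen(k):
    """Generator matrix of the [k+1, k] single-parity-check code over F_2."""
    return np.concatenate([np.eye(k, dtype=int),
                           np.ones((k, 1), dtype=int)], axis=1)

def tensor_encode(M, G1, G2):
    """Tensor codeword of the k x k message matrix M: every column of the
    result is a codeword of the code generated by G1, and every row is a
    codeword of the code generated by G2, so the rates multiply."""
    return (G1.T @ M @ G2) % 2

np.random.seed(4)
k = 4
G = parity_code_gen(k)   # rate k/(k+1); the tensor code has rate (k/(k+1))**2
M = np.random.randint(0, 2, size=(k, k))
C = tensor_encode(M, G, G)
# local structure: every row and every column has even weight,
# i.e., is itself a parity-check codeword
print(C.shape)                                                     # (5, 5)
print((C.sum(axis=0) % 2).tolist(), (C.sum(axis=1) % 2).tolist())  # all zeros
```

With a high-rate base code, the tensor code's rate stays close to 1, whereas multiplication codes such as Reed-Solomon cannot exceed rate 1/2 if products of codewords are to remain in a low-degree code, which is the barrier the abstract refers to.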