    Better lossless condensers through derandomized curve samplers

    Lossless condensers are unbalanced expander graphs with expansion close to optimal. Equivalently, they may be viewed as functions that use a short random seed to map a source on n bits to a source on many fewer bits while preserving all of the min-entropy. It is known how to build lossless condensers when the graphs are slightly unbalanced, from the work of M. Capalbo et al. (2002). The highly unbalanced case is also important, but the only known construction does not condense the source well. We give explicit constructions of lossless condensers that condense close to optimally and use near-optimal seed length. Our main technical contribution is a randomness-efficient method for sampling F^D (where F is a field) with low-degree curves. This problem was addressed before in the works of E. Ben-Sasson et al. (2003) and D. Moshkovitz and R. Raz (2006), but those solutions apply only to degree-one curves, i.e., lines. Our technique is new and elegant: we use sub-sampling and obtain our curve samplers by composing a sequence of low-degree manifolds, starting with high-dimension, low-degree manifolds and proceeding through manifolds of lower and lower dimension with (moderately) growing degrees, until we finish with dimension-one, low-degree manifolds, i.e., curves. The technique may be of independent interest.
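    As a concrete illustration of the basic object here (not of the paper's derandomized construction), the following Python sketch samples a degree-d curve in F_q^m by Lagrange interpolation through d+1 anchor points. The naive sampler below spends (d+1)·m·log q random bits on the anchors; reducing this randomness cost is exactly what the paper addresses. All names and parameter values are illustrative.

```python
import random

q = 101  # a prime, so F_q is the integers mod q (illustrative)
m = 3    # dimension of the ambient space F_q^m
d = 2    # degree of the sampled curve

def lagrange_basis(ts, i, t):
    """Evaluate the i-th Lagrange basis polynomial for nodes ts at t, mod q."""
    num, den = 1, 1
    for j, tj in enumerate(ts):
        if j != i:
            num = num * (t - tj) % q
            den = den * (ts[i] - tj) % q
    return num * pow(den, -1, q) % q

def curve_through(points, t):
    """Point at parameter t on the degree-<=d curve through `points`,
    where points[i] is assigned the parameter value i."""
    ts = list(range(len(points)))
    coeffs = [lagrange_basis(ts, i, t) for i in range(len(points))]
    return tuple(sum(c * p[k] for c, p in zip(coeffs, points)) % q
                 for k in range(m))

# Naive sampler: d+1 uniformly random anchors determine the curve ...
anchors = [tuple(random.randrange(q) for _ in range(m)) for _ in range(d + 1)]
# ... whose q points form the sample.
sampled_curve = [curve_through(anchors, t) for t in range(q)]
assert sampled_curve[0] == anchors[0] and sampled_curve[2] == anchors[2]
```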

    Local list decoding of homomorphisms

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006; by Elena Grigorescu. Includes bibliographical references (leaves 47-49). We investigate the local list-decodability of codes whose codewords are group homomorphisms. The study of such codes was initiated by Goldreich and Levin with their seminal work on decoding the Hadamard code. Many of the recent abstractions of their initial algorithm focus on locally decodable codes (LDCs) over finite fields. We derive our algorithmic approach from the list decoding of the Reed-Muller code over finite fields proposed by Sudan, Trevisan and Vadhan. Given an abelian group G and a fixed abelian group H, we give combinatorial bounds on the number of homomorphisms that have agreement ε with an oracle-access function f : G → H. Our bounds are polynomial in 1/ε, where the degree of the polynomial depends on H. Here ε depends on the distance parameter of the code; namely, we take ε to be slightly greater than 1 minus the minimum distance. Furthermore, we give a local list-decoding algorithm for the homomorphisms that agree with a function f on an ε fraction of the domain, with running time poly(1/ε, log |G|).
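    To make the combinatorial object concrete, here is a brute-force Python sketch (not the thesis's local algorithm): it enumerates all homomorphisms Z_n → Z_m and returns those whose agreement with a given f exceeds a threshold eps. The thesis bounds the size of such lists and recovers them with only local access to f; the names and the toy example below are illustrative.

```python
def homomorphisms(n, m):
    """All homomorphisms Z_n -> Z_m: x -> c*x mod m is well defined on Z_n
    exactly when c*n = 0 (mod m)."""
    return [c for c in range(m) if (c * n) % m == 0]

def list_decode(f, n, m, eps):
    """Homomorphisms agreeing with f on more than an eps fraction of Z_n."""
    out = []
    for c in homomorphisms(n, m):
        agreement = sum(f(x) == (c * x) % m for x in range(n)) / n
        if agreement > eps:
            out.append((c, agreement))
    return out

# Toy example: f is the homomorphism x -> 3x mod 6 on Z_12, corrupted at x = 5.
f = lambda x: (3 * x) % 6 if x != 5 else 0
print(list_decode(f, 12, 6, eps=0.6))  # [(3, 0.9166...)] -- agreement 11/12
```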

    Randomness-Efficient Curve Samplers

    Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions. The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TSU06], which obtained curve samplers with near-optimal randomness complexity. We present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor), sampling curves of degree (m log_q(1/δ))^O(1) in F_q^m. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
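    The "low-degree restriction" property quoted above can be checked concretely: restricting a total-degree-D polynomial on F_q^m to a degree-d curve yields a univariate polynomial of degree at most D·d in the curve parameter. A small Python sketch with an illustrative choice of f and curve:

```python
q, m = 101, 2  # illustrative field size and dimension

def f(x, y):   # a polynomial on F_q^2 of total degree D = 3
    return (x * x * y + 5 * x + 7) % q

def curve(t):  # a curve of degree d = 2: t -> (t^2 + 1, 3t)
    return ((t * t + 1) % q, (3 * t) % q)

def interpolate(nodes, t):
    """Evaluate at t the unique polynomial of degree < len(nodes) through
    the (t_i, y_i) pairs in `nodes`, by Lagrange interpolation mod q."""
    total = 0
    for i, (ti, yi) in enumerate(nodes):
        num, den = 1, 1
        for j, (tj, _) in enumerate(nodes):
            if j != i:
                num = num * (t - tj) % q
                den = den * (ti - tj) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

# The restriction g(t) = f(curve(t)) has degree <= D*d = 6, so its values at
# the 7 parameters t = 0..6 determine it everywhere on the curve.
nodes = [(t, f(*curve(t))) for t in range(7)]
assert all(interpolate(nodes, t) == f(*curve(t)) for t in range(q))
```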

    Sublinear-Time Computation in the Presence of Online Erasures

    We initiate the study of sublinear-time algorithms that access their input via an online adversarial erasure oracle. After answering each query to the input object, such an oracle can erase t input values. Our goal is to understand the complexity of basic computational tasks in extremely adversarial situations, where the algorithm's access to data is blocked during the execution of the algorithm in response to its actions. Specifically, we focus on property testing in the model with online erasures. We show that two fundamental properties of functions, linearity and quadraticity, can be tested for constant t with asymptotically the same complexity as in the standard property testing model. For linearity testing, we prove tight bounds in terms of t, showing that the query complexity is Θ(log t). In contrast to linearity and quadraticity, some other properties, including sortedness and the Lipschitz property of sequences, cannot be tested at all, even for t = 1. Our investigation leads to a deeper understanding of the structure of violations of linearity and other widely studied properties. We also consider implications of our results for algorithms that are resilient to online adversarial corruptions instead of erasures.
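    For context, the classical BLR linearity test that such testers build on checks f(x) + f(y) = f(x ⊕ y) at random x, y; the online-erasure model changes how the oracle answers these queries, not the underlying test. A minimal Python sketch of the standard-model test (names illustrative):

```python
import random

def blr_test(f, n, reps=100):
    """Accept every linear f; reject f far from linear with high probability."""
    for _ in range(reps):
        x = random.getrandbits(n)
        y = random.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # found a violated triple (x, y, x XOR y)
    return True

# A linear function: parity of the input bits selected by the mask 0b1011.
linear = lambda x: bin(x & 0b1011).count("1") % 2
# A function far from linear: majority of the three low-order bits.
majority = lambda x: 1 if bin(x & 0b111).count("1") >= 2 else 0

assert blr_test(linear, n=4)
print(blr_test(majority, n=4))  # False with overwhelming probability
```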

    Boolean functions on high-dimensional expanders

    We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only |X(k-1)| = O(n) points in contrast to the (n choose k) points in the k-slice (which consists of all n-bit strings with exactly k ones).
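    A minimal sketch of the up-down random walk underlying the random-walk definition above, on the simplest example: the complete complex, whose k-faces are all k-element subsets of {0, ..., n-1}. This toy only shows the walk's dynamics; high-dimensional expansion concerns the spectra of such walks. Everything here is illustrative.

```python
import random

def up_down_step(face, n):
    """One step of the up-down walk on k-subsets of {0, ..., n-1}: add a
    uniformly random new vertex, then delete a uniformly random one."""
    added = random.choice([v for v in range(n) if v not in face])
    up = face | {added}                  # up: a random (k+1)-face above `face`
    dropped = random.choice(sorted(up))  # down: back to a random k-face below
    return up - {dropped}

face = {0, 1, 2}  # a 2-dimensional face (a 3-subset) on 10 vertices
for _ in range(5):
    face = up_down_step(face, 10)
    print(sorted(face))
```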

    Sub-Constant Error Low Degree Test of Almost Linear Size

    Given a function f : F^m → F over a finite field F, a low degree tester tests its agreement with an m-variate polynomial of total degree at most d over F. The tester is usually given access to an oracle A providing the supposed restrictions of f to affine subspaces of constant dimension (e.g., lines, planes, etc.). The tester makes very few (probabilistic) queries to f and to A (say, one query to f and one query to A), and decides whether to accept or reject based on the replies. We wish to minimize two parameters of a tester: its error and its size. The error bounds the probability that the tester accepts although the function is far from a low degree polynomial. The size is the number of bits required to write the oracle replies for all possible tester queries. Low degree testing is a central ingredient in most constructions of probabilistically checkable proofs (PCPs) and locally testable codes (LTCs). The error of the low degree tester is related to the soundness of the PCP, and its size is related to the size of the PCP (or the length of the LTC). We design and analyze new low degree testers that have both sub-constant error o(1) and almost-linear size n^(1+o(1)) (where n = |F|^m). Previous constructions of sub-constant error testers had polynomial size [3, 16]. These testers enabled the construction of PCPs with sub-constant soundness, but polynomial size [3, 16, 9]. Previous constructions of almost-linear size testers obtained only constant error [13, 7]. These testers were used to construct almost-linear size LTCs and almost-linear size PCPs with constant soundness.
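    A Python sketch of the line-vs-point consistency check described above, instantiated with an honest oracle: f is itself low-degree, and the line oracle answers with the degree-≤d univariate restriction of f to the queried line (represented here by d+1 interpolation nodes). This is a sketch of the query pattern only, not of the paper's tester; names and parameters are illustrative.

```python
import random

q, d = 101, 3  # illustrative field size and degree bound

def f(x, y):   # the tested function: here honestly of total degree 3 <= d
    return (x * x * y + 4 * y + 1) % q

def line(a, b, t):
    """The point a + t*b in F_q^2."""
    return ((a[0] + t * b[0]) % q, (a[1] + t * b[1]) % q)

def line_oracle(a, b):
    """Honest oracle A: the restriction of f to the line through a with
    direction b, given as d+1 interpolation nodes (t, f(a + t*b))."""
    return [(t, f(*line(a, b, t))) for t in range(d + 1)]

def eval_restriction(nodes, t):
    """Evaluate the polynomial determined by `nodes` at t (Lagrange, mod q)."""
    total = 0
    for i, (ti, yi) in enumerate(nodes):
        num, den = 1, 1
        for j, (tj, _) in enumerate(nodes):
            if j != i:
                num = num * (t - tj) % q
                den = den * (ti - tj) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

# One run of the tester: one query to A (a random line) and one query to f
# (a random point on that line); accept iff the two answers are consistent.
a = (random.randrange(q), random.randrange(q))
b = (random.randrange(q), random.randrange(q))
t = random.randrange(q)
assert eval_restriction(line_oracle(a, b), t) == f(*line(a, b, t))
```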

    Local decoding and testing for homomorphisms

    Locally decodable codes (LDCs) have played a central role in many recent results in theoretical computer science. The role of finite fields, and in particular, low-degree polynomials over finite fields, in the construction of these objects is well studied. However, the role of group homomorphisms in the construction of such codes is not as widely studied. Here we initiate a systematic study of local decoding of codes based on group homomorphisms. We give an efficient list decoder for the class of homomorphisms from any abelian group G to a fixed abelian group H. The running time of this algorithm is bounded by a polynomial in log |G| and an agreement parameter, where the degree of the polynomial depends on H. Central to this algorithmic result is a combinatorial result bounding the number of homomorphisms that have large agreement with any function from G to H. Our results give a new generalization of the classical work of Goldreich and Levin, and give new abstractions of the list decoder of Sudan, Trevisan and Vadhan. As a by-product we also derive a simple(r) proof of the local testability (beyond the Blum-Luby-Rubinfeld bounds) of homomorphisms mapping Z_p^n to Z_p, first shown by M. Kiwi.
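    A minimal sketch of the classical local self-corrector for homomorphism codes, the unique-decoding routine that this line of work generalizes to the list-decoding regime (this is not the paper's list decoder). If g is a homomorphism then g(x) = g(x + y) - g(y) for every y, so a majority vote over random y recovers g(x) from a mildly corrupted f. Names and the toy groups are illustrative.

```python
import random
from collections import Counter

def self_correct(f, x, n, m, reps=49):
    """Value at x of the homomorphism Z_n -> Z_m near f, by majority vote:
    each vote queries f at two correlated random points."""
    votes = Counter()
    for _ in range(reps):
        y = random.randrange(n)
        votes[(f((x + y) % n) - f(y)) % m] += 1
    return votes.most_common(1)[0][0]

# g is a homomorphism Z_12 -> Z_6; f agrees with g except at one point.
g = lambda x: (3 * x) % 6
f = lambda x: g(x) if x != 7 else 5

# With high probability every value of g is recovered from queries to f.
assert all(self_correct(f, x, 12, 6) == g(x) for x in range(12))
```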

    Rigid Matrices From Rectangular PCPs

    We introduce a variant of PCPs, which we refer to as rectangular PCPs, wherein proofs are thought of as square matrices, and the random coins used by the verifier can be partitioned into two disjoint sets, one determining the row of each query and the other determining the column. We construct PCPs that are efficient, short, smooth and (almost-)rectangular. As a key application, we show that proofs for hard languages in NTIME(2^n), when viewed as matrices, are rigid infinitely often. This strengthens and simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit rigid matrices in FNP. Namely, we prove the following theorem: there is a constant δ ∈ (0,1) such that there is an FNP-machine that, for infinitely many N, on input 1^N outputs N × N matrices with entries in F_2 that are δN^2-far (in Hamming distance) from matrices of rank at most 2^(log N / Ω(log log N)). Our construction of rectangular PCPs starts with an analysis of how randomness yields queries in the Reed-Muller-based outer PCP of Ben-Sasson, Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how to preserve rectangularity under PCP composition and a smoothness-inducing transformation. This warrants refined and stronger notions of rectangularity, which we prove for the outer PCP and its transforms.
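    A toy Python illustration of the rectangularity property itself (not the paper's construction): the verifier's coins split into two independent halves, and for every query into the square proof matrix the row index depends only on the first half while the column index depends only on the second. The index maps and the acceptance predicate below are placeholders.

```python
import random

N = 16  # the proof is an N x N matrix over F_2 (all values illustrative)
proof = [[random.randrange(2) for _ in range(N)] for _ in range(N)]

def row_index(r_row, i):  # row of the i-th query: depends on r_row only
    return (r_row + 3 * i) % N

def col_index(r_col, i):  # column of the i-th query: depends on r_col only
    return (5 * r_col + i) % N

def verifier(num_queries=3):
    r_row = random.randrange(N)  # the "row" half of the verifier's coins
    r_col = random.randrange(N)  # the "column" half, drawn independently
    bits = [proof[row_index(r_row, i)][col_index(r_col, i)]
            for i in range(num_queries)]
    return sum(bits) % 2 == 0  # a placeholder predicate on the queried bits

print(verifier())
```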

    NEEXP is Contained in MIP*

    We study multiprover interactive proof systems. The power of classical multiprover interactive proof systems, in which the provers do not share entanglement, was characterized in a famous work by Babai, Fortnow, and Lund (Computational Complexity 1991), whose main result was the equality MIP = NEXP. The power of quantum multiprover interactive proof systems, in which the provers are allowed to share entanglement, has proven to be much more difficult to characterize. The best known lower bound on MIP* is NEXP ⊆ MIP*, due to Ito and Vidick (FOCS 2012). As for upper bounds, MIP* could be as large as RE, the class of recursively enumerable languages. The main result of this work is the inclusion NEEXP = NTIME[2^(2^poly(n))] ⊆ MIP*. This is an exponential improvement over the prior lower bound and shows that proof systems with entangled provers are at least exponentially more powerful than classical provers. In our protocol the verifier delegates a classical, exponentially large MIP protocol for NEEXP to two entangled provers: the provers obtain their exponentially large questions by measuring their shared state, and use a classical PCP to certify the correctness of their exponentially long answers. For the soundness of our protocol, it is crucial that each player should not only sample its own question correctly but also avoid performing measurements that would reveal the other player's sampled question. We ensure this by commanding the players to perform a complementary measurement, relying on the Heisenberg uncertainty principle to prevent the forbidden measurements from being performed.