
    Expander Graphs and Coding Theory

    Expander graphs are highly connected sparse graphs which lie at the interface of many different fields of study. For example, they play important roles in prime sieves, cryptography, compressive sensing, metric embedding, and coding theory, to name a few. This thesis focuses on the connections between sparse graphs and coding theory. It is a major challenge to explicitly construct sparse graphs with good expansion properties, for example Ramanujan graphs. Nevertheless, explicit constructions do exist, and in this thesis we survey many of the constructions known to date, including a new construction which slightly improves on an earlier edge expansion bound. The edge expansion of a graph is crucial in applications, and it is well known that computing the edge expansion of an arbitrary graph is NP-hard. We present a simple algorithm for approximating the edge expansion of a graph using linear programming techniques. While Andersen and Lang (2008) proved similar results, our analysis attacks the problem from a different vantage point and was discovered independently. The main contribution of the thesis is a new result in fast decoding for expander codes. Current algorithms in the literature can decode a constant fraction of errors in linear time but require that the underlying graphs have vertex expansion at least 1/2. We present a fast decoding algorithm that can decode a constant fraction of errors in linear time given any vertex expansion (even if it is much smaller than 1/2) by using a stronger local code, and the fraction of errors corrected almost doubles that of Viderman (2013).
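
    For orientation, the quantity the thesis approximates can be stated concretely. The Python sketch below is not from the thesis (which uses a linear-programming approximation); it simply brute-forces the edge expansion h(G) = min over nonempty S with |S| <= |V|/2 of |E(S, V\S)|/|S| for a tiny graph. The function name edge_expansion and the 4-cycle example are illustrative only.

        from itertools import combinations

        def edge_expansion(vertices, edges):
            """Brute-force h(G) = min_{0 < |S| <= |V|/2} |E(S, V \\ S)| / |S|.

            Exponential in |V|, for illustration only; computing h(G) exactly is
            NP-hard in general, which is why an LP-based approximation is used
            in the thesis.
            """
            n = len(vertices)
            best = float("inf")
            for size in range(1, n // 2 + 1):
                for subset in combinations(vertices, size):
                    S = set(subset)
                    boundary = sum(1 for u, v in edges if (u in S) != (v in S))
                    best = min(best, boundary / len(S))
            return best

        # The 4-cycle: the worst cut takes two adjacent vertices (2 boundary edges / 2 vertices).
        print(edge_expansion([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 1.0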

    Samplers and Extractors for Unbounded Functions

    Blasiok (SODA'18) recently introduced the notion of a subgaussian sampler, defined as an averaging sampler for approximating the mean of functions f from {0,1}^m to the real numbers such that f(U_m) has subgaussian tails, and asked for explicit constructions. In this work, we give the first explicit constructions of subgaussian samplers (and in fact averaging samplers for the broader class of subexponential functions) that match the best known constructions of averaging samplers for [0,1]-bounded functions in the regime of parameters where the approximation error epsilon and failure probability delta are subconstant. Our constructions are established via an extension of the standard notion of randomness extractor (Nisan and Zuckerman, JCSS'96) where the error is measured by an arbitrary divergence rather than total variation distance, and a generalization of Zuckerman's equivalence (Random Struct. Alg.'97) between extractors and samplers. We believe that the framework we develop, and specifically the notion of an extractor for the Kullback-Leibler (KL) divergence, are of independent interest. In particular, KL-extractors are stronger than both standard extractors and subgaussian samplers, but we show that they exist with essentially the same parameters (constructively and non-constructively) as standard extractors.
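
    For reference, these are the standard total-variation definitions that the paper generalizes, written in LaTeX; they are the textbook notions, not the paper's divergence-based extension, and the symbols follow the abstract's notation.

        % Standard notions for orientation; the paper replaces total variation
        % by an arbitrary divergence (in particular KL) in the extractor definition.
        \[
          \textsf{Samp} : \{0,1\}^n \to (\{0,1\}^m)^t \ \text{is a $(\delta,\varepsilon)$ averaging sampler if, for all } f:\{0,1\}^m \to [0,1],
        \]
        \[
          \Pr_{(z_1,\dots,z_t) \sim \textsf{Samp}(U_n)}\Bigl[\Bigl|\tfrac{1}{t}\sum_{i=1}^{t} f(z_i) - \mathbb{E}[f(U_m)]\Bigr| > \varepsilon\Bigr] \le \delta;
        \]
        \[
          \textsf{Ext} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m \ \text{is a $(k,\varepsilon)$ extractor if }
          H_\infty(X) \ge k \ \Rightarrow\ \bigl\|\textsf{Ext}(X,U_d) - U_m\bigr\|_{\mathrm{TV}} \le \varepsilon.
        \]

    In one direction of Zuckerman's equivalence, the multiset {Ext(x, y) : y in {0,1}^d} is viewed as the sample produced on input x, so a sufficiently good extractor is automatically an averaging sampler for bounded f; the paper's KL-extractors strengthen this so that the resulting samplers also handle unbounded (subgaussian or subexponential) functions.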

    Lossless Dimension Expanders via Linearized Polynomials and Subspace Designs

    For a vector space F^n over a field F, an (eta,beta)-dimension expander of degree d is a collection of d linear maps Gamma_j : F^n -> F^n such that for every subspace U of F^n of dimension at most eta n, the image of U under all the maps, sum_{j=1}^d Gamma_j(U), has dimension at least beta dim(U). Over a finite field, a random collection of d = O(1) maps Gamma_j offers excellent "lossless" expansion with high probability: beta ~~ d for eta >= Omega(1/d). When it comes to a family of explicit constructions (for growing n), however, achieving even a modest expansion factor beta = 1+epsilon with constant degree is a non-trivial goal. We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list-decoding in the rank-metric. Our approach yields the following:
    - Lossless expansion over large fields; more precisely beta >= (1-epsilon)d and eta >= (1-epsilon)/d with d = O_epsilon(1), when |F| >= Omega(n).
    - Expansion that is optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely beta >= Omega(delta d) and eta >= Omega(1/(delta d)) with d = O_delta(1), when |F| >= n^{delta}.
    Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Omega(1),1+Omega(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with beta >= Omega(sqrt{d}) over large fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
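
    To make the definition concrete, here is a small illustrative experiment, not a construction from the paper, and over GF(2) rather than the large fields the results require: it draws d random linear maps Gamma_j on F_2^n and a low-dimensional subspace U, then compares dim(sum_j Gamma_j(U)) with dim(U). All parameters (n = 24, d = 4) and helper names are made up for the demo.

        import random

        def rank_gf2(vectors):
            """Rank over GF(2) of vectors encoded as integer bitmasks (XOR elimination)."""
            pivots = {}                       # highest set bit -> basis vector with that pivot
            for v in vectors:
                while v:
                    h = v.bit_length() - 1
                    if h not in pivots:
                        pivots[h] = v
                        break
                    v ^= pivots[h]
            return len(pivots)

        def matvec_gf2(rows, v):
            """Apply an n x n matrix over GF(2) (list of row bitmasks) to a vector bitmask."""
            return sum(((bin(row & v).count("1") & 1) << i) for i, row in enumerate(rows))

        random.seed(0)
        n, d = 24, 4
        maps = [[random.getrandbits(n) for _ in range(n)] for _ in range(d)]  # d random maps Gamma_j
        U = [random.getrandbits(n) for _ in range(n // (2 * d))]              # spanning set for U, dim ~ n/(2d)
        images = [matvec_gf2(M, u) for M in maps for u in U]                  # spans sum_j Gamma_j(U)
        print("dim U =", rank_gf2(U), " dim sum_j Gamma_j(U) =", rank_gf2(images),
              " (lossless would be about", d * rank_gf2(U), ")")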

    Affine extractors over large fields with exponential error

    We describe a construction of explicit affine extractors over large finite fields with exponentially small error and linear output length. Our construction relies on a deep theorem of Deligne giving tight estimates for exponential sums over smooth varieties in high dimensions. Comment: To appear in Comput. Complexity.
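
    For orientation, the standard notion being constructed is the following, stated in LaTeX with generic parameters k and epsilon; the paper achieves epsilon exponentially small and output length linear in n, as in the abstract.

        % Affine extractor (standard definition): for every affine subspace X of F_q^n
        % of dimension at least k, the output on a uniform point of X is close to uniform.
        \[
          D : \mathbb{F}_q^{\,n} \to \{0,1\}^m
          \quad\text{such that}\quad
          \bigl\| D(U_X) - U_{\{0,1\}^m} \bigr\|_{\mathrm{TV}} \le \epsilon
          \quad\text{for every affine } X \subseteq \mathbb{F}_q^{\,n},\ \dim X \ge k.
        \]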