
    A short note on the joint entropy of n/2-wise independence

    In this note, we prove a tight lower bound on the joint entropy of n unbiased Bernoulli random variables which are n/2-wise independent. For general k-wise independence, we give new lower bounds by adapting Navon and Samorodnitsky's Fourier proof of the 'LP bound' on error-correcting codes. This constitutes partial progress on a problem posed by Gavinsky and Pudlák. Comment: 6 pages, some errors fixed.
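    To make the setting concrete, the following minimal sketch (not from the note itself) uses the classic parity construction: n unbiased bits that are (n-1)-wise independent yet have joint entropy only n-1, illustrating the kind of entropy deficit the note quantifies for n/2-wise independence. The toy size n = 4 is an arbitrary choice for illustration.

```python
import itertools
from math import log2

n = 4  # small toy size, chosen for illustration

# Distribution: draw n-1 uniform bits, append their XOR as the n-th bit.
support = {}
for bits in itertools.product([0, 1], repeat=n - 1):
    x = bits + (sum(bits) % 2,)
    support[x] = support.get(x, 0) + 1.0 / 2 ** (n - 1)

def joint_entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def is_k_wise_independent(dist, k):
    """Check that every k-subset of coordinates is uniform on {0,1}^k."""
    for subset in itertools.combinations(range(n), k):
        marg = {}
        for x, p in dist.items():
            key = tuple(x[i] for i in subset)
            marg[key] = marg.get(key, 0) + p
        if any(abs(p - 2 ** (-k)) > 1e-12 for p in marg.values()):
            return False
    return True

print("joint entropy:", joint_entropy(support))                          # n - 1 = 3 bits, not n = 4
print("(n-1)-wise independent:", is_k_wise_independent(support, n - 1))  # True
print("n-wise independent:", is_k_wise_independent(support, n))          # False
```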

    An entropy lower bound for non-malleable extractors

    A (k, ε)-non-malleable extractor is a function nmExt : {0,1}^n × {0,1}^d → {0,1} that takes two inputs, a weak source X over {0,1}^n of min-entropy k and an independent uniform seed s ∈ {0,1}^d, and outputs a bit nmExt(X, s) that is ε-close to uniform, even given the seed s and the value nmExt(X, s') for an adversarially chosen seed s' ≠ s. Dodis and Wichs (STOC 2009) showed the existence of (k, ε)-non-malleable extractors with seed length d = log(n − k − 1) + 2 log(1/ε) + 6 that support sources of min-entropy k > log(d) + 2 log(1/ε) + 8. We show that the foregoing bound is essentially tight, by proving that any (k, ε)-non-malleable extractor must satisfy the min-entropy bound k > log(d) + 2 log(1/ε) − log log(1/ε) − C for an absolute constant C. In particular, this implies that non-malleable extractors require min-entropy at least Ω(log log(n)). This is in stark contrast to the existence of strong seeded extractors that support sources of min-entropy k = O(log(1/ε)). Our techniques rely strongly on coding theory. In particular, we reveal an inherent connection between non-malleable extractors and error-correcting codes, by proving a new lemma which shows that any (k, ε)-non-malleable extractor with seed length d induces a code C ⊆ {0,1}^(2^k) with relative distance 1/2 − 2ε and rate (d − 1)/2^k.
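    The numeric sketch below (an illustration, not code from the paper) simply evaluates the two bounds side by side: the Dodis-Wichs existential bound and the necessary min-entropy bound proved here, which match up to the log log(1/ε) term. The constant C and the parameter choices are placeholders; the paper only asserts that some absolute constant works.

```python
from math import log2

C = 1.0  # hypothetical value of the absolute constant, for illustration only

def dodis_wichs_entropy(d, eps):
    """Min-entropy supported by the existential construction: log(d) + 2 log(1/eps) + 8."""
    return log2(d) + 2 * log2(1 / eps) + 8

def lower_bound_entropy(d, eps):
    """Min-entropy any (k, eps)-non-malleable extractor must have:
    log(d) + 2 log(1/eps) - log log(1/eps) - C."""
    return log2(d) + 2 * log2(1 / eps) - log2(log2(1 / eps)) - C

# Example parameter settings (arbitrary, for illustration).
for d, eps in [(64, 1e-3), (128, 1e-6), (1024, 1e-9)]:
    ub = dodis_wichs_entropy(d, eps)
    lb = lower_bound_entropy(d, eps)
    print(f"d={d:5d} eps={eps:.0e}  existential k > {ub:6.2f}   necessary k > {lb:6.2f}")
```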

    Coding-Theoretic Methods for Sparse Recovery

    We review connections between coding-theoretic objects and sparse learning problems. In particular, we show how seemingly different combinatorial objects such as error-correcting codes, combinatorial designs, spherical codes, compressed sensing matrices and group testing designs can be obtained from one another. The reductions enable one to translate upper and lower bounds on the parameters attainable by one object to another. We survey some of the well-known reductions in a unified presentation, and bring some existing gaps to attention. New reductions are also introduced; in particular, we introduce the notion of the minimum "L-wise distance" of codes and show that this notion closely captures the combinatorial structure of RIP-2 matrices. Moreover, we show how this weaker variation of the minimum distance is related to combinatorial list-decoding properties of codes. Comment: Added Lemma 34 in the first revision. Original version in Proceedings of the Allerton Conference on Communication, Control and Computing, September 201
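    The sketch below illustrates one reduction of the kind this survey covers (it is an illustration, not the paper's own construction): a binary code whose pairwise Hamming distances are all close to half the block length yields a ±1 measurement matrix with small column coherence, and small coherence gives a restricted-isometry-type guarantee via the Gershgorin circle theorem. The simplex code used here is a standard choice for such an example, not an object taken from the paper.

```python
import itertools
import numpy as np

k = 5
m = 2 ** k - 1                       # block length of the simplex code

# Generator matrix: columns are all nonzero vectors of F_2^k.
G = np.array([v for v in itertools.product([0, 1], repeat=k) if any(v)]).T  # k x m

# Codewords x*G over F_2, one per message x in F_2^k (2^k columns in the sensing matrix).
messages = np.array(list(itertools.product([0, 1], repeat=k)))              # 2^k x k
codewords = (messages @ G) % 2                                              # 2^k x m

# Map bits to +-1/sqrt(m) so each column has unit Euclidean norm.
A = ((-1.0) ** codewords).T / np.sqrt(m)                                    # m x 2^k

# Coherence = largest |<a_i, a_j>| over distinct columns.
gram = A.T @ A
np.fill_diagonal(gram, 0.0)
mu = np.abs(gram).max()
print(f"coherence mu = {mu:.4f} (equals 1/m = {1 / m:.4f} for the simplex code)")

# Gershgorin-style RIP estimate: delta_s <= (s - 1) * mu for s-sparse signals.
s = 4
print(f"RIP constant bound for sparsity {s}: delta <= {(s - 1) * mu:.4f}")
```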

    Estimating the Sizes of Binary Error-Correcting Constrained Codes

    In this paper, we study binary constrained codes that are resilient to bit-flip errors and erasures. In our first approach, we compute the sizes of constrained subcodes of linear codes. Since there exist well-known linear codes that achieve vanishing probabilities of error over the binary symmetric channel (which causes bit-flip errors) and the binary erasure channel, constrained subcodes of such linear codes are also resilient to random bit-flip errors and erasures. We employ a simple identity from the Fourier analysis of Boolean functions, which transforms the problem of counting constrained codewords of linear codes into a question about the structure of the dual code. We illustrate the utility of our method in providing explicit values or efficient algorithms for our counting problem, by showing that the Fourier transform of the indicator function of the constraint is computable, for different constraints. Our second approach is to obtain good upper bounds, using an extension of Delsarte's linear program (LP), on the largest sizes of constrained codes that can correct a fixed number of combinatorial errors or erasures. We observe that the numerical values of our LP-based upper bounds beat the generalized sphere packing bounds of Fazeli, Vardy, and Yaakobi (2015). Comment: 51 pages, 2 figures, 9 tables, to be submitted to the IEEE Journal on Selected Areas in Information Theory
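    The following minimal sketch illustrates the kind of Fourier identity the abstract refers to (it is an illustration, not the paper's code): for a binary linear code C with dual code C_perp and a constraint with indicator f, the number of constrained codewords satisfies sum_{c in C} f(c) = |C| * sum_{y in C_perp} fhat(y), where fhat(y) = 2^(-n) * sum_x f(x) * (-1)^<x,y>. The [7,4] Hamming code and the "no two adjacent ones" constraint below are example choices, not taken from the paper.

```python
import itertools
import numpy as np

n = 7
# Generator matrix of the [7,4] Hamming code (systematic form) and a parity-check matrix,
# whose rows generate the dual code.
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def span(M):
    """All F_2 linear combinations of the rows of M."""
    rows = [np.array(x) for x in itertools.product([0, 1], repeat=M.shape[0])]
    return {tuple((x @ M) % 2) for x in rows}

code, dual = span(G), span(H)

def f(x):
    """Constraint indicator: 1 if x has no two adjacent ones."""
    return int(all(not (x[i] and x[i + 1]) for i in range(len(x) - 1)))

# Direct count of constrained codewords.
direct = sum(f(c) for c in code)

# Dual-side count via the Fourier coefficients of f evaluated on the dual codewords.
all_x = list(itertools.product([0, 1], repeat=n))
def fhat(y):
    return sum(f(x) * (-1) ** (np.dot(x, y) % 2) for x in all_x) / 2 ** n

dual_side = len(code) * sum(fhat(y) for y in dual)
print("direct count:", direct, " dual-side count:", round(dual_side))  # the two counts agree
```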