
    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and quantum computation. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, quantum mechanics, representation theory, and the theory of error-correcting codes.

    Hardness Amplification Proofs Require Majority


    Derandomization with Minimal Memory Footprint

    Existing proofs that deduce $\mathsf{BPL} = \mathsf{L}$ from circuit lower bounds convert randomized algorithms into deterministic algorithms with large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that $\mathsf{BPSPACE}[S] \subseteq \mathsf{DSPACE}[c \cdot S]$ for $c \approx 2$, assuming space-efficient cryptographic PRGs and either (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get $c \approx 1$. Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness amplification reductions in logspace-uniform $\mathsf{TC}^0$, which were not known before.
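    As a toy illustration of the generic template the abstract starts from (not the paper's construction), the sketch below derandomizes a two-sided-error algorithm by enumerating every seed of a candidate PRG and taking a majority vote. The space cost is the seed length plus the space to evaluate the PRG and the algorithm, which is why space-efficient PRGs control the constant $c$; the names `A`, `G`, and `seed_len` are hypothetical placeholders.

```python
from itertools import product

def derandomize(A, G, seed_len):
    """A(x, r) -> bool: randomized decider; G(seed) -> r: candidate PRG.
    Returns a deterministic decider taking the majority vote over all seeds."""
    def deterministic(x):
        votes = sum(A(x, G(seed)) for seed in product([0, 1], repeat=seed_len))
        return votes > 2 ** (seed_len - 1)   # strict majority of the 2^seed_len runs
    return deterministic
```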

    List Decoding with Double Samplers

    We develop the notion of "double samplers", first introduced by Dinur and Kaufman [Proc. 58th FOCS, 2017], which are samplers with additional combinatorial properties, and whose existence we prove using high dimensional expanders. We show how double samplers give a generic way of amplifying distance in a way that enables efficient list-decoding. There are many error correcting code constructions that achieve large distance by starting with a base code $C$ with moderate distance, and then amplifying the distance using a sampler, e.g., the ABNNR code construction [IEEE Trans. Inform. Theory, 38(2):509--516, 1992]. We show that if the sampler is part of a larger double sampler then the construction has an efficient list-decoding algorithm, and the list-decoding algorithm is oblivious to the base code $C$ (i.e., it runs the unique decoder for $C$ in a black-box way). Our list-decoding algorithm works as follows: it uses a local voting scheme from which it constructs a unique games constraint graph. The constraint graph is an expander, so we can solve unique games efficiently. These solutions are the output of the list decoder. This is a novel use of a unique games algorithm as a subroutine in a decoding procedure, as opposed to the more common situation in which unique games are used for demonstrating hardness results. Double samplers and high dimensional expanders are akin to pseudorandom objects in their utility, but they greatly exceed random objects in their combinatorial properties. We believe that these objects hold significant potential for coding theoretic constructions and view this work as demonstrating the power of double samplers in this context.
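    For intuition, here is a toy sketch of the sampler-based distance amplification pattern (the ABNNR construction cited above) together with the simple local-voting step that the list decoder begins with. A plain bipartite neighbor list stands in for the sampler, and all names are illustrative; this is not the paper's algorithm.

```python
from collections import Counter

def amplify_encode(codeword, neighbors):
    """ABNNR-style encoding: right vertex v outputs the tuple of base-codeword
    symbols sitting on its left neighborhood neighbors[v]."""
    return [tuple(codeword[u] for u in nbrs) for nbrs in neighbors]

def local_vote(received, neighbors, n):
    """First decoding stage: guess each left symbol by plurality vote over all
    incident (possibly corrupted) tuples."""
    votes = [Counter() for _ in range(n)]
    for v, nbrs in enumerate(neighbors):
        for pos, u in enumerate(nbrs):
            votes[u][received[v][pos]] += 1
    return [votes[u].most_common(1)[0][0] for u in range(n)]
```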

    Simple extractors via constructions of cryptographic pseudo-random generators

    Trevisan has shown that constructions of pseudo-random generators from hard functions (the Nisan-Wigderson approach) also produce extractors. We show that constructions of pseudo-random generators from one-way permutations (the Blum-Micali-Yao approach) can be used for building extractors as well. Using this new technique we build extractors that do not use designs or polynomial-based error-correcting codes and that are very simple and efficient. For example, one extractor produces each output bit separately in $O(\log^2 n)$ time. These extractors work for weak sources with min-entropy $\lambda n$, for arbitrary constant $\lambda > 0$, have seed length $O(\log^2 n)$, and their output length is $\approx n^{\lambda/3}$.
    Comment: 21 pages; an extended abstract will appear in Proc. ICALP 2005; small corrections, some comments and references added.
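    The Blum-Micali-Yao pattern the abstract builds on can be sketched in a few lines: iterate a (candidate) one-way permutation and emit a hard-core bit at each step. Here `perm` and `hardcore_bit` are hypothetical placeholders; a classical instantiation uses modular exponentiation with a hard-core predicate.

```python
def blum_micali(perm, hardcore_bit, x, m):
    """Stretch the seed x into m output bits: emit a bit that is hard to
    predict from the current state, then walk one step through the permutation."""
    out = []
    for _ in range(m):
        out.append(hardcore_bit(x))
        x = perm(x)
    return out
```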

    The Quantum PCP Conjecture

    The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns, providing fascinating vistas on the foundations of quantum mechanics, the global nature of entanglement and its topological properties, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status of this ongoing story, tailored to a general theory-of-CS audience.
    Comment: 45 pages, 4 figures; an enhanced version of the SIGACT guest column from Volume 44, Issue 2, June 2013.

    Complexity Theory

    Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness, and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.

    Reed-Muller codes for random erasures and errors

    This paper studies the parameters for which Reed-Muller (RM) codes over $GF(2)$ can correct random erasures and random errors with high probability, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multivariate $GF(2)$ polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about $E(m,r)$, the matrix whose rows are truth tables of all monomials of degree $\leq r$ in $m$ variables. What is the most (resp. least) number of random columns in $E(m,r)$ that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees $r$, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code $C$ of sufficiently high rate we construct a new code $C'$, also of very high rate, such that for every subset $S$ of coordinates, if $C$ can recover from erasures in $S$, then $C'$ can recover from errors in $S$. Specializing this to RM codes and using our results for erasures imply our result on unique decoding of RM codes at high rate. Finally, two of our capacity achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds extending the recent [KLP] bounds from constant degree to linear degree polynomials.
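    The matrix $E(m,r)$ is easy to experiment with directly. The sketch below builds it over $GF(2)$ and computes ranks via bitmask elimination, matching the random-column rank questions posed above; it is a plain illustrative helper, not code from the paper. For the full-column-rank question one would transpose a random column submatrix before calling gf2_rank.

```python
from itertools import combinations, product

def monomial_matrix(m, r):
    """Rows = truth tables of all monomials of degree <= r in m variables."""
    points = list(product([0, 1], repeat=m))      # the 2^m evaluation points
    rows = []
    for deg in range(r + 1):
        for mono in combinations(range(m), deg):  # monomial = product of vars in mono
            rows.append([int(all(p[i] for i in mono)) for p in points])
    return rows

def gf2_rank(rows):
    """Rank over GF(2) via elimination on bitmask-encoded rows."""
    masks = [int("".join(map(str, row)), 2) for row in rows]
    rank = 0
    while masks:
        pivot = masks.pop()
        if pivot == 0:
            continue
        rank += 1
        top = pivot.bit_length() - 1              # clear this leading bit everywhere else
        masks = [x ^ pivot if (x >> top) & 1 else x for x in masks]
    return rank
```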

    Quantum learning algorithms imply circuit lower bounds

    We establish the first general connection between the design of quantum algorithms and circuit lower bounds. Specifically, let $\mathfrak{C}$ be a class of polynomial-size concepts, and suppose that $\mathfrak{C}$ can be PAC-learned with membership queries under the uniform distribution with error $1/2 - \gamma$ by a time-$T$ quantum algorithm. We prove that if $\gamma^2 \cdot T \ll 2^n/n$, then $\mathsf{BQE} \nsubseteq \mathfrak{C}$, where $\mathsf{BQE} = \mathsf{BQTIME}[2^{O(n)}]$ is an exponential-time analogue of $\mathsf{BQP}$. This result is optimal in both $\gamma$ and $T$, since it is not hard to learn any class $\mathfrak{C}$ of functions in (classical) time $T = 2^n$ (with no error), or in quantum time $T = \mathsf{poly}(n)$ with error at most $1/2 - \Omega(2^{-n/2})$ via Fourier sampling. In other words, even a marginal improvement on these generic learning algorithms would lead to major consequences in complexity theory. Our proof builds on several works in learning theory, pseudorandomness, and computational complexity, and crucially, on a connection between non-trivial classical learning algorithms and circuit lower bounds established by Oliveira and Santhanam (CCC 2017). Extending their approach to quantum learning algorithms turns out to create significant challenges. To achieve that, we show among other results how pseudorandom generators imply learning-to-lower-bound connections in a generic fashion, construct the first conditional pseudorandom generator secure against uniform quantum computations, and extend the local list-decoding algorithm of Impagliazzo, Jaiswal, Kabanets and Wigderson (SICOMP 2010) to quantum circuits via a delicate analysis. We believe that these contributions are of independent interest and might find other applications.
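    To make the optimality benchmark concrete, here is the trivial $T = 2^n$ classical learner mentioned above: with membership queries, any $n$-bit concept can be learned exactly by querying every input. The abstract's point is that even a marginal improvement over this (or over the Fourier-sampling learner) for a class $\mathfrak{C}$ would already yield $\mathsf{BQE}$ lower bounds. A minimal sketch:

```python
from itertools import product

def exact_learn(membership_query, n):
    """Learn the unknown n-bit concept with zero error in time ~2^n by
    recording its entire truth table."""
    return {x: membership_query(x) for x in product([0, 1], repeat=n)}
```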