
    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources that are required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is by establishing a connection (perhaps approximate) to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about these latest techniques.

    Optimal Deterministic Extractors for Generalized Santha-Vazirani Sources

    Let F be a finite alphabet and D be a finite set of distributions over F. A Generalized Santha-Vazirani (GSV) source of type (F, D), introduced by Beigi, Etesami and Gohari (ICALP 2015, SICOMP 2017), is a random sequence (F_1, ..., F_n) in F^n, where each F_i is a sample from some distribution d in D whose choice may depend on F_1, ..., F_{i-1}. We show that all GSV source types (F, D) fall into one of three categories: (1) non-extractable; (2) extractable with error n^{-Theta(1)}; (3) extractable with error 2^{-Omega(n)}. We provide essentially randomness-optimal extraction algorithms for extractable sources. Our algorithm for category (2) sources extracts one bit with error epsilon from n = poly(1/epsilon) samples in time linear in n. Our algorithm for category (3) sources extracts m bits with error epsilon from n = O(m + log(1/epsilon)) samples in time min{O(m * 2^m * n), n^{O(|F|)}}. We also give algorithms for classifying a GSV source type (F, D): membership in category (1) can be decided in NP, while membership in category (3) is polynomial-time decidable.
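
    To make the source model concrete, the following is a minimal Python sketch of a GSV source of type (F, D) as defined above. The specific alphabet, the two distributions in D, and the prefix-dependent choice rule are illustrative assumptions, not taken from the paper; the GSV definition allows any (possibly adversarial) choice rule that depends on the prefix.

    import random

    # Illustrative GSV source of type (F, D): each symbol F_i is drawn from some
    # distribution d in D, where the choice of d may depend on the prefix F_1..F_{i-1}.
    F = [0, 1]                      # finite alphabet (assumed binary here)
    D = [
        {0: 0.5, 1: 0.5},           # an unbiased distribution over F
        {0: 0.7, 1: 0.3},           # a biased distribution over F
    ]

    def choose_distribution(prefix):
        """Prefix-dependent choice of a distribution in D.

        This particular rule is a hypothetical example; the GSV definition
        permits an arbitrary function of the prefix."""
        return D[len(prefix) % len(D)]

    def sample_gsv(n, rng=random):
        """Sample a length-n sequence (F_1, ..., F_n) from the GSV source."""
        seq = []
        for _ in range(n):
            d = choose_distribution(seq)
            symbols, weights = zip(*d.items())
            seq.append(rng.choices(symbols, weights=weights, k=1)[0])
        return seq

    if __name__ == "__main__":
        print(sample_gsv(10))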