
    Complexity Theory

    [No abstract available]

    Algebra in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection to a more well-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. The algebraic theme continues in some of the most exciting recent progress in computational complexity. There have been significant recent advances in algebraic circuit lower bounds, and the so-called "chasm at depth 4" suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model, and these are tied to central questions regarding the power of randomness in computation. Representation theory has emerged as an important tool in three separate lines of work: the "Geometric Complexity Theory" approach to P vs. NP and circuit lower bounds, the effort to resolve the complexity of matrix multiplication, and a framework for constructing locally testable codes. Coding theory has seen several algebraic innovations in recent years, including multiplicity codes and new lower bounds. This seminar brought together researchers who are using a diverse array of algebraic methods in a variety of settings, and it played an important role in educating a diverse community about the latest techniques, spurring further progress.

    Extracting All the Randomness and Reducing the Error in Trevisan's Extractors

    We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log^2 n) additional random bits, and can extract all the min-entropy using O(log^3 n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log^3 n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) - (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our extractors are obtained by observing that a weaker notion of "combinatorial design" suffices for the Nisan-Wigderson pseudorandom generator, which underlies the recent extractor of Trevisan. We give near-optimal constructions of such "weak designs" which achieve much better parameters than possible with the notion of designs used by Nisan-Wigderson and Trevisan. We also show how to improve our constructions (and Trevisan's construction) when the required statistical difference ε from the uniform distribution is relatively small. This improvement is obtained by using multilinear error-correcting codes over finite fields, rather than the arbitrary error-correcting codes used by Trevisan.
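    In standard notation, with k the source min-entropy, d the seed length, and m the output length, the abstract's bracketed definition and the bound it reports can be set in display form (a restatement of what is stated above, not new material):

```latex
% Entropy loss of an extractor Ext: {0,1}^n x {0,1}^d -> {0,1}^m
% applied to a source of min-entropy k:
\[
  \mathrm{loss} \;=\; (k + d) - m .
\]
% The improved construction achieves, with seed length d = O(\log^3 n),
\[
  \mathrm{loss} \;=\; 2\log(1/\varepsilon) + O(1),
\]
% which is optimal up to the constant additive term.
```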

    Simple extractors via constructions of cryptographic pseudo-random generators

    Trevisan has shown that constructions of pseudo-random generators from hard functions (the Nisan-Wigderson approach) also produce extractors. We show that constructions of pseudo-random generators from one-way permutations (the Blum-Micali-Yao approach) can be used for building extractors as well. Using this new technique we build extractors that do not use designs and polynomial-based error-correcting codes and that are very simple and efficient. For example, one extractor produces each output bit separately in O(log^2 n) time. These extractors work for weak sources with min-entropy λn, for arbitrary constant λ > 0, have seed length O(log^2 n), and their output length is ≈ n^{λ/3}. Comment: 21 pages, an extended abstract will appear in Proc. ICALP 2005; small corrections, some comments and references added.

    Polynomial-Time Pseudodeterministic Construction of Primes

    A randomized algorithm for a search problem is *pseudodeterministic* if it produces a fixed canonical solution to the search problem with high probability. In their seminal work on the topic, Gat and Goldwasser posed as their main open problem whether prime numbers can be pseudodeterministically constructed in polynomial time. We provide a positive solution to this question in the infinitely-often regime. In more detail, we give an *unconditional* polynomial-time randomized algorithm B such that, for infinitely many values of n, B(1^n) outputs a canonical n-bit prime p_n with high probability. More generally, we prove that for every dense property Q of strings that can be decided in polynomial time, there is an infinitely-often pseudodeterministic polynomial-time construction of strings satisfying Q. This improves upon a subexponential-time construction of Oliveira and Santhanam. Our construction uses several new ideas, including a novel bootstrapping technique for pseudodeterministic constructions, and a quantitative optimization of the uniform hardness-randomness framework of Chen and Tell, using a variant of the Shaltiel-Umans generator.
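    To make the *definition* concrete (a toy illustration of the pseudodeterministic property, not the paper's construction; all names are illustrative), the sketch below returns the smallest n-bit prime, confining randomness to the primality test, so independent runs agree on the output with high probability. Note that scanning for the least n-bit prime is not known to run in polynomial time unconditionally, which is exactly the kind of obstacle the paper works around.

```python
import random

def is_probable_prime(x: int, trials: int = 40) -> bool:
    """Miller-Rabin primality test; error probability at most 4**(-trials)."""
    if x < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if x % p == 0:
            return x == p
    d, s = x - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(trials):
        a = random.randrange(2, x - 1)
        y = pow(a, d, x)
        if y in (1, x - 1):
            continue
        for _ in range(s - 1):
            y = pow(y, 2, x)
            if y == x - 1:
                break
        else:
            return False
    return True

def toy_pseudodeterministic_prime(n: int) -> int:
    """Return the smallest n-bit prime (a canonical answer).

    The only randomness is inside Miller-Rabin, so two independent runs
    return the same prime unless the primality test errs -- that
    agreement across runs is the pseudodeterministic property.
    """
    x = 1 << (n - 1)  # smallest n-bit integer
    while not is_probable_prime(x):
        x += 1
    return x
```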

    Algebraic methods in randomness and pseudorandomness

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 183-188). Algebra and randomness come together rather nicely in computation. A central example of this relationship in action is the Schwartz-Zippel lemma and its application to the fast randomized checking of polynomial identities. In this thesis, we further this relationship in two ways: (1) by compiling new algebraic techniques that are of potential computational interest, and (2) by demonstrating the relevance of these techniques by making progress on several questions in randomness and pseudorandomness. The technical ingredients we introduce include: (1) multiplicity-enhanced versions of the Schwartz-Zippel lemma and the "polynomial method", extending their applicability to "higher-degree" polynomials; (2) conditions for polynomials to have an unusually small number of roots; and (3) conditions for polynomials to have an unusually structured set of roots, e.g., containing a large linear space. Our applications include: (1) explicit constructions of randomness extractors with logarithmic seed and vanishing "entropy loss"; (2) limit laws for first-order logic augmented with the parity quantifier on random graphs (extending the classical 0-1 law); and (3) explicit dispersers for affine sources of imperfect randomness with sublinear entropy. By Swastik Kopparty.
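    The Schwartz-Zippel application mentioned above is easy to make concrete. A minimal sketch (function and variable names are illustrative, not from the thesis): to test whether two arithmetic expressions compute the same polynomial, evaluate both at random points of a large prime field; for unequal polynomials, the lemma bounds the per-trial failure probability by degree/field size.

```python
import random

P = (1 << 61) - 1  # a large Mersenne prime; all arithmetic is in GF(P)

def schwartz_zippel_equal(f, g, num_vars, degree, trials=20):
    """Randomized identity test: do f and g agree as polynomials over GF(P)?

    f, g: callables mapping a list of field elements to a field element
          (e.g., arithmetic circuits evaluated with +, *, and % P).
    degree: an upper bound on the total degree of f - g.

    If f != g, a single random point exposes the difference with
    probability at least 1 - degree/P (Schwartz-Zippel), so the error
    probability after `trials` independent points is (degree/P)**trials.
    """
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(num_vars)]
        if f(point) % P != g(point) % P:
            return False  # the polynomials are definitely different
    return True  # equal with high probability

# Example: (x + y)^2 and x^2 + 2xy + y^2 define the same degree-2 polynomial.
lhs = lambda v: pow(v[0] + v[1], 2, P)
rhs = lambda v: (v[0] * v[0] + 2 * v[0] * v[1] + v[1] * v[1]) % P
assert schwartz_zippel_equal(lhs, rhs, num_vars=2, degree=2)
```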

    On a New, Efficient Framework for Falsifiable Non-interactive Zero-Knowledge Arguments

    A zero-knowledge proof is a protocol between a prover and a verifier. The prover aims to convince the verifier of the truth of some statement, such as possessing credentials for a valid credit card, without revealing any private information, such as the credentials themselves. In many applications, it is desirable to use NIZKs (non-interactive zero-knowledge proofs), where the prover outputs only a single message that can be verified by many verifiers. As a drawback, secure NIZKs for non-trivial languages can only exist in the presence of a trusted third party that computes a common reference string and makes it available to both the prover and the verifier. When no such party exists, one sometimes relies on non-interactive witness indistinguishability (NIWI), a weaker notion of privacy. The study of efficient and secure NIZKs is a crucial part of cryptography that has been thriving recently due to blockchain applications. In the first paper, we construct a new NIZK for the language of common zeros of a finite set of polynomials over a finite field. We demonstrate its usefulness by giving a large number of example applications. Notably, it is possible to go from a high-level language description to the definition of the NIZK almost automatically, lessening the need for dedicated cryptographic expertise. In the second paper, we construct a NIWI using a new compiler, and we explore the notion of knowledge soundness (a security notion stronger than soundness) for some NIZK constructions. In the third paper, we extend the first paper's work by constructing a new set (non-)membership NIZK that allows us to prove that an element belongs or does not belong to a given set. Several of the new constructions are more efficient than previously known ones.

    Universal semantic communication

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 325-334). Is meaningful communication possible between two intelligent parties who share no common language or background? We propose that this problem can be rigorously addressed by explicitly focusing on the goals of the communication. We propose a theoretical framework in which we can address when and to what extent such semantic communication is possible. Our starting point is a mathematical definition of a generic goal for communication, pursued by agents of bounded computational complexity. We then model a "lack of common language or background" by considering a class of potential partners for communication; in general, this formalism is rich enough to handle varying degrees of common language and backgrounds, but the complete lack of knowledge is modeled by simply considering the class of all partners with which some agent of similar power could achieve our goal. In this formalism, we find that for many goals (but not all), communication without any common language or background is possible. We call the strategies for achieving goals without relying on such background universal protocols. The main intermediate notions introduced by our theory are formal notions of feedback that we call sensing. We show that sensing captures the essence of whether or not reliable universal protocols can be constructed in many natural settings of interest: we find that across settings, sensing is almost always sufficient, usually necessary, and generally a useful design principle for the construction of universal protocols. We support this last point by developing a number of examples of protocols for specific goals. Notably, we show that universal delegation of computation from a space-efficient client to a general-purpose server is possible, and we show how a variant of TCP can allow end-users on a packet network to automatically adapt to small changes in the packet format (e.g., changes in IP).
    The latter example alludes to our main motivation for considering such problems, which is to develop techniques for modeling and constructing computer systems that do not require that their components strictly adhere to protocols: said differently, we hope to be able to design components that function properly with a sufficiently wide range of other components to permit a rich space of "backwards-compatible" designs for those components. We expect that in the long run, this paradigm will lead to simpler systems because "backwards compatibility" is no longer such a severe constraint, and we expect it to lead to more robust systems, partially because the components should be simpler, and partially because such components are inherently robust to deviations from any fixed protocol. Unfortunately, we find that the techniques for communication under the complete absence of any common background suffer from overhead that is too severe for such practical purposes, so we consider two natural approaches for introducing some assumed common background between components while retaining some nontrivial amount of flexibility.
    The first approach supposes that the designer of a component has some "belief" about what protocols would be "natural" to use to interact with other components; we show that, given sensing and some sufficient "agreement" between the beliefs of the designers of two components, the components can be made universal with some relatively modest overhead. The second approach supposes that the protocols are taken from some restricted class of functions, and we see that for certain classes of functions and simple goals, efficient universal protocols can again be constructed from sensing. Actually, we show more: the special case of our model described in the second approach corresponds precisely to the well-known model of mistake-bounded on-line learning first studied by Barzdins and Freivalds, and later considered in more depth by Littlestone. This connection provides a reasonably complete picture of the conditions under which we can apply the second approach. Furthermore, it also seems that the first approach is closely related to the problem of designing good user interfaces in Human-Computer Interaction. We conclude by briefly sketching the connection, and suggest that further development of this connection may be a potentially fruitful direction for future work. By Brendan Juba.
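    The mistake-bounded model invoked here has a canonical illustration: over a finite hypothesis class, the halving algorithm makes at most log2 of the class size many mistakes, since each mistake eliminates at least half of the surviving hypotheses. A minimal sketch of that standard background (names are illustrative, not code from the thesis):

```python
def halving_algorithm(hypotheses, stream):
    """Mistake-bounded online learning over a finite class.

    hypotheses: callables x -> 0/1, at least one of which labels the
                stream perfectly (the realizable setting).
    stream:     iterable of (x, true_label) pairs, revealed one by one.

    Predicts by majority vote of the hypotheses still consistent with
    the past; every mistake discards at least half of them, so at most
    log2(len(hypotheses)) mistakes occur in total.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, label in stream:
        votes = [h(x) for h in version_space]
        prediction = int(2 * sum(votes) >= len(votes))  # majority vote
        if prediction != label:
            mistakes += 1
        # keep only the hypotheses consistent with the revealed label
        version_space = [h for h, v in zip(version_space, votes) if v == label]
    return mistakes
```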

    Algorithms and Lower Bounds in Circuit Complexity

    Computational complexity theory aims to understand what problems can be efficiently solved by computation. This thesis studies computational complexity in the model of Boolean circuits. Boolean circuits provide a basic mathematical model for computation and play a central role in complexity theory, with important applications in separations of complexity classes, algorithm design, and pseudorandom constructions. In this thesis, we investigate various types of circuit models such as threshold circuits, Boolean formulas, and their extensions, focusing on obtaining complexity-theoretic lower bounds and algorithmic upper bounds for these circuits. (1) Algorithms and lower bounds for generalized threshold circuits: We extend the study of linear threshold circuits, circuits with gates computing linear threshold functions, to the more powerful model of polynomial threshold circuits, where the gates can compute polynomial threshold functions. We obtain hardness and meta-algorithmic results for this circuit model, including strong average-case lower bounds, satisfiability algorithms, and derandomization algorithms for constant-depth polynomial threshold circuits with super-linear wire complexity. (2) Algorithms and lower bounds for enhanced formulas: We investigate the model of Boolean formulas whose leaf gates can compute complex functions. In particular, we study De Morgan formulas whose leaf gates are functions with "low communication complexity". Such gates can capture a broad class of functions including symmetric functions and polynomial threshold functions. We obtain new and improved results in terms of lower bounds and meta-algorithms (satisfiability, derandomization, and learning) for such enhanced formulas. (3) Circuit lower bounds for MCSP: We study circuit lower bounds for the Minimum Circuit Size Problem (MCSP), the fundamental problem of deciding whether a given function (in the form of a truth table) can be computed by small circuits. We obtain new and improved lower bounds for MCSP that nearly match the best-known lower bounds against several well-studied circuit models such as Boolean formulas and constant-depth circuits.
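    For readers unfamiliar with the gate types compared in (1): a linear threshold gate outputs 1 exactly when a weighted sum of its inputs clears a threshold, while a polynomial threshold gate replaces the linear form with a low-degree polynomial. A minimal sketch of both (illustrative definitions, not code from the thesis):

```python
def linear_threshold_gate(weights, theta):
    """Gate computing [w . x >= theta] on Boolean inputs x."""
    def gate(x):
        return int(sum(w * xi for w, xi in zip(weights, x)) >= theta)
    return gate

def degree2_threshold_gate(linear, quadratic, theta):
    """Degree-2 polynomial threshold gate computing [p(x) >= theta], where
    p(x) = sum_i linear[i]*x_i + sum_{i<j} quadratic[(i,j)]*x_i*x_j.
    """
    def gate(x):
        value = sum(c * x[i] for i, c in enumerate(linear))
        value += sum(c * x[i] * x[j] for (i, j), c in quadratic.items())
        return int(value >= theta)
    return gate

# MAJORITY on 3 bits is a linear threshold function:
maj3 = linear_threshold_gate([1, 1, 1], 2)
assert maj3([1, 1, 0]) == 1 and maj3([1, 0, 0]) == 0

# XOR on 2 bits is not, but it is a degree-2 threshold function:
# x + y - 2xy >= 1 holds exactly when x != y.
xor2 = degree2_threshold_gate([1, 1], {(0, 1): -2}, 1)
assert xor2([1, 0]) == 1 and xor2([1, 1]) == 0
```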