
    On optimal language compression for sets in PSPACE/poly

    We show that if DTIME[2^O(n)] is not included in DSPACE[2^o(n)], then for every set B in PSPACE/poly, every string x in B of length n can be represented by a string compressed(x) of length at most log(|B^{=n}|) + O(log n), such that a polynomial-time algorithm, given compressed(x), can distinguish x from all other strings in B^{=n}. Modulo the O(log n) additive term, this achieves the information-theoretic optimum for string compression. We also observe that optimal compression is not possible for sets more complex than PSPACE/poly: for any time-constructible superpolynomial function t, there is a set A computable in space t(n) such that at least one string x of length n requires compressed(x) to have length 2 log(|A^{=n}|). (Comment: submitted to Theory of Computing Systems.)
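    For intuition, the information-theoretic benchmark mentioned above is achieved trivially if compression need not be efficient: describe x by its rank inside B^{=n}, which takes ceil(log2 |B^{=n}|) bits. Below is a minimal Python sketch of this indexing compressor (a toy illustration, not the paper's construction; decompression here re-enumerates the whole set, so it is nothing like polynomial time):

```python
import math

def compress(x, B_n):
    """Index-based compression: represent x by its rank in sorted(B_n).

    Uses ceil(log2 |B_n|) bits, the information-theoretic optimum,
    but offers no efficiency guarantee: decompression must re-enumerate B_n.
    """
    members = sorted(B_n)
    width = max(1, math.ceil(math.log2(len(members))))
    return format(members.index(x), f"0{width}b")

def decompress(bits, B_n):
    """Recover x from its rank; the inverse of compress."""
    return sorted(B_n)[int(bits, 2)]

# Toy example: a set of 4 strings of length 3 needs only 2 bits per member.
B3 = {"000", "011", "101", "110"}
for x in sorted(B3):
    c = compress(x, B3)
    assert decompress(c, B3) == x
    print(x, "->", c)
```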

    Arithmetic Circuits and the Hadamard Product of Polynomials

    Motivated by the Hadamard product of matrices, we define the Hadamard product of multivariate polynomials and study its arithmetic circuit and branching program complexity. We also give applications and connections to polynomial identity testing. Our main results are the following. 1. We show that noncommutative polynomial identity testing for algebraic branching programs over the rationals is complete for the logspace counting class C=L, and over fields of characteristic p the problem is in Mod_pL/poly. 2. We show an exponential lower bound for expressing the Raz-Yehudayoff polynomial as the Hadamard product of two monotone multilinear polynomials. In contrast, the Permanent can be expressed as the Hadamard product of two monotone multilinear formulas of quadratic size. (Comment: 20 pages.)
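    Concretely, the Hadamard product f∘g multiplies coefficients monomial-by-monomial, in analogy with the entrywise product of matrices. A minimal Python sketch, under an assumed sparse {exponent-vector: coefficient} representation (my illustration, not code from the paper):

```python
def hadamard(f, g):
    """Hadamard product of two polynomials given as {monomial: coefficient} dicts.

    A monomial is a tuple of exponents, e.g. (2, 0, 1) stands for x1^2 * x3.
    The coefficient of each monomial in f o g is the product of its
    coefficients in f and in g; monomials missing from either side vanish.
    """
    return {m: cf * g[m] for m, cf in f.items() if m in g}

# (2*x*y + 3*y^2) o (5*x*y + 7*x^2) = 10*x*y : only x*y appears in both.
f = {(1, 1): 2, (0, 2): 3}
g = {(1, 1): 5, (2, 0): 7}
print(hadamard(f, g))   # {(1, 1): 10}
```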

    Catalytic space: non-determinism and hierarchy

    Catalytic computation, defined by Buhrman, Cleve, Koucký, Loff and Speelman (STOC 2014), is a space-bounded computation in which, in addition to our working memory, we have an exponentially larger auxiliary memory that is full; the auxiliary memory may be used throughout the computation, but it must be restored to its initial content by the end. Motivated by the surprising power of this model, we set out to study the non-deterministic version of catalytic computation. We establish that non-deterministic catalytic log-space is contained in ZPP, which is the same bound known for its deterministic counterpart, and we prove that non-deterministic catalytic space is closed under complement (under a standard derandomization assumption). Furthermore, we establish hierarchy theorems for non-deterministic and deterministic catalytic computation.
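    The core discipline here (borrow memory that is already full, then restore it) can be illustrated with reversible register updates of the kind used in the catalytic-computation literature. A toy Python sketch, my illustration rather than the paper's construction: a borrowed register r with unknown initial content helps deposit x*y into an output register and is provably restored, because every step is an invertible += / -= update.

```python
import random

def add_product_catalytically(x, y, r, out):
    """Add x*y into `out` using a borrowed register r of unknown content.

    Every step is a reversible update (+= / -=), so r is restored to its
    initial value by construction: the catalytic-memory discipline.
    """
    r += x          # r   = r0 + x
    out += r * y    # out = out0 + (r0 + x)*y
    r -= x          # r   = r0             (borrowed register restored)
    out -= r * y    # out = out0 + x*y     (the r0*y term cancels)
    return r, out

# The registers start with arbitrary "full" contents.
r0, out0 = random.randint(-10**6, 10**6), random.randint(-10**6, 10**6)
r, out = add_product_catalytically(6, 7, r0, out0)
assert r == r0 and out - out0 == 42
print("restored:", r == r0, "| out gained:", out - out0)
```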

    A remark on pseudo proof systems and hard instances of the satisfiability problem

    We link two concepts from the literature: hard sequences for the satisfiability problem SAT, and the so-called pseudo proof systems proposed for study by Krajíček. Pseudo proof systems are elements of a particular nonstandard model constructed by forcing with random variables. We show that the existence of mad pseudo proof systems is equivalent to the existence of a randomized polynomial-time procedure, with a highly restrictive use of randomness, that produces satisfiable formulas whose satisfying assignments are probably hard to find.

    The Power of Quantum Fourier Sampling

    A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and Shepherd, shows that quantum computers can efficiently sample from probability distributions that cannot be exactly sampled efficiently on a classical computer unless the polynomial hierarchy (PH) collapses. Aaronson and Arkhipov take this further by considering a distribution that can be sampled efficiently by linear optical quantum computation and that, under two plausible conjectures, cannot even be approximately sampled classically within bounded total variation distance unless PH collapses. In this work we use Quantum Fourier Sampling to construct a class of distributions that can be sampled by a quantum computer. We then argue that these distributions cannot be approximately sampled classically, unless PH collapses, under variants of the Aaronson-Arkhipov conjectures. In particular, we exhibit a general class of quantumly sampleable distributions, each based on an "Efficiently Specifiable" polynomial, for which a classical approximate sampler would imply an average-case approximation of that polynomial. This class contains the Permanent but also includes, for example, the Hamiltonian Cycle polynomial and many other familiar #P-hard polynomials. Although our construction, unlike that of Aaronson and Arkhipov, likely requires a universal quantum computer, we are able to use this additional power to weaken the conjectures needed to prove our approximate sampling hardness results.
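    For scale, the Permanent, the prototypical #P-hard polynomial in this class, has no known polynomial-time classical evaluation; the classic exact method is Ryser's inclusion-exclusion formula. The following standard Python sketch (textbook material, not from the paper) implements it in O(2^n · poly(n)) time, which is exactly the exponential classical cost the sampling-hardness argument leans on:

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's inclusion-exclusion formula.

    perm(A) = (-1)^n * sum over nonempty S of (-1)^|S| * prod_i sum_{j in S} A[i][j],
    which runs in O(2^n * n^2) time for an n x n matrix.
    """
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Permanent of the all-ones 3x3 matrix is 3! = 6.
print(permanent([[1, 1, 1]] * 3))  # 6
```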

    Robust Simulations and Significant Separations

    We define and study a new notion of "robust simulation" between complexity classes, which is intermediate between the traditional notions of infinitely-often and almost-everywhere simulation, as well as a corresponding notion of "significant separation". A language L has a robust simulation in a complexity class C if there is a language in C which agrees with L on arbitrarily large polynomial stretches of input lengths. There is a significant separation of L from C if there is no robust simulation of L in C. The new notion of simulation is cleaner and more natural than the infinitely-often notion. We show that various implications in complexity theory, such as the collapse of PH if NP = P and the Karp-Lipton theorem, have analogues for robust simulations. We then use these results to prove that most known separations in complexity theory, such as hierarchy theorems, fixed-polynomial circuit lower bounds, time-space tradeoffs, and the theorems of Allender and Williams, can be strengthened to significant separations, though in each case an almost-everywhere separation remains unknown. Proving our results requires several new ideas, including a completely different proof of the hierarchy theorem for non-deterministic polynomial time than those previously known.
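    The quantifier structure of "agrees on arbitrarily large polynomial stretches" can be made precise. The following LaTeX rendering is my paraphrase of the definition sketched above (the notation rob-C is assumed here for illustration):

```latex
% L is robustly simulated by the class C: one fixed L' in C agrees with L
% on every input of length between n and n^k, for every exponent k and
% infinitely many starting lengths n.
\[
  L \in \mathrm{rob\text{-}}\mathcal{C}
  \iff
  \exists L' \in \mathcal{C}\;
  \forall k \ge 1\;
  \exists^{\infty} n\;
  \forall x\,\bigl( n \le |x| \le n^{k} \Rightarrow
      (x \in L \leftrightarrow x \in L') \bigr).
\]
```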

    PCD

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. By Alessandro Chiesa. Includes bibliographical references (p. 87-95).
    The security of systems can often be expressed as ensuring that some property is maintained at every step of a distributed computation conducted by untrusted parties. Special cases include integrity of programs running on untrusted platforms, various forms of confidentiality and side-channel resilience, and domain-specific invariants. We propose a new approach, proof-carrying data (PCD), which sidesteps the threat of faults and leakage by reasoning about properties of a computation's output data, regardless of the process that produced it. In PCD, the system designer prescribes the desired properties of a computation's outputs. Corresponding proofs are attached to every message flowing through the system, and are mutually verified by the system's components. Each such proof attests that the message's data and all of its history comply with the prescribed properties. We construct a general protocol compiler that generates, propagates, and verifies such proofs of compliance, while preserving the dynamics and efficiency of the original computation. Our main technical tool is the cryptographic construction of short non-interactive arguments (computationally-sound proofs) for statements whose truth depends on "hearsay evidence": previous arguments about other statements. To this end, we attain a particularly strong proof-of-knowledge property. We realize the above, under standard cryptographic assumptions, in a model where the prover has black-box access to some simple functionality (essentially, a signature card).
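    To make the message flow concrete: each message carries a proof that its data and its entire history satisfy the designer's compliance predicate, and every component verifies incoming proofs before producing new ones. A toy Python sketch of this propagation, my illustration only: the "proof" here is just the explicit history, recursively re-checked, with none of the succinctness or cryptographic soundness of the thesis's non-interactive arguments.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class Message:
    data: int
    proof: Tuple  # toy "proof": the full history (real PCD: a short argument)

def prove(compliance: Callable, data: int, inputs: Tuple[Message, ...]) -> Message:
    """Attach a proof that `data` and all of its history are compliant."""
    assert all(verify(compliance, m) for m in inputs), "non-compliant input"
    assert compliance(data, tuple(m.data for m in inputs)), "non-compliant output"
    return Message(data, tuple(inputs))

def verify(compliance: Callable, msg: Message) -> bool:
    """Toy verifier: recursively re-checks the whole history.

    A real PCD verifier instead checks one short cryptographic argument,
    independent of the computation's length.
    """
    return all(verify(compliance, m) for m in msg.proof) and \
           compliance(msg.data, tuple(m.data for m in msg.proof))

# Compliance predicate: every node outputs the sum of its inputs
# (source nodes, with no inputs, may emit any value).
compliant = lambda out, ins: out == sum(ins) if ins else True
a = prove(compliant, 3, ())
b = prove(compliant, 4, ())
c = prove(compliant, 7, (a, b))   # 7 == 3 + 4, so the proof goes through
print(verify(compliant, c))       # True
```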