Derandomizing Arthur-Merlin Games using Hitting Sets
We prove that AM (and hence Graph Nonisomorphism) is in NP if, for some epsilon > 0, some language in NE intersection coNE requires nondeterministic circuits of size 2^(epsilon n). This improves recent results of Arvind and Köbler and of Klivans and van Melkebeek, who proved the same conclusion under stronger hardness assumptions, namely, either the existence of a language in NE intersection coNE which cannot be approximated by nondeterministic circuits of size less than 2^(epsilon n), or the existence of a language in NE intersection coNE which requires oracle circuits of size 2^(epsilon n) with oracle gates for SAT (satisfiability). The previous results on derandomizing AM were based on pseudorandom generators. In contrast, our approach is based on a strengthening of Andreev, Clementi, and Rolim's hitting set approach to derandomization. As a spin-off, we show that this approach is strong enough to give an easy proof (if the existence of explicit dispersers can be assumed known) of the following implication: for some epsilon > 0, if there is a language in E which requires nondeterministic circuits of size 2^(epsilon n), then P = BPP. This differs from Impagliazzo and Wigderson's theorem "only" by replacing deterministic circuits with nondeterministic ones.
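The hitting-set route to derandomization that the abstract builds on can be illustrated with a toy sketch (not the paper's construction). For a one-sided-error randomized test, it suffices to try every string in a hitting set: if the test accepts at least half of all random strings, the hitting set must contain an accepting one. The predicate and the set below are hypothetical; the hard part in practice is constructing an explicit, small hitting set, which is exactly what the hardness assumption buys.

```python
# Toy illustration of derandomizing a one-sided-error test with a hitting set.
from itertools import product

def accepts_half_or_none(pred, n):
    """True if pred accepts >= half of {0,1}^n, or accepts nothing (the
    one-sided-error promise under which the hitting-set method is sound)."""
    count = sum(pred(r) for r in product((0, 1), repeat=n))
    return count == 0 or 2 * count >= 2 ** n

# Hypothetical dense predicate: accepts when the first random bit is 1.
pred = lambda r: r[0] == 1
assert accepts_half_or_none(pred, 3)

# A small set that happens to hit every dense predicate in our toy setting;
# real derandomization needs an explicit such set for all small circuits.
H = [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1)]

def derandomized(pred, H):
    # Sound under the promise: if pred accepts nothing, no r in H helps;
    # if pred accepts >= half of all strings, some r in H is accepted.
    return any(pred(r) for r in H)

print(derandomized(pred, H))  # True: the dense predicate is hit
```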
On optimal language compression for sets in PSPACE/poly
We show that if DTIME[2^O(n)] is not included in DSPACE[2^o(n)], then, for
every set B in PSPACE/poly, all strings x in B of length n can be represented
by a string compressed(x) of length at most log(|B^{=n}|)+O(log n), such that a
polynomial-time algorithm, given compressed(x), can distinguish x from all the
other strings in B^{=n}. Modulo the O(log n) additive term, this achieves the
information-theoretic optimum for string compression. We also observe that
optimal compression is not possible for sets more complex than PSPACE/poly
because for any time-constructible superpolynomial function t, there is a set A
computable in space t(n) such that at least one string x of length n requires
compressed(x) to be of length 2 log(|A^{=n}|).
Comment: submitted to Theory of Computing Systems
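The information-theoretic optimum the abstract refers to can be seen in a toy sketch: ignoring efficiency, any x in a finite set B of strings of length n can be represented by its index in an enumeration of B, using about log2(|B^{=n}|) bits. The paper's contribution is achieving nearly this length with a polynomial-time distinguisher, under a hardness assumption; the sketch below ignores that efficiency requirement entirely.

```python
# Toy index-based compression achieving ~log2(|B^{=n}|) bits (no efficiency).
import math

def compress(x, B_n):
    """Represent x by its rank in the sorted enumeration of B_n."""
    members = sorted(B_n)
    idx = members.index(x)
    width = max(1, math.ceil(math.log2(len(members))))
    return format(idx, f'0{width}b')   # ~ log2(|B^{=n}|) bits

def decompress(code, B_n):
    return sorted(B_n)[int(code, 2)]

B3 = {'001', '010', '100', '111'}      # a toy B^{=3} with 4 elements
c = compress('100', B3)
assert len(c) == 2                     # log2(4) = 2 bits
assert decompress(c, B3) == '100'
```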
On Transformations of Interactive Proofs that Preserve the Prover's Complexity
Goldwasser and Sipser [GS89] proved that every interactive proof system can be transformed into a public-coin one (a.k.a., an Arthur-Merlin game). Their transformation has the drawback that the computational complexity of the prover's strategy is not preserved. We show that this is inherent, by proving that the same must be true of any transformation which only uses the original prover and verifier strategies as "black boxes". Our negative result holds even if the original proof system is restricted to be honest-verifier perfect zero knowledge and the transformation can also use the simulator as a black box.
We also examine a similar deficiency in a transformation of Fürer et al. [FGM+89] from interactive proofs to ones with perfect completeness. We argue that the increase in prover complexity incurred by their transformation is necessary, given that their construction is a black-box transformation which works regardless of the verifier's computational complexity.
The Power of Natural Properties as Oracles
We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or to its special case, the Minimum Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^{MCSP} is not contained in P/poly, which should be contrasted with the previously known circuit lower bound that ZPEXP^{NP} is not contained in P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.
Arithmetic Circuits and the Hadamard Product of Polynomials
Motivated by the Hadamard product of matrices we define the Hadamard product
of multivariate polynomials and study its arithmetic circuit and branching
program complexity. We also give applications and connections to polynomial
identity testing. Our main results are the following. 1. We show that
noncommutative polynomial identity testing for algebraic branching programs
over rationals is complete for the logspace counting class C=L, and over
fields of characteristic p the problem is in Mod_pL/poly. 2. We show an
exponential lower bound for expressing the Raz-Yehudayoff polynomial as the
Hadamard product of two monotone multilinear polynomials. In contrast the
Permanent can be expressed as the Hadamard product of two monotone multilinear
formulas of quadratic size.
Comment: 20 pages
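The Hadamard product of polynomials defined in the abstract is the monomial-wise analogue of the entrywise product of matrices: the coefficient of each monomial in f ∘ g is the product of its coefficients in f and g (zero if the monomial is absent from either). A minimal sketch in a sparse dictionary representation:

```python
# Hadamard product of two multivariate polynomials in sparse form:
# each polynomial maps a monomial (tuple of variable exponents) to its
# coefficient; matching monomials have their coefficients multiplied.

def hadamard(f, g):
    return {m: cf * g[m] for m, cf in f.items()
            if m in g and cf * g[m] != 0}

# f = 2xy + 3x^2 and g = 5xy + 7y^2 over variables (x, y):
f = {(1, 1): 2, (2, 0): 3}
g = {(1, 1): 5, (0, 2): 7}
print(hadamard(f, g))  # {(1, 1): 10}, i.e. f o g = 10xy
```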
The Power of Quantum Fourier Sampling
A line of work initiated by Terhal and DiVincenzo, and by Bremner, Jozsa, and
Shepherd, shows that quantum computers can efficiently sample from probability
distributions that cannot be exactly sampled efficiently on a classical
computer, unless the PH collapses. Aaronson and Arkhipov take this further by
considering a distribution that can be sampled efficiently by linear optical
quantum computation which, under two plausible conjectures, cannot even be
approximately sampled classically within bounded total variation distance,
unless the PH collapses.
In this work we use Quantum Fourier Sampling to construct a class of
distributions that can be sampled by a quantum computer. We then argue that
these distributions cannot be approximately sampled classically, unless the PH
collapses, under variants of the Aaronson and Arkhipov conjectures.
In particular, we show a general class of quantumly sampleable distributions
each of which is based on an "Efficiently Specifiable" polynomial, for which a
classical approximate sampler implies an average-case approximation. This class
of polynomials contains the Permanent but also includes, for example, the
Hamiltonian Cycle polynomial, and many other familiar #P-hard polynomials.
Although our construction, unlike that proposed by Aaronson and Arkhipov,
likely requires a universal quantum computer, we are able to use this
additional power to weaken the conjectures needed to prove approximate sampling
hardness results.
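The Permanent, the canonical #P-hard polynomial in the class mentioned above, can be evaluated by Ryser's inclusion-exclusion formula in O(2^n · n^2) time. This is a standard illustration of a #P-hard polynomial, not part of the paper's sampling construction: perm(A) = (-1)^n · sum over nonempty S ⊆ [n] of (-1)^{|S|} · prod_i sum_{j in S} a[i][j].

```python
# Permanent of a square matrix via Ryser's formula (exponential time).
from itertools import combinations

def permanent(A):
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1
            for row in A:          # product over rows of the column-sums on S
                prod *= sum(row[j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```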
Randomness and intractability in Kolmogorov complexity
We introduce randomized time-bounded Kolmogorov complexity (rKt), a natural extension of Levin's notion [Leonid A. Levin, 1984] of Kolmogorov complexity. A string w of low rKt complexity can be decompressed from a short representation via a time-bounded algorithm that outputs w with high probability. This complexity measure gives rise to a decision problem over strings: MrKtP (the Minimum rKt Problem). We explore ideas from pseudorandomness to prove that MrKtP and its variants cannot be solved in randomized quasi-polynomial time. This exhibits a natural string compression problem that is provably intractable, even for randomized computations. Our techniques also imply that there is no n^{1 - epsilon}-approximate algorithm for MrKtP running in randomized quasi-polynomial time. Complementing this lower bound, we observe connections between rKt, the power of randomness in computing, and circuit complexity. In particular, we present the first hardness magnification theorem for a natural problem that is unconditionally hard against a strong model of computation.
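For concreteness, the measure described in the abstract can be sketched as follows (a paraphrase, with U a fixed universal randomized machine; the exact constants and conventions are assumptions here):

```latex
\mathrm{rKt}(w) \;=\; \min\Big\{\, |p| + \lceil \log t \rceil \;:\;
  \Pr\big[\,U(p)\ \text{outputs}\ w\ \text{within}\ t\ \text{steps}\,\big] \ge 2/3 \,\Big\}
```

Charging log t rather than t mirrors Levin's Kt, so a short program running in moderately long time can still witness low complexity.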