
    ZPP is Hard Unless RP is Small

    We use Lutz's resource-bounded measure theory to prove that either RP is small or ZPP is hard. More precisely, we prove that if RP does not have p-measure zero, then EXP equals ZPP on infinitely many input lengths, i.e. there are infinitely many input lengths on which ZPP is hard. Second, we prove that if NP does not have p-measure zero, then derandomization of AM is possible on infinitely many input lengths, i.e. there are infinitely many input lengths on which NP = AM. Finally, we prove easiness versus randomness tradeoffs for classes in the polynomial-time hierarchy. We show that it appears to every strong adversary that either every Σ_i^P algorithm can be simulated infinitely often by a subexponential co-nondeterministic time algorithm with oracle access to Σ_{i-2}^P, or BP·Σ_i^P = Σ_i^P.

    Inapproximability of Combinatorial Optimization Problems

    We survey results on the hardness of approximating combinatorial optimization problems.

    Baire categories on small complexity classes and meager–comeager laws

    We introduce two resource-bounded Baire category notions on small complexity classes such as P, QUASIPOLY, SUBEXP and PSPACE and on probabilistic classes such as BPP, which differ in how the corresponding finite extension strategies are computed. We give an alternative characterization of small sets via resource-bounded Banach-Mazur games. As an application of the first notion, we show that for almost every language A (i.e. all except a meager class) computable in subexponential time, P^A = BPP^A. We also show that almost all languages in PSPACE do not have small nonuniform complexity. We then switch to the second Baire category notion (called locally-computable), and show that the class SPARSE is meager in P. We show that in contrast to the resource-bounded measure case, meager–comeager laws can be obtained for many standard complexity classes, relative to locally-computable Baire category on BPP and PSPACE. Another topic where locally-computable Baire categories differ from resource-bounded measure is weak-completeness: we show that there is no weak-completeness notion in P based on locally-computable Baire categories, i.e. every P-weakly-complete set is complete for P. We also prove that the class of complete sets for P under Turing-logspace reductions is meager in P, if P is not equal to DSPACE(log n), and that the same holds unconditionally for QUASIPOLY. Finally, we observe that locally-computable Baire categories are incomparable with all existing resource-bounded measure notions on small complexity classes, which might explain why those two settings seem to differ so fundamentally.

    The Power of Natural Properties as Oracles

    We study the power of randomized complexity classes that are given oracle access to a natural property of Razborov and Rudich (JCSS, 1997) or its special case, the Minimum Circuit Size Problem (MCSP). We show that in a number of complexity-theoretic results that use the SAT oracle, one can use the MCSP oracle instead. For example, we show that ZPEXP^MCSP ⊈ P/poly, which should be contrasted with the previously known circuit lower bound ZPEXP^NP ⊈ P/poly. We also show that, assuming the existence of Indistinguishability Obfuscators (IO), SAT and MCSP are equivalent in the sense that one has a ZPP algorithm if and only if the other one does. We interpret our results as providing some evidence that MCSP may be NP-hard under randomized polynomial-time reductions.

    Unexpected Power of Random Strings


    Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs

    The study of graph products is a major research topic and typically concerns the term f(G*H), e.g., to show that f(G*H) = f(G)f(H). In this paper, we study graph products in a non-standard form f(R[G*H]), where R is a "reduction", a transformation of any graph into an instance of an intended optimization problem. We resolve some open problems as applications. (1) A tight n^{1-ϵ}-approximation hardness for the minimum consistent deterministic finite automaton (DFA) problem, where n is the sample size. Due to Board and Pitt [Theoretical Computer Science 1992], this implies the hardness of properly learning DFAs assuming NP ≠ RP (the weakest possible assumption). (2) A tight n^{1/2-ϵ} hardness for the edge-disjoint paths (EDP) problem on directed acyclic graphs (DAGs), where n denotes the number of vertices. (3) A tight hardness of packing vertex-disjoint k-cycles for large k. (4) An alternative (and perhaps simpler) proof for the hardness of properly learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004 and J. Comput. Syst. Sci. 2008].
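
    As a small, self-contained illustration (not taken from the paper) of the multiplicativity pattern f(G*H) = f(G)f(H) mentioned above, the Python sketch below takes f to be the independence number and * the lexicographic product, under which the independence number is classically multiplicative. The graph choices and helper names are purely illustrative; the paper itself studies the non-standard form f(R[G*H]), which this sketch does not attempt to reproduce.

        from itertools import combinations

        def independence_number(vertices, edges):
            """Brute-force maximum independent set size (fine for tiny graphs)."""
            edge_set = {frozenset(e) for e in edges}
            for size in range(len(vertices), -1, -1):
                for subset in combinations(vertices, size):
                    if all(frozenset((u, v)) not in edge_set
                           for u, v in combinations(subset, 2)):
                        return size
            return 0

        def lexicographic_product(g_vertices, g_edges, h_vertices, h_edges):
            """(u1, v1) ~ (u2, v2) iff u1 ~ u2 in G, or u1 = u2 and v1 ~ v2 in H."""
            g_edge_set = {frozenset(e) for e in g_edges}
            h_edge_set = {frozenset(e) for e in h_edges}
            vertices = [(u, v) for u in g_vertices for v in h_vertices]
            edges = [(p, q) for p, q in combinations(vertices, 2)
                     if frozenset((p[0], q[0])) in g_edge_set
                     or (p[0] == q[0] and frozenset((p[1], q[1])) in h_edge_set)]
            return vertices, edges

        # G = path on 3 vertices, H = 5-cycle.
        G = ([0, 1, 2], [(0, 1), (1, 2)])
        H = ([0, 1, 2, 3, 4], [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
        prod = lexicographic_product(*G, *H)
        print(independence_number(*G), independence_number(*H),
              independence_number(*prod))  # 2 2 4, i.e. f(G*H) = f(G)f(H)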

    Going Meta on the Minimum Circuit Size Problem: How Hard Is It to Show How Hard Showing Hardness Is?

    The Minimum Circuit Size Problem (MCSP) is a problem with a long history in computational complexity theory which has recently experienced a resurgence in attention. MCSP takes as input the description of a Boolean function f as a truth table as well as a size parameter s, and outputs whether there is a circuit that computes f of size ≤ s. It is of great interest whether MCSP is NP-complete, but there have been shown to be many technical obstacles to proving that it is. Most of these results come in the following form: If MCSP is NP-complete under a certain type of reduction, then we get a breakthrough in complexity theory that seems well beyond current techniques. These results indicate that it is unlikely we will be able to show MCSP is NP-complete under these kinds of reductions anytime soon. I seek to add to this line of work, in particular focusing on an approximation version of MCSP which is central to some of its connections to other areas of complexity theory, as well as some other variants of the problem. Let f indicate an n-ary Boolean function, which thus has a truth table of size 2^n. I have used the approach of Saks and Santhanam (2020) to prove that if, on input f, approximating MCSP within a factor superpolynomial in n is NP-complete under general polynomial-time Turing reductions, then E ⊈ P/poly (a dramatic circuit lower bound). This provides a barrier to Hirahara (2018)'s suggested program of using the NP-completeness of a 2^{(1-ε)n}-approximation version of MCSP to show that if NP is hard in the worst case (P ≠ NP), it is also hard on average (i.e., to rule out Heuristica). However, using randomized reductions to do so remains potentially tractable. I also extend the results of Saks and Santhanam (2020) to what I define as Σ_k-MCSP and Q-MCSP, getting stronger circuit lower bounds, namely E ⊈ Σ_kP/poly and E ⊈ PH/poly, just from their NP-hardness. Since Σ_k-MCSP and Q-MCSP seem to be harder problems than MCSP, at first glance one might think it would be easier to show that Σ_k-MCSP or Q-MCSP is NP-hard, but my results demonstrate that the opposite is true.
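
    To make the definition above concrete, here is a minimal brute-force Python sketch (not from the thesis) that decides MCSP for tiny truth tables by searching exhaustively over circuits built from fan-in-two AND, OR and NOT gates. The gate basis, the gate-counting convention and all names (mcsp, search, ...) are assumptions chosen for illustration, and the search is of course exponential.

        def mcsp(truth_table, s):
            """Return True iff some circuit with at most s fan-in-2 AND/OR/NOT
            gates computes the Boolean function whose truth table is given
            (a sequence of 2**n bits, indexed by the integer encoding of the
            input assignment)."""
            n = (len(truth_table) - 1).bit_length()  # number of input variables
            assert len(truth_table) == 2 ** n
            # Represent each wire by its value on every one of the 2**n assignments.
            inputs = [tuple((a >> i) & 1 for a in range(2 ** n)) for i in range(n)]
            target = tuple(truth_table)

            def search(wires, gates_left):
                if target in wires:
                    return True
                if gates_left == 0:
                    return False
                # Candidate truth tables computable with one more gate.
                candidates = set()
                for x in wires:
                    candidates.add(tuple(1 - a for a in x))                 # NOT
                    for y in wires:
                        candidates.add(tuple(a & b for a, b in zip(x, y)))  # AND
                        candidates.add(tuple(a | b for a, b in zip(x, y)))  # OR
                return any(new not in wires and search(wires + [new], gates_left - 1)
                           for new in candidates)

            return search(inputs, s)

        # Example: the 2-variable XOR truth table.
        xor_tt = (0, 1, 1, 0)
        print(mcsp(xor_tt, 1))  # False: one gate is not enough
        print(mcsp(xor_tt, 4))  # True: e.g. (x OR y) AND NOT (x AND y)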