    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to “boost” a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. In particular, this shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the “boosting” theorem and hashing techniques to construct these primitives under an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the one previously known to suffice.
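
    The hashing techniques mentioned here go back to the classical use of pairwise-independent hash functions for approximate counting. Below is a minimal Python sketch of that classical idea only, not the paper's construction: it estimates |S| for an explicitly enumerated S ⊆ {0,1}^n with random parity constraints, whereas in the derandomization setting the emptiness test would be an NP oracle query.

        import random

        def random_parity(n):
            # A random parity constraint: parity(x AND a) == b, with a, b uniform.
            return random.getrandbits(n), random.getrandbits(1)

        def holds(x, a, b):
            return bin(x & a).count("1") % 2 == b

        def approx_count(S, n, trials=25):
            # m random parity constraints carve out a "cell" of expected size
            # |S| / 2^m; the first m at which the cell is usually empty gives
            # an estimate 2^m that is within a constant factor of |S|.
            for m in range(n + 1):
                empty = 0
                for _ in range(trials):
                    cons = [random_parity(n) for _ in range(m)]
                    if not any(all(holds(x, a, b) for a, b in cons) for x in S):
                        empty += 1
                if empty > trials // 2:
                    return 2 ** m
            return 2 ** n

        # Example: a random 500-element subset of {0,1}^12; the estimate is
        # typically within a small constant factor of 500.
        S = set(random.sample(range(2 ** 12), 500))
        print(approx_count(S, 12))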

    Logspace self-reducibility

    A definition of self-reducibility is proposed to deal with logarithmic-space complexity classes. A general property derived from the definition is used to prove known results comparing uniform and nonuniform complexity classes below polynomial time, and to obtain novel ones regarding nondeterministic nonuniform classes and reducibility to context-free languages.
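
    For background, the paper adapts the standard notion of self-reducibility, usually illustrated with SAT, to logarithmic space. The following Python sketch shows only the standard notion, not the logspace-compatible definition proposed in the paper; `substitute` is an assumed helper that fixes a variable in a formula.

        def find_assignment(phi, variables, sat_oracle):
            # Classical (downward) self-reducibility of SAT: phi is satisfiable
            # iff phi with some variable fixed to 0 or to 1 is satisfiable, so
            # a decision oracle for SAT yields a full satisfying assignment.
            assignment = {}
            for v in variables:
                for bit in (False, True):
                    candidate = substitute(phi, v, bit)  # assumed helper: fix v := bit
                    if sat_oracle(candidate):
                        assignment[v] = bit
                        phi = candidate
                        break
                else:
                    return None  # phi is unsatisfiable
            return assignment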

    Power of Counting by Nonuniform Families of Polynomial-Size Finite Automata

    Lately, there have been intensive studies on the strengths and limitations of nonuniform families of promise decision problems solvable by various types of polynomial-size finite automata, where "polynomial-size" refers to the polynomially bounded state complexity of a family of finite automata. We further expand the scope of these studies to families of partial counting and gap functions, defined in terms of nonuniform families of polynomial-size nondeterministic finite automata, and to their relevant families of promise decision problems. Counting functions have the ability to count the number of accepting computation paths produced by nondeterministic finite automata. With no unproven hardness assumptions, we show numerous separations and collapses of complexity classes of those partial counting and gap function families and their induced promise decision problem families. We also investigate their relationships to pushdown automata families of polynomial stack-state complexity.
    Comment: (A4, 10pt, 21 pages) This paper corrects and extends a preliminary report published in the Proceedings of the 24th International Symposium on Fundamentals of Computation Theory (FCT 2023), Trier, Germany, September 18-24, 2023, Lecture Notes in Computer Science, vol. 14292, pp. 421-435, Springer Cham, 2023.
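
    The path-counting quantity these function families compute can be made concrete with a short Python sketch; this is a toy illustration of counting accepting computation paths of a single NFA, and does not model the paper's nonuniform families of polynomial-size automata or promise problems.

        def count_accepting_paths(delta, start, accepting, word):
            # Dynamic programming over the NFA's computation tree:
            # paths[q] = number of computation paths reaching state q
            # after reading the portion of the input consumed so far.
            paths = {start: 1}
            for symbol in word:
                nxt = {}
                for state, cnt in paths.items():
                    for succ in delta.get((state, symbol), ()):
                        nxt[succ] = nxt.get(succ, 0) + cnt
                paths = nxt
            return sum(cnt for q, cnt in paths.items() if q in accepting)

        # Example: two branches rejoin, giving 2 accepting paths on "aa".
        delta = {("s", "a"): ["p", "q"], ("p", "a"): ["f"], ("q", "a"): ["f"]}
        print(count_accepting_paths(delta, "s", {"f"}, "aa"))  # prints 2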

    On the Succinctness of Query Rewriting over OWL 2 QL Ontologies with Shallow Chases

    We investigate the size of first-order rewritings of conjunctive queries over OWL 2 QL ontologies of depth 1 and 2 by means of hypergraph programs computing Boolean functions. Both positive and negative results are obtained. Conjunctive queries over ontologies of depth 1 have polynomial-size nonrecursive datalog rewritings; tree-shaped queries have polynomial positive existential rewritings; however, in the worst case, positive existential rewritings can only be of superpolynomial size. Positive existential and nonrecursive datalog rewritings of queries over ontologies of depth 2 suffer an exponential blowup in the worst case, while first-order rewritings are superpolynomial unless NP ⊆ P/poly. We also analyse rewritings of tree-shaped queries over arbitrary ontologies and observe that the query entailment problem for such queries is fixed-parameter tractable.
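
    As a standard textbook-style illustration of what such rewritings do (not an example taken from the paper): for the depth-1 axiom $A \sqsubseteq \exists R$ and the conjunctive query $q(x) = \exists y\, R(x, y)$, the perfect first-order rewriting is
        $$q'(x) \;=\; \exists y\, R(x, y) \;\vee\; A(x),$$
    so evaluating $q'$ directly over the data yields the certain answers to $q$ over the ontology plus the data; the succinctness questions above ask how large such rewritings must grow with the query and the ontology.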

    Self-Specifying Machines

    We study the computational power of machines that specify their own acceptance types, and show that they accept exactly the languages that ≤_m^#-reduce to NP sets. A natural variant accepts exactly the languages that ≤_m^#-reduce to P sets. We show that these two classes coincide if and only if P^{#P[1]} = P^{#P[1]:NP[O(1)]}, where the latter class denotes the sets acceptable via at most one question to #P followed by at most a constant number of questions to NP.
    Comment: 15 pages, to appear in IJFCS.

    New Lower Bounds and Derandomization for ACC, and a Derandomization-Centric View on the Algorithmic Method

    In this paper, we obtain several new results on lower bounds and derandomization for ACC^0 circuits (constant-depth circuits consisting of AND/OR/MOD_m gates for a fixed constant m, a frontier class in circuit complexity):
    1) We prove that any polynomial-time Merlin-Arthur proof system with an ACC^0 verifier (denoted by MA_{ACC^0}) can be simulated by a nondeterministic proof system with quasi-polynomial running time and polynomial proof length, on infinitely many input lengths. This improves the previous simulation by [Chen, Lyu, and Williams, FOCS 2020], which requires both quasi-polynomial running time and proof length.
    2) We show that MA_{ACC^0} cannot be computed by fixed-polynomial-size ACC^0 circuits, and our hard languages are hard on a sufficiently dense set of input lengths.
    3) We show that NEXP (nondeterministic exponential-time) does not have ACC^0 circuits of sub-half-exponential size, improving the previous sub-third-exponential size lower bound for NEXP against ACC^0 by [Williams, J. ACM 2014].
    Combining our first and second results gives a conceptually simpler and derandomization-centric proof of the recent breakthrough result NQP := NTIME[2^polylog(n)] ⊄ ACC^0 by [Murray and Williams, SICOMP 2020]: instead of going through an easy witness lemma as they did, we first prove an ACC^0 lower bound for a subclass of MA, and then derandomize that subclass into NQP, while retaining its hardness against ACC^0. Moreover, since our derandomization of MA_{ACC^0} achieves a polynomial proof length, we indeed prove that nondeterministic quasi-polynomial-time with n^{Θ(1)} nondeterminism bits (denoted as NTIMEGUESS[2^polylog(n), n^{Θ(1)}]) has no poly(n)-size ACC^0 circuits, giving a new proof of a result by Vyas. Combining with a win-win argument based on randomized encodings from [Chen and Ren, STOC 2020], we also prove that NTIMEGUESS[2^polylog(n), n^{Θ(1)}] cannot be 1/2+1/poly(n)-approximated by poly(n)-size ACC^0 circuits, improving the recent strongly average-case lower bounds for NQP against ACC^0 by [Chen and Ren, STOC 2020].
    One interesting technical ingredient behind our second result is the construction of a PSPACE-complete language that is paddable, downward self-reducible, same-length checkable, and weakly error correctable. Moreover, all its reducibility properties have corresponding AC^0[2] non-adaptive oracle circuits. Our construction builds and improves upon similar constructions from [Trevisan and Vadhan, Complexity 2007] and [Chen, FOCS 2019], which all require at least TC^0 oracle circuits for implementing these properties.
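
    For concreteness, here is a toy Python sketch of the gate semantics behind the class, under one common convention for MOD_m gates; ACC^0 itself is an asymptotic notion (families of constant-depth, polynomial-size circuits), which a single small circuit does not capture.

        def mod_gate(m, bits):
            # One common convention: MOD_m outputs 1 iff the number of 1-inputs
            # is divisible by m (the class ACC^0 is insensitive to this choice,
            # since such circuits can negate freely).
            return int(sum(bits) % m == 0)

        def and_gate(bits): return int(all(bits))
        def or_gate(bits):  return int(any(bits))

        # A toy depth-2 circuit over AND/OR/MOD_3 gates on 4 input bits.
        def tiny_circuit(x):
            return or_gate([and_gate(x[:2]), mod_gate(3, x)])

        print(tiny_circuit([1, 1, 0, 0]))  # 1: the AND of the first two bits fires
        print(tiny_circuit([1, 0, 1, 1]))  # 1: three 1s, divisible by 3
        print(tiny_circuit([1, 0, 0, 0]))  # 0: neither gate fires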