10 research outputs found

    Master index volumes 51–60

    Separating Cook Completeness from Karp-Levin Completeness Under a Worst-Case Hardness Hypothesis

    We show that there is a language that is Turing complete for NP but not many-one complete for NP, under a worst-case hardness hypothesis. Our hypothesis asserts the existence of a non-deterministic, double-exponential-time machine that runs in time O(2^{2^{n^c}}) (for some c > 1) and accepts Σ^*, whose accepting computations cannot be computed by bounded-error probabilistic machines running in time O(2^{2^{β·2^{n^c}}}) (for some β > 0). This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.
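
    For orientation, "Cook completeness" and "Karp-Levin completeness" are the completeness notions induced by the two standard polynomial-time reducibilities, Turing and many-one; a compact reminder of the textbook definitions (notation mine, not quoted from the paper):

    \[
    A \le^p_m B \iff \exists f \in \mathrm{FP}\ \forall x\, [\, x \in A \Leftrightarrow f(x) \in B \,],
    \qquad
    A \le^p_T B \iff A \in \mathrm{P}^{B}.
    \]

    A set is Karp-Levin (many-one) complete for NP if it is in NP and every NP language ≤^p_m-reduces to it; it is Cook (Turing) complete if every NP language ≤^p_T-reduces to it. The paper exhibits, under the stated hypothesis, a set that is complete in the second sense but not in the first.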

    The Quantitative Structure of Exponential Time

    Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E = DTIME(2^linear) and E_2 = DTIME(2^polynomial) are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time many-one reducibility, circuit-size complexity, Kolmogorov complexity, and the density of hard languages. Possible implications for the structure of NP are also discussed.
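
    For concreteness, the shorthands 2^linear and 2^polynomial unpack to the usual definitions (standard notation, not quoted from the survey):

    \[
    \mathrm{E} = \bigcup_{c \ge 1} \mathrm{DTIME}\!\left(2^{cn}\right),
    \qquad
    \mathrm{E}_2 = \mathrm{EXP} = \bigcup_{c \ge 1} \mathrm{DTIME}\!\left(2^{n^c}\right).
    \]

    Resource-bounded (Lebesgue) measure on these classes is what makes the structural statements quantitative: a subclass is negligible when it has measure 0 in E (respectively in E_2).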

    Splittings, robustness, and structure of complete sets

    Randomness in completeness and space-bounded computations

    The study of computational complexity investigates the role of various computational resources such as processing time, memory requirements, nondeterminism, randomness, nonuniformity, etc., in solving different types of computational problems. In this dissertation, we study the role of randomness in two fundamental areas of computational complexity: NP-completeness and space-bounded computations. The concept of completeness plays an important role in defining the notion of 'hard' problems in computer science. Intuitively, an NP-complete problem captures the difficulty of solving any problem in NP. Polynomial-time reductions are at the heart of defining completeness. However, there is no single notion of reduction; researchers have identified various polynomial-time reductions such as many-one reductions, truth-table reductions, Turing reductions, etc. Each such notion of reduction induces a notion of completeness. Finding the relationships among these NP-completeness notions is a significant open problem. Our first result is a separation of two such polynomial-time completeness notions for NP, namely Turing completeness and many-one completeness. This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis. Our next result involves a conjecture by Even, Selman, and Yacobi [ESY84, SY82], which states that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. If true, this conjecture implies that a certain kind of probabilistic public-key cryptosystem is not secure. The conjecture has been open for 30 years. We provide evidence in support of a variant of this conjecture: we show that if certain secure one-way functions exist, then the ESY conjecture for bounded-truth-table reductions holds. Next we turn our attention to space-bounded computations. We investigate probabilistic space-bounded machines that are allowed to access their random bits multiple times. Our main conceptual contribution here is to establish an interesting connection between the derandomization of such probabilistic space-bounded machines and the derandomization of probabilistic time-bounded machines. In particular, we show that if multipass machines that make even a small number of passes over their random tape and use only O(log^2 n) random bits can be derandomized to deterministic polynomial time, then BPTIME(n) ⊆ DTIME(2^{o(n)}). Note that if we restrict the number of random bits to O(log n), then the machine can trivially be derandomized to polynomial time. Furthermore, it can be shown that if we restrict the number of passes to O(1), we can still derandomize the machine to polynomial time. Thus our result implies that any extension beyond these trivialities would lead to an unknown derandomization of BPTIME(n). Our final contribution concerns the derandomization of probabilistic time-bounded machines under branching-program lower bounds. The standard method of derandomizing time-bounded probabilistic machines depends on various circuit lower bounds, which are notoriously hard to prove. We show that the derandomization of low-degree polynomial identity testing, a well-known problem in co-RP, can be obtained under certain branching-program lower bounds. Note that branching programs are considered a weaker model of computation than Boolean circuits.
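
    The "trivial" derandomization in the O(log n) random-bit regime mentioned above is plain seed enumeration: with c·log n random bits there are at most n^c random strings, so a deterministic simulator can run the machine on every seed and take a majority vote. Below is a minimal sketch of that counting argument in Python, where randomized_decider is a hypothetical stand-in for one deterministic simulation of the probabilistic machine on a fixed random string:

        from itertools import product

        def derandomize_by_seed_enumeration(randomized_decider, x, num_random_bits):
            # Enumerate every possible random string. With num_random_bits = O(log n)
            # there are only polynomially many of them, so the loop as a whole stays
            # within deterministic polynomial time.
            accept_votes = 0
            total = 0
            for bits in product("01", repeat=num_random_bits):
                r = "".join(bits)
                if randomized_decider(x, r):  # one deterministic simulation per seed
                    accept_votes += 1
                total += 1
            # Bounded error (probability < 1/3) means the majority outcome is correct.
            return 2 * accept_votes > total

    The same count breaks down at O(log^2 n) random bits, where there are quasi-polynomially many seeds; that gap is why the regime studied in the dissertation is the first non-trivial one.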

    Weak Completeness Notions for Exponential Time

    The standard way of proving a problem intractable is to show that it is hard or complete for one of the standard complexity classes containing intractable problems. Lutz (1995) proposed a generalization of this approach by introducing more general weak hardness notions which still imply intractability. While a set A is hard for a class C if all problems in C can be reduced to A (by a polynomial-time bounded many-one reduction) and complete if it is hard and a member of C, Lutz proposed to call a set A weakly hard if a nonnegligible part of C can be reduced to A, and to call A weakly complete if in addition A ∈ C. For the exponential-time classes E = DTIME(2^lin) and EXP = DTIME(2^poly), Lutz formalized these ideas by introducing resource-bounded (Lebesgue) measures on these classes and by saying that a subclass of E is negligible if it has measure 0 in E (and similarly for EXP). A variant of these concepts, based on resource-bounded Baire category in place of measure, was introduced by Ambos-Spies (1996), where now a class is declared to be negligible if it is meager in the corresponding resource-bounded sense. In our thesis we introduce and investigate new, more general weak hardness notions for E and EXP and compare them with the above concepts from the literature. The two main new notions we introduce are nontriviality, which may be viewed as the most general weak hardness notion, and strong nontriviality. In the case of E, a set A is E-nontrivial if, for any k ≥ 1, A has a predecessor in E which is 2^{kn}-complex, i.e., which can only be computed by Turing machines with run times exceeding 2^{kn} on infinitely many inputs; and A is strongly E-nontrivial if there are predecessors which are almost everywhere 2^{kn}-complex. Besides giving examples and structural properties of the E-(non)trivial and strongly E-(non)trivial sets, we separate all weak hardness concepts for E, compare the corresponding concepts for E and EXP, answer the question whether (strongly) E-nontrivial sets are typical among the sets in E (or among the computable sets, or among all sets), investigate the degrees of the (strongly) E-nontrivial sets, and analyze the strength of these concepts if we replace the underlying p-m-reducibility by some weaker polynomial-time reducibilities.
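
    In symbols, the central new notion reads roughly as follows (my transcription of the definition sketched above, not a quotation from the thesis):

    \[
    A \text{ is E-nontrivial} \iff \forall k \ge 1\ \exists B \in \mathrm{E}\colon\ B \le^p_m A \ \text{ and every Turing machine deciding } B \text{ runs for more than } 2^{kn} \text{ steps on infinitely many inputs.}
    \]

    Strong E-nontriviality strengthens "on infinitely many inputs" to "on all but finitely many inputs", i.e., the predecessor must be almost everywhere 2^{kn}-complex.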