784 research outputs found
Nondeterministic functions and the existence of optimal proof systems
We provide new characterizations of two previously studied questions on nondeterministic function classes. Q1: Do nondeterministic functions admit efficient deterministic refinements? Q2: Do nondeterministic function classes contain complete functions? We show that Q1 for the class is equivalent to the question of whether the standard proof system for SAT is p-optimal, and to the assumption that every optimal proof system is p-optimal. Assuming only the existence of a p-optimal proof system for SAT, we show that every set with an optimal proof system has a p-optimal proof system. Under the latter assumption, we also obtain a positive answer to Q2 for the class. An alternative view on nondeterministic functions is provided by disjoint sets and tuples. We pursue this approach for disjoint NP-pairs and their generalizations to tuples of sets, with disjointness conditions of varying strength. In this way, we obtain new characterizations of Q2 for the class. Question Q1 is equivalent to the question of whether every disjoint NP-pair is easy to separate. In addition, we characterize this problem by the question of whether every propositional proof system has the effective interpolation property. These interpolation properties are intimately connected to disjoint NP-pairs, and we show how different interpolation properties can be modeled by NP-pairs associated with the underlying proof system.
The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis
We aim at investigating the solvability/insolvability of nondeterministic
logarithmic-space (NL) decision, search, and optimization problems
parameterized by size parameters using simultaneously polynomial time and
sub-linear space on multi-tape deterministic Turing machines. We are
particularly focused on a special NL-complete problem, 2SAT---the 2CNF Boolean
formula satisfiability problem---parameterized by the number of Boolean
variables. It is shown that 2SAT with n variables and m clauses can be
solved simultaneously in polynomial time and n^{1-c/sqrt(log n)} polylog(m+n) space for an absolute constant c > 0. This fact inspires us to
propose a new, practical working hypothesis, called the linear space hypothesis
(LSH), which states that 2SAT_3---a restricted variant of 2SAT in which each
variable of a given 2CNF formula appears at most 3 times in the form of
literals---cannot be solved simultaneously in polynomial time using strictly
"sub-linear" space (i.e., n^ε polylog(n) space for a certain constant
ε in (0,1)) on all instances. An immediate consequence of
this working hypothesis is L ≠ NL. Moreover, we use our
hypothesis as a plausible basis from which to derive the insolvability of various NL
search problems as well as the nonapproximability of NL optimization problems.
For our investigation, since standard logarithmic-space reductions may no
longer preserve polynomial-time sub-linear-space complexity, we need to
introduce a new, practical notion of "short reduction." It turns out that
2SAT_3, parameterized by the number of variables, is
complete for a syntactically restricted version of NL, called Syntactic
NL, under such short reductions. This fact supports the legitimacy
of our working hypothesis.
Comment: (A4, 10pt, 25 pages) This current article extends and corrects its
preliminary report in the Proc. of the 42nd International Symposium on
Mathematical Foundations of Computer Science (MFCS 2017), August 21-25, 2017,
Aalborg, Denmark, Leibniz International Proceedings in Informatics (LIPIcs),
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik 2017, vol. 83, pp.
62:1-62:14, 2017
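For contrast with the space-bounded question above, 2SAT itself is decidable in linear time by the classical implication-graph method (this is the standard textbook algorithm, not the sub-linear-space algorithm of the paper; the function name and literal encoding below are illustrative). A clause (a ∨ b) yields the implications ¬a → b and ¬b → a, and the formula is satisfiable iff no variable lies in the same strongly connected component as its negation:

```python
def solve_2sat(n, clauses):
    """Decide satisfiability of a 2CNF formula over variables 1..n.

    Each clause is a pair of literals: +i means x_i, -i means NOT x_i.
    Satisfiable iff no variable shares a strongly connected component of
    the implication graph with its own negation.
    """
    # Node encoding: literal +i -> 2*(i-1), literal -i -> 2*(i-1)+1.
    def node(lit):
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    N = 2 * n
    adj = [[] for _ in range(N)]   # implication graph
    radj = [[] for _ in range(N)]  # reversed graph, for Kosaraju's 2nd pass
    for a, b in clauses:
        # (a OR b) yields implications NOT a -> b and NOT b -> a.
        for u, v in ((node(-a), node(b)), (node(-b), node(a))):
            adj[u].append(v)
            radj[v].append(u)

    # Kosaraju's algorithm, iterative to avoid deep recursion:
    # first pass records DFS finishing order on the implication graph.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # Second pass: label components on the reversed graph in reverse
    # finishing order.
    comp = [-1] * N
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = s  # use the root as the component id
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = s
                    stack.append(w)

    # Unsatisfiable iff some x_i and NOT x_i share a component.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

For example, `solve_2sat(2, [(1, 2), (-1, 2), (1, -2)])` reports satisfiable (take both variables true), while the four clauses over two variables with all sign patterns are rejected.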
Randomness in completeness and space-bounded computations
The study of computational complexity investigates the role of various computational resources such as processing time, memory requirements, nondeterminism, randomness, nonuniformity, etc. to solve different types of computational problems. In this dissertation, we study the role of randomness in two fundamental areas of computational complexity: NP-completeness and space-bounded computations.
The concept of completeness plays an important role in defining the notion of 'hard' problems in Computer Science. Intuitively, an NP-complete problem captures the difficulty of solving any problem in NP. Polynomial-time reductions are at the heart of defining completeness. However, there is no single notion of reduction; researchers have identified various polynomial-time reductions such as many-one reduction, truth-table reduction, and Turing reduction. Each such notion of reduction induces a notion of completeness. Finding the relationships among these NP-completeness notions is a significant open problem. Our first result separates two such polynomial-time completeness notions for NP, namely Turing completeness and many-one completeness. This is the first result that separates completeness notions for NP under a worst-case hardness hypothesis.
Our next result involves a conjecture by Even, Selman, and Yacobi [ESY84,SY82], which states that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. If true, this conjecture implies that a certain kind of probabilistic public-key cryptosystem is not secure. The conjecture has been open for 30 years. We provide evidence in support of a variant of this conjecture: we show that if certain secure one-way functions exist, then the ESY conjecture for bounded-truth-table reductions holds.
Now we turn our attention to space-bounded computations. We investigate probabilistic space-bounded machines that are allowed to access their random bits multiple times. Our main conceptual contribution here is to establish an interesting connection between the derandomization of such probabilistic space-bounded machines and the derandomization of probabilistic time-bounded machines. In particular, we show that if we can derandomize a multipass machine, even one with a small number of passes over its random tape and only O(log^2 n) random bits, to deterministic polynomial time, then BPTIME(n) ⊆ DTIME(2^{o(n)}). Note that if we restrict the number of random bits to O(log n), then we can trivially derandomize the machine to polynomial time. Furthermore, it can be shown that if we restrict the number of passes to O(1), we can still derandomize the machine to polynomial time. Thus our result implies that any extension beyond these trivialities will lead to an unknown derandomization of BPTIME(n).
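The "trivial" derandomization mentioned above can be made concrete: a procedure that uses only r = O(log n) random bits has just 2^r = poly(n) possible random strings, so a deterministic machine can run it on every seed and take a majority vote. The following toy Python sketch (illustrative only; it is not the multipass-machine construction from the dissertation) applies this seed enumeration to one round of the Miller-Rabin primality test, where the r random bits select the base:

```python
def miller_rabin_once(n, a):
    """One round of the Miller-Rabin test with base a (True = 'looks prime')."""
    if n % a == 0:
        return n == a
    d, s = n - 1, 0
    while d % 2 == 0:       # write n-1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False


def derandomized_is_prime(n, r=8):
    """Derandomization by seed enumeration: the randomized test above consumes
    one base, i.e. r random bits.  When r = O(log n), the 2^r seeds can all be
    tried deterministically in polynomial time; a majority vote is then correct
    because, for composite n, at least three quarters of all bases are
    Miller-Rabin witnesses."""
    if n < 4:
        return n in (2, 3)
    # Interpret each seed as a base in [2, n-2] and tally the votes.
    passes = sum(miller_rabin_once(n, 2 + s % (n - 3)) for s in range(2 ** r))
    return 2 * passes > 2 ** r
```

The same enumeration idea works for any randomized procedure with logarithmically many random bits, which is exactly why only the regimes beyond O(log n) bits or O(1) passes are interesting in the result above.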
Our final contribution concerns the derandomization of probabilistic time-bounded machines under branching program lower bounds. The standard method of derandomizing time-bounded probabilistic machines depends on various circuit lower bounds, which are notoriously hard to prove. We show that the derandomization of low-degree polynomial identity testing, a well-known problem in co-RP, can be obtained under certain branching program lower bounds. Note that branching programs are considered a weaker model of computation than Boolean circuits.
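Polynomial identity testing is in co-RP via the Schwartz-Zippel lemma: a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|. The sketch below is the textbook randomized algorithm (not the derandomization studied in the dissertation), with illustrative function and parameter names:

```python
import random


def polynomial_identity_test(p, num_vars, degree, trials=20, seed=None):
    """Schwartz-Zippel test: given black-box access to a polynomial p in
    num_vars variables of total degree <= degree, decide with one-sided error
    whether p is identically zero.  With |S| = 2*degree, a nonzero p vanishes
    at a random point of S^num_vars with probability <= 1/2, so 'trials'
    independent points drive the error probability below 2^-trials."""
    rng = random.Random(seed)
    S = range(2 * max(degree, 1))
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(num_vars)]
        if p(*point) != 0:
            return False  # definitely not the zero polynomial
    return True  # identically zero, with high probability
```

For example, `polynomial_identity_test(lambda x, y: (x + y)**2 - x*x - 2*x*y - y*y, 2, 2)` accepts, since the expression is the zero polynomial in disguise. A "yes" answer is only probabilistic; a "no" answer is certain, which is the co-RP one-sidedness.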
A Casual Tour Around a Circuit Complexity Bound
I will discuss the recent proof that the complexity class NEXP
(nondeterministic exponential time) lacks nonuniform ACC circuits of polynomial
size. The proof will be described from the perspective of someone trying to
discover it.
Comment: 21 pages, 2 figures. An earlier version appeared in SIGACT News,
September 2011