Delta-Complete Decision Procedures for Satisfiability over the Reals
We introduce the notion of "\delta-complete decision procedures" for solving
SMT problems over the real numbers, with the aim of handling a wide range of
nonlinear functions including transcendental functions and solutions of
Lipschitz-continuous ODEs. Given an SMT problem \varphi and a positive rational
number \delta, a \delta-complete decision procedure determines either that
\varphi is unsatisfiable, or that the "\delta-weakening" of \varphi is
satisfiable. Here, the \delta-weakening of \varphi is a variant of \varphi that
allows \delta-bounded numerical perturbations on \varphi. We prove the
existence of \delta-complete decision procedures for bounded SMT over reals
with functions mentioned above. For functions in Type 2 complexity class C,
under mild assumptions, the bounded \delta-SMT problem is in NP^C.
\delta-Complete decision procedures can exploit scalable numerical methods for
handling nonlinearity, and we propose to use this notion as an ideal
requirement for numerically-driven decision procedures. As a concrete example,
we formally analyze the DPLL⟨ICP⟩ framework, which integrates Interval
Constraint Propagation (ICP) in DPLL(T), and establish necessary and sufficient
conditions for its \delta-completeness. We discuss practical applications of
\delta-complete decision procedures for correctness-critical applications
including formal verification and theorem proving.
Comment: A shorter version appears in IJCAR 201
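The delta-weakening idea can be made concrete in a toy one-dimensional setting: relax the constraint f(x) = 0 to |f(x)| <= delta, and use a Lipschitz bound on f to rule out solutions between sample points. The sketch below is ours, not the paper's (the function names, the grid strategy, and the refinement budget are all illustrative assumptions); real delta-complete solvers use interval constraint propagation rather than naive grids.

```python
import math

def delta_decide(f, lo, hi, delta, lipschitz, max_depth=30):
    """Toy delta-decision procedure for the bounded constraint f(x) = 0 on [lo, hi].

    Returns "unsat" when no real solution can exist, or "delta-sat" when the
    delta-weakening |f(x)| <= delta is satisfiable.  The Lipschitz constant
    bounds how far f can drift between adjacent grid points, so the "unsat"
    answer is sound; "delta-sat" may rely on the delta-bounded perturbation.
    """
    n = 64
    for _ in range(max_depth):
        h = (hi - lo) / n
        best = min(abs(f(lo + i * h)) for i in range(n + 1))
        if best <= delta:
            return "delta-sat"                 # the weakened formula is satisfied
        if best > delta + lipschitz * h / 2:
            return "unsat"                     # f is bounded away from 0 everywhere
        n *= 2                                 # inconclusive: refine the grid
    return "delta-sat"                         # budget exhausted; err on the weak side

# sin(x) + 0.5 has a real zero in [0, 2*pi]; x^2 + 1 has none.
print(delta_decide(lambda x: math.sin(x) + 0.5, 0.0, 2 * math.pi, 1e-3, 1.0))  # delta-sat
print(delta_decide(lambda x: x * x + 1.0, -1.0, 1.0, 1e-3, 2.0))               # unsat
```

Note how the two answers are asymmetric, exactly as in the abstract: "unsat" is a proof, while "delta-sat" only certifies the perturbed formula.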
Inapproximability of Combinatorial Optimization Problems
We survey results on the hardness of approximating combinatorial optimization
problems.
Parallelism with limited nondeterminism
Computational complexity theory studies which computational problems can be solved with limited access to resources. The past fifty years have seen a focus on the relationship between intractable problems and efficient algorithms. However, the relationship between inherently sequential problems and highly parallel algorithms has not been as well studied. Are there efficient but inherently sequential problems that admit some relaxed form of highly parallel algorithm? In this dissertation, we develop the theory of structural complexity around this relationship for three common types of computational problems.
Specifically, we show tradeoffs between time, nondeterminism, and parallelizability. By clearly defining the notions and complexity classes that capture our intuition for parallelizable and sequential problems, we create a comprehensive framework for rigorously proving parallelizability and non-parallelizability of computational problems. This framework provides the means to prove whether otherwise tractable problems can be effectively parallelized, a need highlighted by the current growth of multiprocessor systems. The views adopted by this dissertation—alternate approaches to solving sequential problems using approximation, limited nondeterminism, and parameterization—can be applied practically throughout computer science.
Simplest random K-satisfiability problem
We study a simple and exactly solvable model for the generation of random
satisfiability problems. These consist of \gamma N random boolean constraints
which are to be satisfied simultaneously by N logical variables. In
statistical-mechanics language, the considered model can be seen as a diluted
p-spin model at zero temperature. While such problems become extraordinarily
hard to solve by local search methods in a large region of the parameter space,
still at least one solution may be superimposed by construction. The
statistical properties of the model can be studied exactly by the replica
method and each single instance can be analyzed in polynomial time by a simple
global solution method. The geometrical/topological structures responsible for
dynamic and static phase transitions as well as for the onset of computational
complexity in local search methods are thoroughly analyzed. Numerical analysis
on very large samples allows for a precise characterization of the critical
scaling behaviour.
Comment: 14 pages, 5 figures, to appear in Phys. Rev. E (Feb 2001). v2: minor
errors and references corrected
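The combination described above (a solution superimposed by construction, and every instance solvable in polynomial time by a global method) can be illustrated with planted XOR-SAT, since parity constraints form a linear system over GF(2). This is a sketch under that interpretation of the model; the helper names and parameters are ours.

```python
import random

def planted_xorsat(n, m, k, rng):
    """Random k-XOR-SAT instance with a planted satisfying assignment.

    Each constraint fixes the parity of k randomly chosen variables to whatever
    the planted assignment gives them, so at least one solution exists by
    construction -- mirroring the superimposed solution in the abstract.
    """
    planted = [rng.randint(0, 1) for _ in range(n)]
    clauses = []
    for _ in range(m):
        vs = rng.sample(range(n), k)
        clauses.append((vs, sum(planted[v] for v in vs) % 2))
    return planted, clauses

def gauss_gf2(n, clauses):
    """Solve the parity constraints as a linear system over GF(2).

    Gaussian elimination runs in polynomial time (the "simple global solution
    method"); returns one satisfying assignment, or None if inconsistent.
    """
    pivots = {}                        # leading-bit column -> (row mask, rhs)
    for vs, b in clauses:
        mask = 0
        for v in vs:
            mask ^= 1 << v
        while mask:                    # reduce by existing pivot rows
            col = mask.bit_length() - 1
            if col not in pivots:
                break
            pmask, pb = pivots[col]
            mask ^= pmask
            b ^= pb
        if mask:
            pivots[mask.bit_length() - 1] = (mask, b)
        elif b:
            return None                # row reduced to 0 = 1: inconsistent
    x = [0] * n                        # free variables default to 0
    for col in sorted(pivots):         # back-substitute low columns first
        mask, b = pivots[col]
        x[col] = b ^ (sum(x[j] for j in range(col) if mask >> j & 1) % 2)
    return x

rng = random.Random(2024)
planted, clauses = planted_xorsat(30, 40, 3, rng)
solution = gauss_gf2(30, clauses)      # satisfies every clause, as does `planted`
```

While local search struggles on such instances in parts of the parameter space, the global linear-algebra view dissolves the problem entirely.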
The Power of Unentanglement
The class QMA(k), introduced by Kobayashi et al., consists of all languages that can be verified using k unentangled quantum proofs. Many of the simplest questions about this class have remained embarrassingly open: for example, can we give any evidence that k quantum proofs are more powerful than one? Does QMA(k) = QMA(2) for k ≥ 2? Can QMA(k) protocols be amplified to exponentially small error?
In this paper, we make progress on all of the above questions.
* We give a protocol by which a verifier can be convinced that a 3SAT formula of size m is satisfiable, with constant soundness, given Õ(√m) unentangled quantum witnesses with O(log m) qubits each. Our protocol relies on the existence of very short PCPs.
* We show that assuming a weak version of the Additivity Conjecture from quantum information theory, any QMA(2) protocol can be amplified to exponentially small error, and QMA(k) = QMA(2) for all k ≥ 2.
* We prove the nonexistence of "perfect disentanglers" for simulating multiple Merlins with one.
Constraint LTL Satisfiability Checking without Automata
This paper introduces a novel technique to decide the satisfiability of
formulae written in the language of Linear Temporal Logic with both future and
past operators and atomic formulae belonging to constraint system D (CLTLB(D)
for short). The technique is based on the concept of bounded satisfiability,
and hinges on an encoding of CLTLB(D) formulae into QF-EUD, the theory of
quantifier-free equality and uninterpreted functions combined with D. Similarly
to standard LTL, where bounded model-checking and SAT-solvers can be used as an
alternative to automata-theoretic approaches to model-checking, our approach
allows users to solve the satisfiability problem for CLTLB(D) formulae through
SMT-solving techniques, rather than by checking the emptiness of the language
of a suitable automaton A_{\phi}. The technique is effective, and it has been
implemented in our Zot formal verification tool.
Comment: 39 pages
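The core notion of bounded satisfiability (fixing a bound k and searching for a satisfying behaviour of length k) can be shown in miniature by brute-force enumeration of traces for plain LTL-style properties. This toy enumerator is purely illustrative: the paper's actual technique encodes CLTLB(D) formulae into QF-EUD and delegates the search to an SMT solver, which scales far beyond anything exhaustive.

```python
from itertools import product

def all_states(props):
    """Every truth assignment to the atomic propositions, as a frozenset."""
    for bits in product([False, True], repeat=len(props)):
        yield frozenset(p for p, b in zip(props, bits) if b)

def bounded_sat(formula, props, k):
    """Bounded satisfiability by brute force: try every trace of length k.

    A trace is a k-tuple of states; `formula(trace)` returns True when the
    property holds on that trace.  Returns a witness trace or None.
    """
    for trace in product(list(all_states(props)), repeat=k):
        if formula(trace):
            return trace
    return None

# "eventually p, and q at every instant", checked over traces of length 3
phi = lambda trace: any('p' in s for s in trace) and all('q' in s for s in trace)
witness = bounded_sat(phi, ['p', 'q'], 3)   # a witness trace exists
```

An SMT encoding replaces this exponential enumeration with a single quantifier-free formula whose models are exactly the bounded traces.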
The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis
We aim at investigating the solvability/insolvability of nondeterministic
logarithmic-space (NL) decision, search, and optimization problems
parameterized by size parameters using simultaneously polynomial time and
sub-linear space on multi-tape deterministic Turing machines. We are
particularly focused on a special NL-complete problem, 2SAT---the 2CNF Boolean
formula satisfiability problem---parameterized by the number of Boolean
variables. It is shown that 2SAT with n variables and m clauses can be solved
simultaneously in polynomial time and n^{1 - c/\sqrt{\log n}} polylog(mn) space
for an absolute constant c > 0. This fact inspires us to
propose a new, practical working hypothesis, called the linear space hypothesis
(LSH), which states that 2SAT_3---a restricted variant of 2SAT in which each
variable of a given 2CNF formula appears at most 3 times in the form of
literals---cannot be solved simultaneously in polynomial time using strictly
"sub-linear" (i.e., n^{\epsilon} polylog(n) for a certain constant
\epsilon < 1) space on all instances. An immediate consequence of
this working hypothesis is L \neq NL. Moreover, we use our
hypothesis as a plausible basis from which to derive the insolvability of various NL
search problems as well as the nonapproximability of NL optimization problems.
For our investigation, since standard logarithmic-space reductions may no
longer preserve polynomial-time sub-linear-space complexity, we need to
introduce a new, practical notion of "short reduction." It turns out that,
parameterized with the number of variables, 2SAT_3 is
complete for a syntactically restricted version of NL, called Syntactic NL,
under such short reductions. This fact supports the legitimacy
of our working hypothesis.
Comment: (A4, 10pt, 25 pages) This current article extends and corrects its
preliminary report in the Proc. of the 42nd International Symposium on
Mathematical Foundations of Computer Science (MFCS 2017), August 21-25, 2017,
Aalborg, Denmark, Leibniz International Proceedings in Informatics (LIPIcs),
Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik 2017, vol. 83, pp.
62:1-62:14, 201
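For context on why 2SAT is the natural NL-complete benchmark here: unlike 3SAT, it is decidable in polynomial (indeed linear) time via the implication graph of Aspvall, Plass and Tarjan. The sketch below (function names ours) implements that classic idea with Kosaraju's SCC algorithm; it illustrates the time bound only and says nothing about the paper's sub-linear-space machinery.

```python
def solve_2sat(n, clauses):
    """Polynomial-time 2SAT via implication-graph SCCs (Kosaraju's algorithm).

    Literals are (variable, polarity) pairs; a clause (a or b) contributes the
    implications not-a -> b and not-b -> a.  The formula is satisfiable iff no
    variable lands in the same strongly connected component as its negation.
    Returns a satisfying assignment as a list of booleans, or None.
    """
    N = 2 * n
    idx = lambda var, pos: 2 * var + (1 if pos else 0)
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for (a, pa), (b, pb) in clauses:
        u, v = idx(a, pa), idx(b, pb)
        adj[u ^ 1].append(v); radj[v].append(u ^ 1)   # not-a -> b
        adj[v ^ 1].append(u); radj[u].append(v ^ 1)   # not-b -> a
    # pass 1: order vertices by DFS finish time (iterative DFS)
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            w = next(it, None)
            if w is None:
                order.append(u); stack.pop()
            elif not seen[w]:
                seen[w] = True
                stack.append((w, iter(adj[w])))
    # pass 2: components of the reversed graph, in reverse finish order;
    # component ids then increase along the topological order of the graph
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for w in radj[u]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None
    # a literal later in topological order (larger component id) is set true
    return [comp[idx(v, True)] > comp[idx(v, False)] for v in range(n)]

# (x0 or x1) and (not x0 or x1) and (not x1 or x2): forces x1 and x2 true
cls = [((0, True), (1, True)), ((0, False), (1, True)), ((1, False), (2, True))]
assignment = solve_2sat(3, cls)
```

The two linear-time graph passes make the polynomial-time claim concrete; the delicate part of the paper is doing such reachability reasoning in sub-linear space.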
Uniform Diagonalization Theorem for Complexity Classes of Promise Problems including Randomized and Quantum Classes
Diagonalization in the spirit of Cantor's diagonal arguments is a widely used
tool in theoretical computer science to obtain structural results about
computational problems and complexity classes by indirect proofs. The Uniform
Diagonalization Theorem allows the construction of problems outside complexity
classes while still being reducible to a specific decision problem. This paper
provides a generalization of the Uniform Diagonalization Theorem by extending
it to promise problems and the complexity classes they form, e.g. randomized
and quantum complexity classes. The theorem requires from the underlying
computing model not only the decidability of its acceptance and rejection
behaviour but also of its promise-contradicting indifferent behaviour - a
property that we will introduce as "total decidability" of promise problems.
Implications of the Uniform Diagonalization Theorem are mainly of two kinds:
1. Existence of intermediate problems (e.g. between BQP and QMA) - also known
as Ladner's Theorem - and 2. Undecidability of whether a problem of a complexity class
is contained in a subclass (e.g. membership of a QMA-problem in BQP). Like the
original Uniform Diagonalization Theorem the extension applies besides BQP and
QMA to a large variety of complexity class pairs, including combinations from
deterministic, randomized and quantum classes.
Comment: 15 pages
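The bare diagonal step underlying all of this can be shown in a few lines: given any enumeration of total 0/1-valued functions, flip each one on its own index. The Uniform Diagonalization Theorem is far subtler (it diagonalizes lazily against machines, keeps the constructed problem reducible to a given one, and must handle promise-contradicting behaviour), so the toy below only illustrates the Cantor-style core; the names are ours.

```python
def diagonalize(enumeration):
    """Cantor-style diagonal construction.

    Given an enumeration i -> M_i of total 0/1-valued functions on the
    naturals, return a function D with D(i) != M_i(i) for every i, so D
    coincides with no M_i in the enumeration.
    """
    return lambda i: 1 - enumeration(i)(i)

# toy enumeration: M_i(x) = 1 iff x is a multiple of i + 1
machines = lambda i: (lambda x, m=i + 1: 1 if x % m == 0 else 0)
D = diagonalize(machines)
# D disagrees with every M_i at input i by construction
```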