
    The Complexity of Model Checking Higher-Order Fixpoint Logic

    Higher-Order Fixpoint Logic (HFL) is a hybrid of the simply typed \lambda-calculus and the modal \mu-calculus. This makes it a highly expressive temporal logic that is capable of expressing various interesting correctness properties of programs that are not expressible in the modal \mu-calculus. This paper provides complexity results for its model checking problem. In particular, we consider those fragments of HFL built by using only types of bounded order k and arity m. We establish k-fold exponential time completeness for model checking each such fragment. For the upper bound we use fixpoint elimination to obtain reachability games that are singly exponential in the size of the formula and k-fold exponential in the size of the underlying transition system. These games can be solved in deterministic linear time. As a simple consequence, we obtain an exponential time upper bound on the expression complexity of each such fragment. The lower bound is established by a reduction from the word problem for alternating (k-1)-fold exponential space bounded Turing Machines. Since there are fixed machines of that type whose word problems are already hard with respect to k-fold exponential time, we obtain, as a corollary, k-fold exponential time completeness for the data complexity of our fragments of HFL, provided m exceeds 3. This also yields a hierarchy result in expressive power.
    Comment: 33 pages, 2 figures, to be published in Logical Methods in Computer Science
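
    To make the fixpoint flavour of the above concrete, here is a minimal Python sketch (not the paper's fixpoint-elimination construction; the transition system and proposition are invented for illustration) that evaluates the modal mu-calculus formula mu X. p \/ <>X, i.e. "a p-state is reachable", by plain Kleene iteration over a finite transition system.

    ```python
    def diamond(trans, S):
        """<>S : states with at least one successor in S."""
        return {s for s, succs in trans.items() if succs & S}

    def lfp(step):
        """Least fixpoint of a monotone set transformer, by Kleene iteration."""
        S = set()
        while True:
            T = step(S)
            if T == S:
                return S
            S = T

    # Toy transition system: 0 -> 1 -> 2, 3 -> 3; the proposition p holds only in state 2.
    trans = {0: {1}, 1: {2}, 2: set(), 3: {3}}
    p_states = {2}

    # mu X. p \/ <>X : the states from which a p-state is reachable.
    print(lfp(lambda S: p_states | diamond(trans, S)))   # {0, 1, 2}
    ```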

    Power of Quantum Computation with Few Clean Qubits

    This paper investigates the power of polynomial-time quantum computation in which only a very limited number of qubits are initially clean in the |0> state, and all the remaining qubits are initially in the totally mixed state. No initialization of qubits is allowed during the computation, nor are intermediate measurements. The main results of this paper are unexpectedly strong error-reducible properties of such quantum computations. It is proved that any problem solvable by a polynomial-time quantum computation with one-sided bounded error that uses logarithmically many clean qubits can also be solved with exponentially small one-sided error using just two clean qubits, and with polynomially small one-sided error using just one clean qubit. It is further proved in the case of two-sided bounded error that any problem solvable by such a computation with a constant gap between completeness and soundness using logarithmically many clean qubits can also be solved with exponentially small two-sided error using just two clean qubits. If only one clean qubit is available, the problem is still solvable with exponentially small error on one side (completeness or soundness) and polynomially small error on the other. As an immediate consequence of the above result for the two-sided-error case, it follows that the TRACE ESTIMATION problem defined with fixed constant threshold parameters is complete for the classes of problems solvable by polynomial-time quantum computations with completeness 2/3 and soundness 1/3 using logarithmically many clean qubits and just one clean qubit. The techniques used for proving the error-reduction results may be of independent interest in themselves, and one of the technical tools can also be used to show the hardness of weak classical simulations of one-clean-qubit computations (i.e., DQC1 computations).
    Comment: 44 pages + cover page; the results in Section 8 overlap with the main results in arXiv:1409.677
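
    For context on the TRACE ESTIMATION problem mentioned above, the following numpy-based sketch simulates the standard one-clean-qubit (DQC1) trace-estimation circuit: a Hadamard on the clean qubit, a controlled-U on the maximally mixed register, after which the X expectation of the clean qubit equals Re(tr U)/2^n. This is textbook DQC1, not the paper's error-reduction construction, and the example unitary is arbitrary.

    ```python
    import numpy as np

    def dqc1_x_expectation(U):
        d = U.shape[0]                                   # register dimension 2**n
        rho = np.kron(np.array([[1, 0], [0, 0]]), np.eye(d) / d)   # |0><0| (x) I/d
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        HI = np.kron(H, np.eye(d))
        CU = np.block([[np.eye(d), np.zeros((d, d))],    # controlled-U: apply U
                       [np.zeros((d, d)), U]])           # only on the |1> branch
        rho = CU @ HI @ rho @ HI.conj().T @ CU.conj().T
        X = np.array([[0, 1], [1, 0]])
        return np.real(np.trace(np.kron(X, np.eye(d)) @ rho))

    U = np.diag(np.exp(1j * np.array([0.3, 1.1, 2.0, 2.7])))   # some 2-qubit unitary
    print(dqc1_x_expectation(U))                         # equals ...
    print(np.real(np.trace(U)) / 4)                      # ... Re(tr U) / 2**n
    ```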

    Delta-Complete Decision Procedures for Satisfiability over the Reals

    We introduce the notion of "\delta-complete decision procedures" for solving SMT problems over the real numbers, with the aim of handling a wide range of nonlinear functions including transcendental functions and solutions of Lipschitz-continuous ODEs. Given an SMT problem \varphi and a positive rational number \delta, a \delta-complete decision procedure determines either that \varphi is unsatisfiable, or that the "\delta-weakening" of \varphi is satisfiable. Here, the \delta-weakening of \varphi is a variant of \varphi that allows \delta-bounded numerical perturbations on \varphi. We prove the existence of \delta-complete decision procedures for bounded SMT over the reals with the functions mentioned above. For functions in a Type 2 complexity class C, under mild assumptions, the bounded \delta-SMT problem is in NP^C. \delta-Complete decision procedures can exploit scalable numerical methods for handling nonlinearity, and we propose to use this notion as an ideal requirement for numerically-driven decision procedures. As a concrete example, we formally analyze the DPLL<ICP> framework, which integrates Interval Constraint Propagation (ICP) into DPLL(T), and establish necessary and sufficient conditions for its \delta-completeness. We discuss practical applications of \delta-complete decision procedures in correctness-critical areas including formal verification and theorem proving.
    Comment: A shorter version appears in IJCAR 201
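
    The following toy sketch illustrates the \delta-decision idea for a single constraint f(x) = 0 over a bounded interval, replacing full interval arithmetic and ICP with a simple Lipschitz-bound enclosure; the function, Lipschitz constant and thresholds are invented for illustration, and this is not the paper's procedure.

    ```python
    import math

    def delta_decide(f, L, lo, hi, delta):
        """Either 'unsat' (f has no zero in [lo, hi]) or 'delta-sat' with a
        witness of the delta-weakening |f(x)| <= delta.  L bounds f's slope."""
        boxes = [(lo, hi)]
        while boxes:
            a, b = boxes.pop()
            m, half = (a + b) / 2, (b - a) / 2
            fm = f(m)
            if abs(fm) <= delta:
                return "delta-sat", m          # delta-weakening is satisfiable
            if abs(fm) > L * half:
                continue                       # enclosure excludes 0: prune box
            boxes += [(a, m), (m, b)]          # undecided: bisect
        return "unsat", None                   # f(x) = 0 has no solution here

    def f(x):
        return math.sin(x) - 0.3 * x           # Lipschitz constant <= 1.3

    print(delta_decide(f, 1.3, 1.0, 4.0, 1e-6))   # a zero exists near x ~ 2.35
    print(delta_decide(f, 1.3, 4.0, 6.0, 1e-6))   # f < 0 throughout [4, 6] -> unsat
    ```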

    The Complexity of Planning Revisited - A Parameterized Analysis

    The early classifications of the computational complexity of planning under various restrictions in STRIPS (Bylander) and SAS+ (Baeckstroem and Nebel) have influenced subsequent research in planning in many ways. We go back and reanalyse their subclasses, but this time using the more modern tool of parameterized complexity analysis. This provides new results that, together with the old ones, give a more detailed picture of the complexity landscape. We demonstrate separation results that are not possible with standard complexity theory, which helps to explain why certain cases of planning have seemed simpler in practice than theory has predicted. In particular, we show that certain restrictions of practical interest are tractable in the parameterized sense of the term, and that a simple heuristic is sufficient to make a well-known partial-order planner exploit this fact.
    Comment: (author's self-archived copy)
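
    As a toy illustration of the parameterized viewpoint only (the action set and the choice of plan length as the parameter are invented here and are not the paper's subclasses), the sketch below decides propositional STRIPS-style plan existence by brute force over all plans of length at most k.

    ```python
    # Brute force over plans of length at most k (the parameter), for
    # propositional STRIPS-style actions (precondition, add, delete sets).
    # Purely illustrative; exponential in k, polynomial for any fixed k.

    ACTIONS = {
        "boil": (frozenset({"water"}), frozenset({"hot_water"}), frozenset()),
        "brew": (frozenset({"hot_water", "coffee"}), frozenset({"drink"}),
                 frozenset({"hot_water", "coffee"})),
    }

    def plan_exists(state, goal, k):
        if goal <= state:
            return True                      # goal already satisfied
        if k == 0:
            return False                     # no steps left
        for pre, add, delete in ACTIONS.values():
            if pre <= state and plan_exists((state - delete) | add, goal, k - 1):
                return True
        return False

    init = frozenset({"water", "coffee"})
    print(plan_exists(init, frozenset({"drink"}), 1))   # False: needs two steps
    print(plan_exists(init, frozenset({"drink"}), 2))   # True: boil, then brew
    ```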

    Constraint LTL Satisfiability Checking without Automata

    This paper introduces a novel technique to decide the satisfiability of formulae written in the language of Linear Temporal Logic with both future and past operators and atomic formulae belonging to a constraint system D (CLTLB(D) for short). The technique is based on the concept of bounded satisfiability, and hinges on an encoding of CLTLB(D) formulae into QF-EUD, the theory of quantifier-free equality and uninterpreted functions combined with D. Just as bounded model checking and SAT solvers offer an alternative to automata-theoretic model checking for standard LTL, our approach allows users to solve the satisfiability problem for CLTLB(D) formulae through SMT-solving techniques, rather than by checking the emptiness of the language of a suitable automaton A_\phi. The technique is effective, and it has been implemented in our Zot formal verification tool.
    Comment: 39 pages
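
    In the same bounded-satisfiability spirit, the sketch below (assuming the z3 Python bindings; the property and the bound k are invented for illustration, and this is far simpler than the paper's CLTLB(D)-to-QF-EUD encoding) unrolls k steps of a small constraint-LTL-like property over an integer counter and hands the result to an SMT solver.

    ```python
    # Bounded satisfiability of:  x0 = 0,  G(next(x) = x + 1),  F(x > 3)
    # over k unrolled steps, solved with z3 instead of an automaton check.
    from z3 import Int, Solver, And, Or, sat

    k = 6
    x = [Int(f"x_{i}") for i in range(k + 1)]

    s = Solver()
    s.add(x[0] == 0)
    s.add(And([x[i + 1] == x[i] + 1 for i in range(k)]))   # bounded "globally"
    s.add(Or([x[i] > 3 for i in range(k + 1)]))            # bounded "eventually"

    if s.check() == sat:
        m = s.model()
        print([m[v].as_long() for v in x])   # [0, 1, 2, 3, 4, 5, 6]
    else:
        print("unsatisfiable within bound k =", k)
    ```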

    The parameterized space complexity of model-checking bounded variable first-order logic

    The parameterized model-checking problem for a class of first-order sentences (queries) asks whether a given sentence from the class holds true in a given relational structure (database); the parameter is the length of the sentence. We study the parameterized space complexity of the model-checking problem for queries with a bounded number of variables. For each bound on the quantifier alternation rank, the problem becomes complete for the corresponding level of what we call the tree hierarchy, a hierarchy of parameterized complexity classes defined via space-bounded alternating machines between parameterized logarithmic space and fixed-parameter tractable time. We observe that a parameterized logarithmic space model-checker for existential bounded-variable queries would make it possible to improve Savitch's classical simulation of nondeterministic logarithmic space in deterministic space O(\log^2 n). Further, we define a highly space-efficient model-checker for queries with a bounded number of variables and bounded quantifier alternation rank. We study its optimality under the assumption that Savitch's Theorem is optimal.
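
    The Savitch simulation referred to above is the classical recursive midpoint reachability procedure: reach(u, v, 2^i) holds iff some vertex w splits the path into two halves of length at most 2^(i-1), giving recursion depth O(log n) with O(log n) bits per frame, i.e. O(log^2 n) space. The Python sketch below (graph and names invented for illustration) shows the recursion, not a genuinely space-bounded implementation.

    ```python
    def savitch_reach(adj, u, v, i):
        """Is v reachable from u in at most 2**i steps?  Recurses through a
        midpoint w, mirroring Savitch's O(log^2 n)-space simulation."""
        if u == v:
            return True
        if i == 0:
            return v in adj.get(u, ())
        return any(savitch_reach(adj, u, w, i - 1) and
                   savitch_reach(adj, w, v, i - 1)
                   for w in adj)                 # try every midpoint w

    adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
    n = len(adj)
    i = max(1, (n - 1).bit_length())             # 2**i bounds any simple path
    print(savitch_reach(adj, 0, 3, i))           # True
    print(savitch_reach(adj, 3, 0, i))           # False
    ```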

    Do Hard SAT-Related Reasoning Tasks Become Easier in the Krom Fragment?

    Many reasoning problems are based on the problem of satisfiability (SAT). While SAT itself becomes easy when the structure of the formulas is restricted in certain ways, the situation is more opaque for more involved decision problems. We consider here the CardMinSat problem, which asks, given a propositional formula \phi and an atom x, whether x is true in some cardinality-minimal model of \phi. This problem is easy for the Horn fragment, but, as we show in this paper, it remains \Theta_2-complete (and thus NP-hard) for the Krom fragment (which is given by formulas in CNF where clauses have at most two literals). We make use of this fact to study the complexity of reasoning tasks in belief revision and logic-based abduction and show that, while in some cases the restriction to Krom formulas leads to a decrease in complexity, in others it does not. We thus also consider the CardMinSat problem under additional restrictions on Krom formulas, towards a better understanding of the tractability frontier of such problems.
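
    For reference, the brute-force sketch below merely restates the CardMinSat definition from the abstract: enumerate all models, keep the cardinality-minimal ones, and test whether the atom x is true in one of them. It is exponential and says nothing about the \Theta_2-completeness argument; the example Krom formula is invented.

    ```python
    # Brute-force CardMinSat: clauses are tuples of non-zero integers over
    # variables 1..n, negative literals are negated atoms.
    from itertools import product

    def card_min_sat(n, clauses, x):
        models = [m for m in product([False, True], repeat=n)
                  if all(any(m[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]
        if not models:
            return False                      # unsatisfiable: no minimal model
        min_card = min(sum(m) for m in models)
        return any(m[x - 1] for m in models if sum(m) == min_card)

    # Krom (2-CNF) example: (x1 or x2) and (not x1 or x3)
    clauses = [(1, 2), (-1, 3)]
    print(card_min_sat(3, clauses, 2))   # True : {x2} is the minimal model
    print(card_min_sat(3, clauses, 1))   # False: x1 is false in it
    ```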

    Efficient Open World Reasoning for Planning

    We consider the problem of reasoning and planning with incomplete knowledge and deterministic actions. We introduce a knowledge representation scheme called PSIPLAN that can effectively represent incompleteness of an agent's knowledge while allowing for sound, complete and tractable entailment in domains where the set of all objects is either unknown or infinite. We present a procedure for state update resulting from taking an action in PSIPLAN that is correct, complete and has only polynomial complexity. State update is performed without considering the set of all possible worlds corresponding to the knowledge state. As a result, planning with PSIPLAN is done without direct manipulation of possible worlds. The PSIPLAN representation underlies the PSIPOP planning algorithm, which handles quantified goals with or without exceptions, a capability that no other domain-independent planner has been shown to achieve. PSIPLAN has been implemented in Common Lisp and used in an application to planning in a collaborative interface.
    Comment: 39 pages, 13 figures, to appear in Logical Methods in Computer Science

    Model Checking CTL is Almost Always Inherently Sequential

    The model checking problem for CTL is known to be P-complete (Clarke, Emerson, and Sistla (1986); see Schnoebelen (2002)). We consider fragments of CTL obtained by restricting the use of temporal modalities or the use of negations, restrictions already studied for LTL by Sistla and Clarke (1985) and Markey (2004). For all these fragments, except for the trivial case without any temporal operator, we systematically prove model checking to be either inherently sequential (P-complete) or very efficiently parallelizable (LOGCFL-complete). For most fragments, however, model checking for CTL is already P-complete. Hence our results indicate that in most applications, approaching CTL model checking by parallelism will not result in the desired speed-up. We also completely determine the complexity of the model checking problem for all fragments of the extensions ECTL, CTL+, and ECTL+.
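
    As a reminder of what is being restricted, the sketch below implements the classical bottom-up CTL labelling procedure (the inherently sequential, P-complete baseline) for E[A U B] and EG A on an explicit Kripke structure; the structure and atomic proposition are invented for illustration and are not taken from the paper.

    ```python
    def pre(trans, S):
        """States with at least one successor in S."""
        return {s for s, succs in trans.items() if succs & S}

    def eval_EU(trans, A, B):
        """E[A U B] as a least fixpoint."""
        S = set(B)
        while True:
            T = set(B) | (A & pre(trans, S))
            if T == S:
                return S
            S = T

    def eval_EG(trans, A):
        """EG A as a greatest fixpoint."""
        S = set(A)
        while True:
            T = A & pre(trans, S)
            if T == S:
                return S
            S = T

    # Two-state structure: 0 -> 1, 1 -> 0, 1 -> 1; the proposition p holds in 1.
    states = {0, 1}
    trans = {0: {1}, 1: {0, 1}}
    p = {1}
    print(eval_EU(trans, states, p))   # E[true U p] = EF p = {0, 1}
    print(eval_EG(trans, p))           # EG p = {1}
    ```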