
    Syntactic characterizations of polynomial time optimization classes

    The characterization of important complexity classes by logical descriptions has been a prolific area of descriptive complexity. However, the central focus of the research has been the study of classes such as P, NP, L and NL, corresponding to decision problems (e.g. the characterization of NP by Fagin [Fag74] and of P by Grädel [Gra91]). In contrast, optimization problems have received much less attention. Optimization problems corresponding to the NP class have been characterized in terms of logic expressions by Papadimitriou and Yannakakis, Panconesi and Ranjan, Kolaitis and Thakur, Khanna et al., and by Zimand. In this paper, we attempt to characterize the optimization versions of P via expressions in second-order logic, many of them using universal Horn formulae with successor relations. These results nicely complement those of Kolaitis and Thakur [KT94] for polynomially bounded NP-optimization problems. The polynomially bounded versions of maximization and minimization problems are treated first, followed by the maximization problems in the not necessarily polynomially bounded class.
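    For orientation, the syntactic optimization classes in the Kolaitis-Thakur tradition mentioned above define the optimum of a polynomially bounded maximization problem as the largest number of witness tuples satisfying a first-order formula over a second-order guess. The sketch below shows that generic template, not the specific universal-Horn-with-successor form studied in the paper:

        % Generic Kolaitis--Thakur-style template: the optimum of a polynomially
        % bounded maximization problem on a finite structure A is the largest
        % number of witness tuples satisfying a first-order formula \varphi for
        % some choice of second-order relations S.
        \[
          \mathrm{opt}(\mathbf{A}) \;=\; \max_{S}\,
          \bigl|\{\, \bar{w} : (\mathbf{A}, S) \models \varphi(\bar{w}, S) \,\}\bigr|
        \]
        % The P-optimization classes treated in the paper further restrict \varphi,
        % e.g. to universal Horn formulae in the presence of a successor relation.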

    Frameworks for logically classifying polynomial-time optimisation problems.

    We show that a logical framework, based around a fragment of existential second-order logic formerly proposed by others so as to capture the class of polynomially-bounded P-optimisation problems, cannot hope to do so, under the assumption that P ≠ NP. We do this by exhibiting polynomially-bounded maximisation and minimisation problems that can be expressed in the framework but whose decision versions are NP-complete. We propose an alternative logical framework, based around inflationary fixed-point logic, and show that we can capture the above classes of optimisation problems. We use the inductive depth of an inflationary fixed point as a means to describe the objective functions of the instances of our optimisation problems.
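    As background for the second framework (this is standard inflationary fixed-point machinery, not the paper's specific construction), the inflationary fixed point of a formula is computed in stages, and the inductive depth of a tuple is the first stage at which it enters the relation; it is this stage number that serves as the handle on objective values:

        % Stages of the inflationary fixed point of \varphi(\bar{x}, R) on a
        % finite structure A:
        \begin{align*}
          R^{0}   &= \emptyset,\\
          R^{i+1} &= R^{i} \cup \{\, \bar{a} : (\mathbf{A}, R^{i}) \models \varphi(\bar{a}, R^{i}) \,\}.
        \end{align*}
        % On a finite structure the sequence is inflationary and stabilizes after
        % polynomially many stages; the inductive depth of a tuple is the least i
        % with that tuple in R^{i}.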

    Strong Equivalence of Qualitative Optimization Problems

    We introduce the framework of qualitative optimization problems (or, simply, optimization problems) to represent preference theories. The formalism uses separate modules to describe the space of outcomes to be compared (the generator) and the preferences on outcomes (the selector). We consider two types of optimization problems. They differ in the way the generator, which we model by a propositional theory, is interpreted: by the standard propositional logic semantics, and by the equilibrium-model (answer-set) semantics. Under the latter interpretation of generators, optimization problems directly generalize answer-set optimization programs proposed previously. We study strong equivalence of optimization problems, which guarantees their interchangeability within any larger context. We characterize several versions of strong equivalence obtained by restricting the class of optimization problems that can be used as extensions, and establish the complexity of the associated reasoning tasks. Understanding strong equivalence is essential for modular representation of optimization problems and for rewriting techniques that simplify them without changing their inherent properties.
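    The notion of strong equivalence used here follows the usual pattern, stated generically below; the paper studies several restricted versions obtained by limiting the admissible extensions. Two optimization problems are strongly equivalent when they stay interchangeable under every extension:

        % Generic strong equivalence of optimization problems P and Q:
        % extending both by an arbitrary context R must yield the same
        % optimal outcomes.
        \[
          P \equiv_{s} Q
          \quad\Longleftrightarrow\quad
          \text{for every extension } R:\ \mathrm{Opt}(P \cup R) = \mathrm{Opt}(Q \cup R).
        \]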

    The 2CNF Boolean Formula Satisfiability Problem and the Linear Space Hypothesis

    We aim at investigating the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, simultaneously using polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT---the 2CNF Boolean formula satisfiability problem---parameterized by the number of Boolean variables. It is shown that 2SAT with $n$ variables and $m$ clauses can be solved simultaneously in polynomial time and $(n/2^{c\sqrt{\log n}})\,\mathrm{polylog}(m+n)$ space for an absolute constant $c>0$. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that $\mathrm{2SAT}_3$---a restricted variant of 2SAT in which each variable of a given 2CNF formula appears at most 3 times in the form of literals---cannot be solved simultaneously in polynomial time using strictly "sub-linear" (i.e., $m(x)^{\varepsilon}\,\mathrm{polylog}(|x|)$ for a certain constant $\varepsilon\in(0,1)$) space on all instances $x$. An immediate consequence of this working hypothesis is $\mathrm{L}\neq\mathrm{NL}$. Moreover, we use our hypothesis as a plausible basis to lead to the insolvability of various NL search problems as well as the nonapproximability of NL optimization problems. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we need to introduce a new, practical notion of "short reduction." It turns out that, parameterized with the number of variables, $\overline{\mathrm{2SAT}_3}$ is complete for a syntactically restricted version of NL, called Syntactic $\mathrm{NL}_{\omega}$, under such short reductions. This fact supports the legitimacy of our working hypothesis. Comment: (A4, 10pt, 25 pages) This current article extends and corrects its preliminary report in the Proc. of the 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), August 21-25, 2017, Aalborg, Denmark, Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik 2017, vol. 83, pp. 62:1-62:14, 2017.
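    For context, the textbook way to decide 2SAT builds the implication graph of the formula and checks, via strongly connected components, whether some variable ends up in the same component as its negation. This runs in linear time but also linear space, which is exactly the bound the sub-linear-space result above improves on. A minimal sketch (illustrative only, not the paper's algorithm; the clause encoding and function names are my own):

    # Standard 2SAT decision procedure via the implication graph and
    # strongly connected components (Kosaraju's algorithm, iterative DFS).
    # A clause (a, b) with nonzero ints a, b means "literal a OR literal b";
    # a positive integer v is the variable v, and -v is its negation.

    def two_sat_satisfiable(num_vars, clauses):
        n = 2 * num_vars                      # one graph node per literal

        def node(lit):                        # map literal to node index
            v = abs(lit) - 1
            return 2 * v + (0 if lit > 0 else 1)

        def neg(x):                           # node of the complementary literal
            return x ^ 1

        graph = [[] for _ in range(n)]
        rgraph = [[] for _ in range(n)]
        for a, b in clauses:                  # (a or b) == (!a -> b) and (!b -> a)
            for u, v in ((neg(node(a)), node(b)), (neg(node(b)), node(a))):
                graph[u].append(v)
                rgraph[v].append(u)

        # First pass: record DFS finish order on the implication graph.
        order, seen = [], [False] * n
        for s in range(n):
            if seen[s]:
                continue
            stack = [(s, iter(graph[s]))]
            seen[s] = True
            while stack:
                u, it = stack[-1]
                advanced = False
                for v in it:
                    if not seen[v]:
                        seen[v] = True
                        stack.append((v, iter(graph[v])))
                        advanced = True
                        break
                if not advanced:
                    order.append(u)
                    stack.pop()

        # Second pass: label components on the reversed graph,
        # processing nodes in reverse finish order.
        comp = [-1] * n
        label = 0
        for s in reversed(order):
            if comp[s] != -1:
                continue
            stack = [s]
            comp[s] = label
            while stack:
                u = stack.pop()
                for v in rgraph[u]:
                    if comp[v] == -1:
                        comp[v] = label
                        stack.append(v)
            label += 1

        # Unsatisfiable iff some variable and its negation share a component.
        return all(comp[2 * v] != comp[2 * v + 1] for v in range(num_vars))

    # Example: (x1 or x2) and (!x1 or x2) and (!x2 or x1) is satisfiable (x1 = x2 = True).
    print(two_sat_satisfiable(2, [(1, 2), (-1, 2), (-2, 1)]))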

    Observation of implicit complexity by non confluence

    We propose to consider non-confluence with respect to implicit complexity. We come back to some well-known classes of first-order functional programs for which we have a characterization of their intensional properties, namely the class of cons-free programs, the class of programs with an interpretation, and the class of programs with a quasi-interpretation together with a termination proof by the product path ordering. They all correspond to PTIME. We prove that adding non-confluence to the rules leads to PTIME, NPTIME and PSPACE respectively. Our thesis is that the separation of the classes is actually a witness of the intensional properties of the initial classes of programs.
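    To make the role of non-confluence concrete (an illustrative example, not taken from the paper): in a first-order functional program given by rewrite rules, non-confluence arises as soon as two rules overlap and can rewrite the same term to different results, which amounts to a built-in choice operator:

        % Two overlapping rules with the same left-hand side: a term
        % choose(x, y) may rewrite to either argument, so evaluation is
        % non-confluent and the program computes a relation rather than a
        % function; a nondeterministic bit is then, e.g., choose(tt, ff).
        \begin{align*}
          \mathit{choose}(x, y) &\to x\\
          \mathit{choose}(x, y) &\to y
        \end{align*}

    How much power such guessing adds depends on the ambient restriction (cons-freeness, interpretations, or quasi-interpretations with the product path ordering), which is the separation the abstract describes.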