
    Composition with Target Constraints

    It is known that the composition of schema mappings, each specified by source-to-target tgds (st-tgds), can be specified by a second-order tgd (SO tgd). We consider the question of what happens when target constraints are allowed. Specifically, we consider the question of specifying the composition of standard schema mappings (those specified by st-tgds, target egds, and a weakly acyclic set of target tgds). We show that SO tgds, even with the assistance of arbitrary source constraints and target constraints, cannot in general specify the composition of two standard schema mappings. Therefore, we introduce source-to-target second-order dependencies (st-SO dependencies), which are similar to SO tgds but allow equations in the conclusion. We show that st-SO dependencies (along with target egds and target tgds) are sufficient to express the composition of every finite sequence of standard schema mappings, and further, that every st-SO dependency specifies such a composition. In addition to this expressive power, we show that st-SO dependencies enjoy other desirable properties. In particular, they have a polynomial-time chase that generates a universal solution, which can be used to find the certain answers to unions of conjunctive queries in polynomial time. It is easy to show that the composition of an arbitrary number of standard schema mappings is equivalent to the composition of only two standard schema mappings. We show that, surprisingly, the analogous result also holds for schema mappings specified by just st-tgds (no target constraints). This is proven by showing that every SO tgd is equivalent to an unnested SO tgd (one with no nesting of function symbols). Similarly, we prove unnesting results for st-SO dependencies, with the same types of consequences. Comment: This paper is an extended version of: M. Arenas, R. Fagin, and A. Nash. Composition with Target Constraints. In 13th International Conference on Database Theory (ICDT), pages 129-142, 2010
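
    A classic example (from earlier work by Fagin, Kolaitis, Popa, and Tan on SO tgds, not from this paper; the relation names are illustrative) shows why composition already needs second-order quantification without target constraints:

```latex
% M12: every employee has some manager.
\forall e\,\bigl(\mathrm{Emp}(e) \rightarrow \exists m\, \mathrm{Mgr}(e,m)\bigr)
% M23: copy the manager relation, and flag self-managers.
\forall e\,\forall m\,\bigl(\mathrm{Mgr}(e,m) \rightarrow \mathrm{HasMgr}(e,m)\bigr)
\qquad
\forall e\,\bigl(\mathrm{Mgr}(e,e) \rightarrow \mathrm{SelfMgr}(e)\bigr)
% Their composition is expressible as an SO tgd with a function symbol f:
\exists f\,\Bigl(\forall e\,\bigl(\mathrm{Emp}(e) \rightarrow \mathrm{HasMgr}(e,f(e))\bigr)
  \;\wedge\; \forall e\,\bigl(\mathrm{Emp}(e) \wedge e{=}f(e) \rightarrow \mathrm{SelfMgr}(e)\bigr)\Bigr)
```

    The st-SO dependencies introduced here additionally allow equations such as e = f(e) in the conclusion, roughly mirroring the equalities that target egds generate during the chase.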

    Descriptive Complexity of Deterministic Polylogarithmic Time and Space

    We propose logical characterizations of problems solvable in deterministic polylogarithmic time (PolylogTime) and polylogarithmic space (PolylogSpace). We introduce a novel two-sorted logic that separates the elements of the input domain from the bit positions needed to address these elements. We prove that the inflationary and partial fixed point variants of this logic capture PolylogTime and PolylogSpace, respectively. In the course of proving that our logic indeed captures PolylogTime on finite ordered structures, we introduce a variant of random-access Turing machines that can access the relations and functions of a structure directly. We investigate whether an explicit predicate for the ordering of the domain is needed in our PolylogTime logic. Finally, we present the open problem of finding an exact characterization of order-invariant queries in PolylogTime. Comment: Submitted to the Journal of Computer and System Sciences
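
    For intuition about the PolylogTime budget (an illustration, not the paper's machine model): on an ordered input of size n, a random-access machine can answer search-style queries with O(log n) probes, binary search being the canonical example. A minimal Python sketch:

```python
def contains(sorted_values, target):
    """Decide membership in a sorted sequence using O(log n)
    random accesses -- the kind of resource bound a PolylogTime
    machine has on an ordered structure."""
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # one random access per iteration
        if sorted_values[mid] == target:
            return True
        if sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False

print(contains([2, 3, 5, 7, 11, 13], 7))   # True
```

    The two sorts of the logic reflect exactly this split: domain elements on one side, and the O(log n)-bit addresses used to access them on the other.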

    Randomisation and Derandomisation in Descriptive Complexity Theory

    We study probabilistic complexity classes and questions of derandomisation from a logical point of view. For each logic L we introduce a new logic BPL, bounded-error probabilistic L, which is defined from L in a similar way as the complexity class BPP, bounded-error probabilistic polynomial time, is defined from PTIME. Our main focus lies on questions of derandomisation, and we prove that there is a query which is definable in BPFO, the probabilistic version of first-order logic, but not in Cinf, finite-variable infinitary logic with counting. This implies that many of the standard logics of finite model theory, like transitive closure logic and fixed-point logic, both with and without counting, cannot be derandomised. Similarly, we present a query on ordered structures which is definable in BPFO but not in monadic second-order logic, and a query on additive structures which is definable in BPFO but not in FO. The latter query shows that certain uniform variants of AC0 (bounded-depth polynomial-size circuits) cannot be derandomised. These results are in contrast to the general belief that most standard complexity classes can be derandomised. Finally, we note that BPIFP+C, the probabilistic version of fixed-point logic with counting, captures the complexity class BPP, even on unordered structures.
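
    By analogy with BPP, a plausible reading of the BPL acceptance condition runs as follows (a sketch; the paper's formal definition should be consulted for the details): a Boolean query Q is in BPL if there is an L-sentence phi over the input vocabulary extended by a random relation symbol R such that, for every structure A,

```latex
\Pr_{R}\bigl[\, A \models \varphi \;\Longleftrightarrow\; A \in Q \,\bigr] \;\ge\; \tfrac{2}{3},
```

    where R is drawn uniformly among relations of the appropriate arity over the domain of A.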

    An operator representation for Matsubara sums

    In the context of the imaginary-time formalism for a scalar thermal field theory, it is shown that the result of performing the sums over Matsubara frequencies associated with loop Feynman diagrams can be written, for some classes of diagrams, in terms of the action of a simple linear operator on the corresponding energy integrals of the Euclidean theory at T=0. In its simplest form, this operator depends only on the number of internal propagators of the graph. More precisely, it is shown explicitly that this thermal operator representation holds for two generic classes of diagrams, namely the two-vertex diagram with an arbitrary number of internal propagators, and the one-loop diagram with an arbitrary number of vertices. The validity of the thermal operator representation for diagrams of more complicated topologies remains an open problem. Its correctness is shown to be equivalent to the correctness of some diagrammatic rules proposed a few years ago. Comment: 4 figures; references added, minor changes in notation, final version accepted for publication
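
    For context, the simplest bosonic Matsubara sum (standard textbook material, not specific to this paper) already exhibits the pattern in which the T=0 expression gets dressed by a Bose-Einstein factor:

```latex
T \sum_{n=-\infty}^{\infty} \frac{1}{\omega_n^{2} + E^{2}}
  \;=\; \frac{1}{2E}\,\bigl(1 + 2\,n_B(E)\bigr),
\qquad
\omega_n = 2\pi n T, \qquad n_B(E) = \frac{1}{e^{E/T} - 1}.
```

    Roughly speaking, the thermal operator representation generalizes this: a linear operator assembled from such Bose-Einstein weights acts on the T=0 energy integrand of the whole diagram.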

    Improving Strategies via SMT Solving

    We consider the problem of computing numerical invariants of programs by abstract interpretation. Our method eschews two traditional sources of imprecision: (i) the use of widening operators for enforcing convergence within a finite number of iterations, and (ii) the use of merge operations (often, convex hulls) at the merge points of the control flow graph. It instead computes the least inductive invariant expressible in the domain at a restricted set of program points, and analyzes the rest of the code en bloc. We emphasize that we compute this inductive invariant precisely. For that, we extend the strategy improvement algorithm of [Gawlitza and Seidl, 2007]. If we applied their method directly, we would have to solve an exponentially sized system of abstract semantic equations, resulting in memory exhaustion. Instead, we keep the system implicit and discover strategy improvements using SAT modulo real linear arithmetic (SMT). For evaluating strategies we use linear programming. Our algorithm has low polynomial space complexity and, on contrived worst-case examples, performs exponentially many strategy improvement steps; this is unsurprising, since we show that the associated abstract reachability problem is Pi_{2}^{p}-complete.
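
    A toy sketch of the underlying max-strategy iteration (illustrative only: the paper keeps the equation system implicit and uses SMT to find improvements and linear programming to evaluate strategies, whereas the sketch below evaluates naively). For the loop "x = 0; while (x < 100) x += 1", the least upper bound u of x satisfies u = max(0, min(u + 1, 100)):

```python
# Strategies pick one argument of the max; a fixed strategy is evaluated
# by computing the least solution of the chosen subsystem (here by naive
# iteration; the paper uses linear programming instead).
ARGS = [lambda u: 0.0,                     # argument 0 of the max
        lambda u: min(u + 1.0, 100.0)]     # argument 1 of the max

def evaluate(strategy, start):
    """Least solution of u = ARGS[strategy](u) at or above start."""
    u = start
    while True:
        nxt = ARGS[strategy](u)
        if nxt <= u:
            return u
        u = nxt

strategy, u = 0, float("-inf")
while True:
    u = evaluate(strategy, u)
    # Improvement step: does some other max-argument exceed u?
    better = max(range(len(ARGS)), key=lambda i: ARGS[i](u))
    if ARGS[better](u) <= u:
        break            # no improvement: u is the least fixed point
    strategy = better

print(u)   # 100.0 -- the least inductive upper bound for x
```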

    Electrically Active Probes for Indicating Molecular Transport Processes in Porous Solids

    Microcrystalline samples of SiO2, Al2O3, MgO, NaCl, clay minerals and zeolites have been compacted between conducting electrodes. Under mechanical stress, the porous solids generate an electrical power (EMF) of the order of magnitude of μW/cm^3 if polar, dissociating molecules such as H2O, CH3OH or C2H5OH are adsorbed. The EMF of electrochemically symmetric solid cells with a large internal surface under stress was measured as a function of time after changes of the partial pressure of water in the surroundings and after variations of the temperature. The oscillating response of the EMF indicates a nonlinear, coupled system of heat, mass and electrical charge transport processes. The time-averaged EMF in the stationary state is proposed as an indicator of the internal surface coverage by polar, dissociated molecules.

    Evaluating QBF Solvers: Quantifier Alternations Matter

    We present an experimental study of the effects of quantifier alternations on the evaluation of quantified Boolean formula (QBF) solvers. The number of quantifier alternations in a QBF in prenex conjunctive normal form (PCNF) is directly related to the theoretical hardness of the respective QBF satisfiability problem in the polynomial hierarchy. We show empirically that the performance of solvers based on different solving paradigms varies substantially with the number of alternations in PCNFs. In related theoretical work, quantifier alternations have become the focus of understanding the strengths and weaknesses of various QBF proof systems implemented in solvers. Our results motivate the development of methods to evaluate orthogonal solving paradigms by taking quantifier alternations into account. This is necessary to showcase the broad range of existing QBF solving paradigms for practical QBF applications. Moreover, we highlight the potential of combining different approaches and QBF proof systems in solvers. Comment: preprint of a paper to be published at CP 2018, LNCS, Springer, including appendix
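
    The metric being varied is easy to compute from a formula in QDIMACS format (the helper below is a hypothetical illustration, not tooling from the paper): adjacent quantifier blocks of different kind ('e' vs. 'a') each contribute one alternation.

```python
def count_alternations(qdimacs_text):
    """Count quantifier alternations in a QDIMACS prefix.
    Quantifier lines start with 'e' (exists) or 'a' (forall)."""
    quants = [line.split()[0]
              for line in qdimacs_text.splitlines()
              if line[:1] in ("e", "a")]
    return sum(1 for q1, q2 in zip(quants, quants[1:]) if q1 != q2)

example = """p cnf 4 2
e 1 2 0
a 3 0
e 4 0
1 3 -4 0
-1 -3 4 0
"""
print(count_alternations(example))   # 2, i.e. a Sigma_3 prefix
```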

    The Complexity of Computing Minimal Unidirectional Covering Sets

    Given a binary dominance relation on a set of alternatives, a common thread in the social sciences is to identify subsets of alternatives that satisfy certain notions of stability. Examples can be found in areas as diverse as voting theory, game theory, and argumentation theory. Brandt and Fischer [BF08] proved that it is NP-hard to decide whether an alternative is contained in some inclusion-minimal upward or downward covering set. For both problems, we raise this lower bound to the Theta_{2}^{p} level of the polynomial hierarchy and provide a Sigma_{2}^{p} upper bound. Relatedly, we show that a variety of other natural problems regarding minimal or minimum-size covering sets are hard or complete for NP, coNP, or Theta_{2}^{p}. An important consequence of our results is that neither minimal upward nor minimal downward covering sets (even when guaranteed to exist) can be computed in polynomial time unless P=NP. This sharply contrasts with Brandt and Fischer's result that minimal bidirectional covering sets (i.e., sets that are both minimal upward and minimal downward covering sets) are polynomial-time computable. Comment: 27 pages, 7 figures
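
    To fix intuitions, the upward covering relation itself is easy to check (a sketch of the standard definition from the covering-set literature, not code from the paper; the hardness results concern minimal covering sets, not this relation):

```python
def upward_covers(dom, alternatives, x, y):
    """x upward covers y iff x dominates y and every alternative
    that dominates x also dominates y. dom is a set of pairs."""
    if (x, y) not in dom:
        return False
    return all((z, y) in dom
               for z in alternatives if (z, x) in dom)

# Tiny example: a beats b; c beats both a and b.
alts = {"a", "b", "c"}
dom = {("a", "b"), ("c", "a"), ("c", "b")}
print(upward_covers(dom, alts, "a", "b"))   # True
```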

    On the (parameterized) complexity of recognizing well-covered (r,ℓ)-graphs

    An (r,ℓ)-partition of a graph G is a partition of its vertex set into r independent sets and ℓ cliques. A graph is (r,ℓ) if it admits an (r,ℓ)-partition. A graph is well-covered if every maximal independent set is also maximum. A graph is (r,ℓ)-well-covered if it is both (r,ℓ) and well-covered. In this paper we consider two different decision problems. In the (r,ℓ)-Well-Covered Graph problem ((r,ℓ) wcg for short), we are given a graph G, and the question is whether G is an (r,ℓ)-well-covered graph. In the Well-Covered (r,ℓ)-Graph problem (wc (r,ℓ) g for short), we are given an (r,ℓ)-graph G together with an (r,ℓ)-partition of V(G) into r independent sets and ℓ cliques, and the question is whether G is well-covered. We classify most of these problems into P, coNP-complete, NP-complete, NP-hard, or coNP-hard. Only the cases wc (r,0) g for r ≥ 3 remain open. In addition, we consider the parameterized complexity of these problems for several choices of parameters, such as the size α of a maximum independent set of the input graph, its neighborhood diversity, or the number ℓ of cliques in an (r,ℓ)-partition. In particular, we show that the parameterized problem of deciding whether a general graph is well-covered parameterized by α can be reduced to the wc (0,ℓ) g problem parameterized by ℓ, and we prove that this latter problem is in XP but does not admit polynomial kernels unless coNP ⊆ NP/poly.
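
    The central property can be checked by brute force on tiny graphs (a sketch; the paper's point is precisely that deciding it is intractable in general): a graph is well-covered iff all its maximal independent sets have the same size.

```python
from itertools import combinations

def is_well_covered(adj):
    """adj: dict mapping each vertex to its set of neighbours.
    Enumerates all maximal independent sets by brute force."""
    vertices = list(adj)
    sizes = set()
    for r in range(len(vertices) + 1):
        for s in combinations(vertices, r):
            if any(v in adj[u] for u, v in combinations(s, 2)):
                continue                    # not an independent set
            if all(any(u in adj[v] for u in s)
                   for v in vertices if v not in s):
                sizes.add(len(s))           # maximal: nothing can be added
    return len(sizes) == 1

# The 4-cycle is well-covered: its maximal independent sets are
# {0,2} and {1,3}, both of size 2.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_well_covered(c4))   # True
```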

    Biabduction (and related problems) in array separation logic

    We investigate array separation logic (ASL), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of memory. This logic provides a language for compositional memory safety proofs of array programs. We focus on the biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an NP decision procedure for biabduction in ASL, and we also show that the problem of finding a consistent solution is NP-hard. Along the way, we study satisfiability and entailment in ASL, giving decision procedures and complexity bounds for both problems. We show satisfiability to be NP-complete, and entailment to be decidable with high complexity. The surprising fact that biabduction is simpler than entailment is due to the fact that, as we show, the element of choice over biabduction solutions enables us to dramatically reduce the search space.
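
    In the usual biabduction notation, the problem asks: given symbolic heaps A and B, find an antiframe M and a frame F such that

```latex
A * M \models B * F .
```

    For instance (a sketch under the convention that array(x, y) denotes the contiguous cells x through y, which may differ from the paper's exact syntax), with A = array(a, b) and B = array(a, c) where a <= c < b, one consistent solution is M = emp and F = array(c+1, b), since array(a, b) splits as array(a, c) * array(c+1, b).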