
    Computing only minimal answers in disjunctive deductive databases

    Full text link
    A method is presented for computing minimal answers in disjunctive deductive databases under the disjunctive stable model semantics. Such answers are constructed by repeatedly extending partial answers. Our method is complete (in that every minimal answer can be computed) and does not admit redundancy (in the sense that every partial answer generated can be extended to a minimal answer), whence no non-minimal answer is generated. For stratified databases, the method does not (necessarily) require the computation of models of the database in their entirety. Compilation is proposed as a tool by which problems relating to computational efficiency and the non-existence of disjunctive stable models can be overcome. The extension of our method to other semantics is also considered. Comment: 48 pages
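
    The abstract leaves the notion of a minimal (disjunctive) answer implicit, so the sketch below makes it concrete on an invented toy program. It is not the paper's partial-answer method: for a small positive propositional program it simply enumerates the minimal models (which coincide with the disjunctive stable models when no negation is present) and then prints the subset-minimal disjunctions of query atoms that hold in all of them. The program, atoms, and query are assumptions chosen only for illustration.

        // Hypothetical C++ sketch: brute-force minimal answers for the toy
        // positive program  { p(a) v p(b).   p(c) :- p(a). }  and query p(X).
        // Not the paper's method; everything here is invented for illustration.
        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <vector>

        using Set = std::uint32_t;                          // bitmask over atoms
        static bool subset(Set a, Set b) { return (a & ~b) == 0; }

        int main() {
            std::vector<std::string> atoms = {"p(a)", "p(b)", "p(c)"};
            struct Rule { Set head, body; };                // head <- body
            std::vector<Rule> rules = {{0b011, 0b000},      // p(a) v p(b).
                                       {0b100, 0b001}};     // p(c) :- p(a).

            // Models: every rule whose body is contained in the interpretation
            // must have its head intersect the interpretation.
            std::vector<Set> models, minimal;
            for (Set m = 0; m < (1u << atoms.size()); ++m) {
                bool ok = true;
                for (const Rule& r : rules)
                    if (subset(r.body, m) && (r.head & m) == 0) ok = false;
                if (ok) models.push_back(m);
            }
            // Keep the subset-minimal models (= stable models for positive programs).
            for (Set m : models) {
                bool is_min = true;
                for (Set n : models) if (n != m && subset(n, m)) is_min = false;
                if (is_min) minimal.push_back(m);
            }
            // A disjunctive answer is a set of query atoms intersecting every
            // minimal model; print only the subset-minimal ones.
            std::vector<Set> answers;
            for (Set d = 1; d < (1u << atoms.size()); ++d) {
                bool in_all = true;
                for (Set m : minimal) if ((d & m) == 0) in_all = false;
                if (in_all) answers.push_back(d);
            }
            for (Set d : answers) {
                bool is_min = true;
                for (Set e : answers) if (e != d && subset(e, d)) is_min = false;
                if (!is_min) continue;
                std::string out;
                for (std::size_t i = 0; i < atoms.size(); ++i)
                    if (d & (1u << i)) out += (out.empty() ? "" : " v ") + atoms[i];
                std::cout << out << '\n';   // prints "p(a) v p(b)" and "p(b) v p(c)"
            }
            return 0;
        }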

    Parameterized Verification of Graph Transformation Systems with Whole Neighbourhood Operations

    Full text link
    We introduce a new class of graph transformation systems in which rewrite rules can be guarded by universally quantified conditions on the neighbourhood of nodes. These conditions are defined via special graph patterns which may be transformed by the rule as well. For the new class of graph rewrite rules, we provide a symbolic procedure working on minimal representations of upward-closed sets of configurations. We prove correctness and effectiveness of the procedure by a categorical presentation of rewrite rules as well as the involved order, and using results for well-structured transition systems. We apply the resulting procedure to the analysis of the Distributed Dining Philosophers protocol on an arbitrary network structure. Comment: Extended version of a submission accepted at the RP'14 Workshop
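
    As a rough, hedged illustration of the kind of guard the abstract describes (with invented node states, not the paper's categorical formalism), the sketch below allows a rewrite step at a node only if a condition holds for every node in its neighbourhood, in the spirit of the Dining Philosophers analysis mentioned above.

        // Hypothetical sketch, not the paper's formalism: a rewrite step at a
        // node guarded by a universally quantified condition on its neighbourhood.
        #include <string>
        #include <unordered_map>
        #include <vector>

        struct Graph {
            std::unordered_map<int, std::string> label;     // node -> state
            std::unordered_map<int, std::vector<int>> adj;  // node -> neighbours
        };

        // The guard quantifies over the whole neighbourhood of v: every
        // adjacent fork must currently be "free".
        bool guard_holds(const Graph& g, int v) {
            for (int n : g.adj.at(v))
                if (g.label.at(n) != "free") return false;
            return true;
        }

        // The rule rewrites the node and its neighbourhood in one step.
        void try_eat(Graph& g, int v) {
            if (g.label.at(v) == "hungry" && guard_holds(g, v)) {
                g.label[v] = "eating";
                for (int n : g.adj.at(v)) g.label[n] = "taken";
            }
        }

        int main() {
            Graph g;
            g.label = {{0, "hungry"}, {1, "free"}, {2, "free"}};
            g.adj = {{0, {1, 2}}, {1, {0}}, {2, {0}}};
            try_eat(g, 0);                  // guard holds, so node 0 starts eating
            return g.label.at(0) == "eating" ? 0 : 1;
        }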

    Reasoning about Explanations for Negative Query Answers in DL-Lite

    Full text link
    In order to meet usability requirements, most logic-based applications provide explanation facilities for reasoning services. This holds also for Description Logics, where research has focused on the explanation of both TBox reasoning and, more recently, query answering. Besides explaining the presence of a tuple in a query answer, it is important to also explain why a given tuple is missing. We address the latter problem for instance and conjunctive query answering over DL-Lite ontologies by adopting abductive reasoning; that is, we look for additions to the ABox that force a given tuple to be in the result. As reasoning tasks we consider existence and recognition of an explanation, and relevance and necessity of a given assertion for an explanation. We characterize the computational complexity of these problems for arbitrary, subset-minimal, and cardinality-minimal explanations.
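
    A tiny, hedged example of the abductive setting (the concept names, individuals, and TBox axiom are invented, and this is not a DL-Lite reasoner): with the axiom Student ⊑ Person and the ABox {Student(ann)}, the query Person(x) does not return bob; the sketch below searches for single ABox assertions about bob whose addition forces Person(bob) to be entailed, yielding the explanations Person(bob) and Student(bob).

        // Hypothetical toy abduction over atomic concept inclusions; invented
        // names, not DL-Lite machinery.
        #include <iostream>
        #include <set>
        #include <string>
        #include <utility>
        #include <vector>

        using Assertion = std::pair<std::string, std::string>;         // (concept, individual)
        using Tbox = std::vector<std::pair<std::string, std::string>>; // C ⊑ D pairs

        // Saturate the ABox under the inclusions C ⊑ D.
        std::set<Assertion> closure(std::set<Assertion> abox, const Tbox& tbox) {
            bool changed = true;
            while (changed) {
                changed = false;
                for (const auto& [c, d] : tbox)
                    for (const auto& [cname, ind] : std::set<Assertion>(abox))
                        if (cname == c && abox.insert({d, ind}).second) changed = true;
            }
            return abox;
        }

        int main() {
            Tbox tbox = {{"Student", "Person"}};
            std::set<Assertion> abox = {{"Student", "ann"}};
            // Try single-assertion additions about the missing individual bob.
            for (const std::string& cand : {"Person", "Student", "Course"}) {
                auto extended = abox;
                extended.insert({cand, "bob"});
                if (closure(extended, tbox).count({"Person", "bob"}))
                    std::cout << "explanation: " << cand << "(bob)\n";
            }
            // Prints: explanation: Person(bob)   explanation: Student(bob)
            return 0;
        }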

    Tuning the Level of Concurrency in Software Transactional Memory: An Overview of Recent Analytical, Machine Learning and Mixed Approaches

    Get PDF
    Synchronization transparency offered by Software Transactional Memory (STM) must not come at the expense of run-time efficiency, which demands that the STM designer include mechanisms properly oriented to performance and other quality indexes. In particular, one core issue in STM is to exploit parallelism while avoiding thrashing phenomena due to excessive transaction rollbacks, caused by excessively high levels of contention on logical resources, namely concurrently accessed data portions. One way to address run-time efficiency is to dynamically determine the best-suited level of concurrency (number of threads) for running the application (or specific application phases) on top of the STM layer. If the level of concurrency is too low, parallelism can be hampered. Conversely, over-dimensioning the concurrency level may give rise to the aforementioned thrashing phenomena caused by excessive data contention, which in turn also reduces energy efficiency. In this chapter we overview a set of recent techniques aimed at building “application-specific” performance models that can be exploited to dynamically tune the level of concurrency to the best-suited value. Although these techniques share some base concepts in modeling system performance versus the degree of concurrency, they rely on disparate methods, such as machine learning or analytic methods (or combinations of the two), and achieve different tradeoffs between the precision of the performance model and the latency of model instantiation. Implications of the different tradeoffs in real-life scenarios are also discussed.
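
    The sketch below is a minimal, hypothetical hill-climbing regulator, not one of the chapter's analytical or machine-learning models: it adjusts the thread count one step at a time and reverses direction whenever the measured committed-transaction throughput degrades, which is the simplest way to react to the thrashing effect described above. The class name, sampling interface, and throughput figures are assumptions.

        // Hypothetical concurrency regulator for an STM runtime; invented API,
        // not taken from the chapter.
        #include <cstddef>
        #include <iostream>

        class ConcurrencyRegulator {
        public:
            explicit ConcurrencyRegulator(std::size_t max_threads)
                : max_threads_(max_threads) {}

            // observed_throughput: commits/sec measured at the current level.
            // Returns the thread count to use for the next sampling interval.
            std::size_t update(double observed_throughput) {
                if (observed_throughput < last_throughput_)
                    direction_ = -direction_;          // got worse: reverse direction
                last_throughput_ = observed_throughput;
                if (direction_ > 0 && threads_ < max_threads_) ++threads_;
                else if (direction_ < 0 && threads_ > 1) --threads_;
                return threads_;
            }

        private:
            std::size_t max_threads_;
            std::size_t threads_ = 1;
            double last_throughput_ = 0.0;
            int direction_ = +1;
        };

        int main() {
            ConcurrencyRegulator reg(16);
            // Fake throughput samples; a real STM runtime would measure these.
            for (double s : {100.0, 180.0, 240.0, 230.0, 245.0, 250.0})
                std::cout << "threads -> " << reg.update(s) << '\n';
            return 0;
        }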

    Super Logic Programs

    Full text link
    The Autoepistemic Logic of Knowledge and Belief (AELB) is a powerful nonmonotonic formalism introduced by Teodor Przymusinski in 1994. In this paper, we specialize it to a class of theories called `super logic programs'. We argue that these programs form a natural generalization of standard logic programs. In particular, they allow disjunctions and default negation of arbitrary positive objective formulas. Our main results are two new and powerful characterizations of the static semantics of these programs, one syntactic and one model-theoretic. The syntactic fixed-point characterization is much simpler than the fixed-point construction of the static semantics for arbitrary AELB theories. The model-theoretic characterization via Kripke models allows one to construct finite representations of the inherently infinite static expansions. Both characterizations can be used as the basis of algorithms for query answering under the static semantics. We describe a query-answering interpreter for super programs which we developed based on the model-theoretic characterization and which is available on the web. Comment: 47 pages, revised version of the paper submitted 10/200

    The C++0x "Concepts" Effort

    Full text link
    C++0x is the working title for the revision of the ISO standard of the C++ programming language that was originally planned for release in 2009 but was delayed to 2011. The largest language extension in C++0x was "concepts", that is, a collection of features for constraining template parameters. In September of 2008, the C++ standards committee voted the concepts extension into C++0x, but then in July of 2009, the committee voted the concepts extension back out of C++0x. This article is my account of the technical challenges and debates within the "concepts" effort in the years 2003 to 2009. To provide some background, the article also describes the design space for constrained parametric polymorphism, or what is colloquially known as constrained generics. While this article is meant to be generally accessible, the writing is aimed toward readers with a background in functional programming and programming language theory. This article grew out of a lecture at the Spring School on Generic and Indexed Programming at the University of Oxford, March 2010.
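
    For readers unfamiliar with constrained generics, here is a minimal sketch using the concepts feature that eventually shipped in C++20, which differs in detail from the C++0x design discussed in the article; the concept and function names are invented for illustration.

        // Minimal constrained-generics sketch; the names are invented and the
        // syntax is C++20 concepts, not the C++0x proposal described above.
        #include <concepts>
        #include <iostream>

        // `Addable` constrains a template parameter: T must support a + b
        // with a result convertible back to T.
        template <typename T>
        concept Addable = requires(T a, T b) {
            { a + b } -> std::convertible_to<T>;
        };

        // sum() accepts only types satisfying Addable, so constraint violations
        // are reported at the call site instead of deep inside the template body.
        template <Addable T>
        T sum(T a, T b) { return a + b; }

        int main() {
            std::cout << sum(2, 3) << '\n';      // OK: int satisfies Addable
            // sum(std::cout, std::cout);        // would be rejected by the constraint
            return 0;
        }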