    A cookbook for temporal conceptual data modelling with description logic

    We design temporal description logics suitable for reasoning about temporal conceptual data models and investigate their computational complexity. Our formalisms are based on DL-Lite logics with three types of concept inclusions (ranging from atomic concept inclusions and disjointness to the full Booleans), as well as cardinality constraints and role inclusions. In the temporal dimension, they capture future and past temporal operators on concepts, flexible and rigid roles, the operators 'always' and 'some time' on roles, data assertions for particular moments of time, and global concept inclusions. The logics are interpreted over the Cartesian products of object domains and the flow of time (Z,<), satisfying the constant domain assumption. We prove that the most expressive of our temporal description logics (which can capture lifespan cardinalities and either qualitative or quantitative evolution constraints) turn out to be undecidable. However, by omitting some of the temporal operators on concepts/roles or by restricting the form of concept inclusions, we obtain logics whose complexity ranges between PSpace and NLogSpace. These positive results were obtained by reduction to various clausal fragments of propositional temporal logic, which opens a way to employ propositional or first-order temporal provers for reasoning about temporal data models.
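
    To make the flavour of such constraints concrete, here is a minimal sketch of two global concept inclusions in a simplified temporal DL notation (the concept names are invented for illustration and the syntax is schematic, not the paper's):

        % every employee was hired at some past moment
        \mathit{Employee} \sqsubseteq \Diamond_P\, \mathit{Hired}
        % once a manager, always a manager (a qualitative evolution constraint)
        \mathit{Manager} \sqsubseteq \Box_F\, \mathit{Manager}

    Here \Diamond_P reads 'some time in the past' and \Box_F 'always in the future', evaluated over the flow of time (Z,<) under the constant domain assumption, as described above.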

    Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning under structural restrictions. All these problems involve two tasks: (i) identifying the structure in the input as required by the restriction, and (ii) using the identified structure to solve the reasoning task efficiently. We show that for most of the considered problems, task (i) admits a polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, in contrast to task (ii), which does not admit such a reduction to a problem kernel of polynomial size, subject to a complexity theoretic assumption. As a notable exception, we show that the consistency problem for the AtMost-NValue constraint admits a polynomial kernel consisting of a quadratic number of variables and domain values. Our results provide firm worst-case guarantees and theoretical boundaries for the performance of polynomial-time preprocessing algorithms for the considered problems.
    Comment: arXiv admin note: substantial text overlap with arXiv:1104.2541, arXiv:1104.556
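
    To illustrate what a reduction to a polynomial problem kernel means, the sketch below implements the classic Buss kernel for Vertex Cover; this is a textbook example chosen purely for illustration, not one of the problems analysed in the paper.

        from collections import Counter

        def buss_kernel(edges, k):
            """Classic Buss kernelization for Vertex Cover (illustrative).

            Shrinks (edges, k) in polynomial time to an equivalent instance
            with at most k'^2 edges, or returns None if no vertex cover of
            size <= k exists.
            """
            edges = {frozenset(e) for e in edges}
            while k >= 0:
                degree = Counter(v for e in edges for v in e)
                high = next((v for v, d in degree.items() if d > k), None)
                if high is None:
                    break
                # A vertex of degree > k belongs to every cover of size <= k,
                # so take it into the cover and delete its incident edges.
                edges = {e for e in edges if high not in e}
                k -= 1
            # With maximum degree <= k, a cover of size k covers <= k*k edges.
            if k < 0 or len(edges) > k * k:
                return None
            return edges, k

    For instance, buss_kernel([(1, 2), (1, 3), (1, 4)], 1) forces vertex 1 into the cover and returns the empty kernel (set(), 0).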

    Dissemination and adoption of bottom-up agriculture to improve soil fertility in Africa: An interdisciplinary approach

    Soil fertility is at stake at a global scale, putting pressure on food security, poverty alleviation and environmental protection, under scenarios of climate change that in most cases aggravate the threat. In sub-Saharan Africa, a combination of depleted soils and population growth puts particular pressure on smallholder farmers and society. Their capacity to innovate in a social, economic, political and cultural context is seen as decisive to reverse the trend of declining soil fertility. However, many technologies with the potential to protect, maintain and build up soil fertility are hardly used by small-scale farmers, raising the urgent question of why they choose not to use them. Exploring and understanding the constraints and complexity of the social systems interacting with the implied institutional dynamics are essential steps in designing appropriate agricultural innovations that are scalable and adoptable. An inter- and transdisciplinary approach lies at the heart of the project ORM4Soil (Organic Resource Management for Soil Fertility; www.orm4soil.net). We are combining qualitative and quantitative methods from agronomy, sociology and communication sciences in order to bring soil-fertility-enhancing technologies and their adoption to the center of the decision-making processes of farmers as well as local and regional institutions. At local and regional innovation platforms, stakeholders from business, government, academia and farmer organizations are discussing the outcomes of agronomic trials and sociological research. We expect to build bridges between the needs and concerns of farmers, relevant segments of society and policymaking, with the common goal of enhancing soil fertility.

    Global Numerical Constraints on Trees

    We introduce a logical foundation for reasoning on tree structures with constraints on the number of node occurrences. Related formalisms can only express occurrence constraints on particular tree regions, for instance the children of a given node. By contrast, the logic introduced in the present work can concisely express numerical bounds on any region, for instance descendants or ancestors. We prove that the logic is decidable in single exponential time even if the numerical constraints are in binary form. We also illustrate the use of the logic for describing numerical constraints on multi-directional path queries on XML documents. Furthermore, numerical restrictions on regular languages (XML schemas) can also be concisely described by the logic. This implies a characterization of decidable counting extensions of XPath queries and XML schemas. Moreover, as the logic is closed under negation, it can be used as an optimal reasoning framework for testing emptiness, containment and equivalence.
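
    As a schematic illustration (the labels are invented and the notation is simplified, not the logic's actual syntax), a constraint of this kind could select the nodes labelled order that have at least three item descendants and no discount ancestor:

        \varphi = \mathit{order} \wedge (\#_{\downarrow^*}\, \mathit{item} \ge 3) \wedge (\#_{\uparrow^*}\, \mathit{discount} = 0)

    where \#_R counts the nodes reachable in the region R (here: descendants \downarrow^* and ancestors \uparrow^*), and the bounds may be written in binary without leaving single exponential time, as stated above.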

    Limits of Preprocessing

    We present a first theoretical analysis of the power of polynomial-time preprocessing for important combinatorial problems from various areas in AI. We consider problems from Constraint Satisfaction, Global Constraints, Satisfiability, Nonmonotonic and Bayesian Reasoning. We show that, subject to a complexity theoretic assumption, none of the considered problems can be reduced by polynomial-time preprocessing to a problem kernel whose size is polynomial in a structural problem parameter of the input, such as induced width or backdoor size. Our results provide a firm theoretical boundary for the performance of polynomial-time preprocessing algorithms for the considered problems.
    Comment: This is a slightly longer version of a paper that appeared in the proceedings of AAAI 201
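
    For reference, the textbook notion behind these statements (a standard definition, restated here for convenience): a parameterized problem Q admits a polynomial kernel if there is a polynomial-time algorithm K and a polynomial p with

        K : (x, k) \mapsto (x', k') \quad \text{such that} \quad (x, k) \in Q \iff (x', k') \in Q \quad \text{and} \quad |x'| + k' \le p(k).

    The results above rule out such a K for the listed problems, with the output size bounded by a polynomial in the structural parameter, unless the stated complexity theoretic assumption fails.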

    Distribution Constraints: The Chase for Distributed Data

    This paper introduces a declarative framework to specify and reason about distributions of data over computing nodes in a distributed setting. More specifically, it proposes distribution constraints, which are tuple- and equality-generating dependencies (tgds and egds) extended with node variables ranging over computing nodes. In particular, they can express co-partitioning constraints and constraints about range-based data distributions by using comparison atoms. The main technical contribution is the study of the implication problem for distribution constraints. While implication is undecidable in general, relevant fragments of so-called data-full constraints are exhibited for which the corresponding implication problems are complete for EXPTIME, PSPACE and NP. These results yield bounds on deciding parallel-correctness for conjunctive queries in the presence of distribution constraints.
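
    As a schematic example (relation names invented; the notation with node variables is simplified rather than quoted from the paper), a co-partitioning constraint can be written as a tgd whose atoms are annotated with a node variable \nu:

        R(x, y)@\nu \wedge S(x, z) \rightarrow S(x, z)@\nu

    read: whenever a node \nu stores an R-tuple with key x, it must also store every S-tuple with the same key, so R and S are co-partitioned on x.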

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches for uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms, generic templates with an associated operational semantics, from which practical methods for deriving reliable solutions are obtained.
    Comment: Revised version
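
    A minimal toy sketch of the underlying idea (the constraint and values are invented, and this brute-force enumeration is not the paper's resolution forms): with uncertain data, the union of the solution sets over all realisations reliably encloses the user's true solution set, while the intersection keeps only the solutions that are certain under every realisation.

        from itertools import product

        # Toy uncertain CSP: x + y <= c, with x, y in {0,...,3} and the data
        # value c only known to lie in {4, 5, 6}.
        DOMAIN = range(4)
        C_VALUES = [4, 5, 6]

        def solutions(c):
            """Solution set of the CSP for one realisation of the data."""
            return {(x, y) for x, y in product(DOMAIN, repeat=2) if x + y <= c}

        # Reliable enclosure: contains the true solution set whatever c is.
        closure = set().union(*(solutions(c) for c in C_VALUES))
        # Certain solutions: feasible under every realisation of c.
        certain = set.intersection(*(solutions(c) for c in C_VALUES))

        assert certain <= closure
        print(len(certain), len(closure))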