
    Subsumption Algorithms for Three-Valued Geometric Resolution

    In our implementation of geometric resolution, the most costly operation is subsumption testing (or matching): one has to decide, for a three-valued geometric formula, whether the formula is false in a given interpretation. The formula contains only atoms with variables, equality, and existential quantifiers; the interpretation contains only atoms with constants. Because the atoms have no term structure, matching for geometric resolution is hard. We translate the matching problem into a generalized constraint satisfaction problem and discuss several approaches for solving it efficiently: one direct algorithm and two translations to propositional SAT. After that, we study filtering techniques based on local consistency checking; such techniques can refute a large percentage of generalized constraint satisfaction problems a priori. Finally, we adapt the matching algorithms so that they find solutions that use a minimal subset of the interpretation; this adaptation can be combined with any matching algorithm. The techniques presented in this paper may have applications in constraint solving independent of geometric resolution.

    Comment: This version was revised on 18.05.201
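
    As an illustration of the matching problem described above, the sketch below decides whether a conjunction of atoms over variables can be mapped into a set of ground facts by plain backtracking, preceded by a crude local-consistency pre-filter. It is a minimal sketch, not the paper's algorithm: the tuple encoding, the names `match` and `can_possibly_match`, and the uppercase-variable convention are assumptions made here for illustration.

```python
# Minimal sketch (assumed encoding, not the paper's algorithm): atoms are
# (predicate, arg, ...) tuples; strings starting with an uppercase letter
# are variables, everything else is a constant.

def can_possibly_match(atoms, interpretation):
    """Crude local-consistency filter: every atom needs at least one fact
    with the same predicate and arity, otherwise matching must fail."""
    return all(any(f[0] == a[0] and len(f) == len(a) for f in interpretation)
               for a in atoms)

def match(atoms, interpretation):
    """Backtracking search for a substitution that maps every atom to a
    fact of the interpretation; returns the substitution or None."""
    def extend(subst, remaining):
        if not remaining:
            return subst
        atom = remaining[0]
        for fact in interpretation:
            if fact[0] != atom[0] or len(fact) != len(atom):
                continue
            new, ok = dict(subst), True
            for a, c in zip(atom[1:], fact[1:]):
                if isinstance(a, str) and a[:1].isupper():   # variable
                    if new.setdefault(a, c) != c:
                        ok = False
                        break
                elif a != c:                                  # constant
                    ok = False
                    break
            if ok:
                result = extend(new, remaining[1:])
                if result is not None:
                    return result
        return None

    if not can_possibly_match(atoms, interpretation):
        return None   # refuted a priori, no search needed
    return extend({}, list(atoms))

# The interpretation contains only ground atoms, the formula only variables.
I = {("edge", "a", "b"), ("edge", "b", "c"), ("red", "b")}
print(match([("edge", "X", "Y"), ("red", "Y")], I))   # {'X': 'a', 'Y': 'b'}
print(match([("edge", "X", "X")], I))                 # None: no match
```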

    Induction of Interpretable Possibilistic Logic Theories from Relational Data

    The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which makes them considerably more interpretable than models obtained by, e.g., neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways, and the weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.

    Comment: Longer version of a paper appearing in IJCAI 201
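
    The stratified reading described above lends itself to a compact illustration: a query is derivable with certainty a whenever the formulas weighted at least a classically entail it. The toy bird/penguin theory, the brute-force entailment check, and all names below are assumptions made for this sketch; it is not the learning algorithm of the paper.

```python
from itertools import product

VARS = ["bird", "penguin", "flies"]

def entails(formulas, query):
    """Brute-force propositional entailment over the variables in VARS."""
    for values in product([False, True], repeat=len(VARS)):
        world = dict(zip(VARS, values))
        if all(f(world) for f in formulas) and not query(world):
            return False
    return True

# Stratified theory: 'penguins are birds' is fully certain, 'penguins do
# not fly' is quite certain, 'birds fly' is only a weak default.
theory = [
    (1.0, lambda w: (not w["penguin"]) or w["bird"]),
    (0.8, lambda w: (not w["penguin"]) or not w["flies"]),
    (0.4, lambda w: (not w["bird"]) or w["flies"]),
]

def certainty(theory, query):
    """Highest level a such that the formulas of weight >= a entail the
    query. (For brevity this sketch ignores inconsistency handling,
    which full possibilistic inference also requires.)"""
    for a in sorted({w for w, _ in theory}, reverse=True):
        if entails([f for w, f in theory if w >= a], query):
            return a
    return 0.0

# 'penguins do not fly' is derivable at level 0.8 but not at level 1.0
print(certainty(theory, lambda w: (not w["penguin"]) or not w["flies"]))
```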

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically subject to strong, unrealistic limitations, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences, but also put such systems on weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together some recent paradigms from logic, probabilistic inference, and database theory.
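
    The tuple-independence assumption mentioned above is easy to make concrete. Under it, every tuple is an independent Bernoulli event, and for a safe query such as Q = ∃x R(x) ∧ S(x) the answer probability has a closed product form. The relations, probabilities, and function name below are invented for this sketch; it shows the baseline model the thesis sets out to relax, not the thesis's own algorithms.

```python
# Tuple-independent probabilistic database: each tuple holds with its own
# probability, independently of all others (illustrative data).
R = {"alice": 0.9, "bob": 0.5}                  # P(R(a)) per constant
S = {"alice": 0.4, "bob": 0.8, "eve": 0.7}      # P(S(a)) per constant

def prob_exists_join(R, S):
    """P(exists x . R(x) and S(x)): the witness events for distinct
    constants involve disjoint tuples, hence are independent."""
    p_none = 1.0
    for a in R.keys() & S.keys():
        p_none *= 1.0 - R[a] * S[a]             # a fails to witness Q
    return 1.0 - p_none

print(prob_exists_join(R, S))   # 1 - (1 - 0.36) * (1 - 0.40) ≈ 0.616
```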

    Focusing ATMS Problem-Solving: A Formal Approach

    The Assumption-based Truth Maintenance System (ATMS) is a general and powerful problem-solving tool in AI. Unfortunately, its generality usually entails a high computational cost. In this paper, we study how a general notion of cost function can be incorporated into the design of an algorithm for focusing the ATMS, called BF-ATMS. The BF-ATMS algorithm explores a search space of size polynomial in the number of assumptions, even for problems that provably have exponential-size labels. Experimental results indicate significant speedups over the standard ATMS for such problems. In addition to its improved efficiency, the BF-ATMS algorithm retains the multiple-context capability of an ATMS and the important properties of consistency, minimality, and soundness, as well as bounded completeness. The usefulness of the new algorithm is demonstrated by its application to consistency-based diagnosis, where dramatic efficiency improvements over the standard solution technique are obtained.
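
    A small sketch makes the focusing idea concrete. In an ATMS, a node's label is a set of minimal assumption environments; propagating a justification cross-unions the antecedents' labels, which is where labels can blow up exponentially. The code below (illustrative only, not the BF-ATMS algorithm) shows that step with a cost bound pruning expensive environments, the role a cost function plays in focusing.

```python
from itertools import product

def combine(labels, cost, bound):
    """Cross-union the antecedent labels of a justification, keeping only
    environments within the cost bound, then restore minimality."""
    envs = set()
    for combo in product(*labels):
        env = frozenset().union(*combo)
        if cost(env) <= bound:
            envs.add(env)
    # keep only minimal environments (the standard ATMS label invariant)
    return {e for e in envs if not any(o < e for o in envs)}

cost = len   # e.g. cost of an environment = number of assumptions in it
label_p = {frozenset({"A1"}), frozenset({"A2", "A3"})}
label_q = {frozenset({"A4"}), frozenset({"A2"})}

# node r is justified by p and q; its label combines label_p and label_q,
# but environments costing more than the bound are pruned away
print(combine([label_p, label_q], cost, bound=2))
```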

    On Polynomial Sized MDP Succinct Policies

    Policies of Markov Decision Processes (MDPs) determine the next action to execute from the current state and, possibly, the history (the past states). When the number of states is large, succinct representations are often used to compactly represent both the MDPs and the policies in a reduced amount of space. In this paper, some problems related to the size of succinctly represented policies are analyzed. Namely, it is shown that some MDPs have policies that can only be represented in space super-polynomial in the size of the MDP, unless the polynomial hierarchy collapses. This fact motivates the study of the problem of deciding whether a given MDP has a policy of a given size and reward. Since some algorithms for MDPs work by finding a succinct representation of the value function, the problem of deciding the existence of a succinct representation of a value function of a given size and reward is also considered.
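
    To make the explicit-versus-succinct distinction concrete: with n state bits there are 2^n states, so a tabular policy is exponential in n, while a succinct policy is any small program from bit vectors to actions. The parity rule below is an invented stand-in for such a program; the paper's point is that some MDPs admit no small program of this kind unless the polynomial hierarchy collapses.

```python
N_BITS = 20   # the MDP has 2**20 states, far too many to tabulate

def succinct_policy(state_bits):
    """A constant-size rule standing in for a compactly represented
    policy: choose the action from the parity of the set bits."""
    return "left" if sum(state_bits) % 2 == 0 else "right"

state = [1, 0, 1, 1, 0] + [0] * (N_BITS - 5)
print(succinct_policy(state))   # 'right' (three bits are set)

# An explicit representation of the same policy would need one table
# entry per state: 2**20 rows versus a few lines of code.
```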

    Logical settings for concept learning from incomplete examples in First Order Logic

    We investigate concept learning from incomplete examples. Our first purpose is to discuss to what extent logical learning settings have to be modified in order to cope with data incompleteness. More precisely, we are interested in extending the learning from interpretations setting introduced by L. De Raedt, which extends the classical propositional (or attribute-value) concept learning from examples framework to relational representations. We are inspired here by ideas presented by H. Hirsh in a work extending the Version Space inductive paradigm to incomplete data. H. Hirsh proposes to slightly modify the notion of solution when dealing with incomplete examples: a solution has to be a hypothesis compatible with all pieces of information concerning the examples. We identify two main classes of incompleteness. First, uncertainty deals with our state of knowledge concerning an example. Second, generalization (or abstraction) deals with what part of the description of the example is sufficient for the learning purpose. These two main sources of incompleteness can be mixed up when only part of the useful information is known. We discuss a general learning setting, referred to as "learning from possibilities", that formalizes these ideas, and then present a more specific learning setting, referred to as "assumption-based learning", that copes with examples whose uncertainty can be reduced by considering contextual information outside of the proper description of the examples. Assumption-based learning is illustrated on recent work concerning the prediction of a consensus secondary structure common to a set of RNA sequences.
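
    The modified notion of solution can be sketched directly: an incomplete example stands for the set of its completions, and a candidate hypothesis remains compatible with a positive example as long as at least one completion satisfies it. Everything below, the attribute names, the candidate concept, and the one-completion semantics, is an invented propositional toy, not the relational setting of the paper.

```python
from itertools import product

def completions(example, attributes):
    """Enumerate total examples extending a partial one (None = unknown)."""
    unknown = [a for a in attributes if example.get(a) is None]
    for values in product([False, True], repeat=len(unknown)):
        full = dict(example)
        full.update(zip(unknown, values))
        yield full

def compatible(hypothesis, example, attributes):
    """A hypothesis is compatible with an incomplete positive example if
    some completion of the example satisfies it."""
    return any(hypothesis(c) for c in completions(example, attributes))

ATTRS = ["has_wings", "lays_eggs", "flies"]
h = lambda e: e["has_wings"] and e["lays_eggs"]   # candidate concept

# 'lays_eggs' is unobserved for this positive example
e = {"has_wings": True, "lays_eggs": None, "flies": True}
print(compatible(h, e, ATTRS))   # True: some completion satisfies h
```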

    Stochastic Constraint Programming

    To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number of complete algorithms and approximation procedures. Finally, we discuss a number of extensions of stochastic constraint programming that relax various assumptions, such as the independence between stochastic variables, and compare the framework with other approaches to decision making under uncertainty.

    Comment: Proceedings of the 15th European Conference on Artificial Intelligence
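
    A one-stage instance shows the flavor of the formalism: a decision variable is set before a stochastic variable is observed, and a chance constraint must hold with probability at least theta under the stochastic variable's distribution. The production/demand numbers, threshold, and names below are invented for this sketch and are not taken from the paper.

```python
# One-stage stochastic constraint program (illustrative): choose how many
# units to produce before the stochastic demand is revealed.
DECISIONS = range(0, 6)                       # decision variable domain
DEMAND = {1: 0.2, 2: 0.3, 3: 0.3, 4: 0.2}     # stochastic variable

def satisfaction_prob(x):
    """Probability that production x covers the realized demand."""
    return sum(p for d, p in DEMAND.items() if x >= d)

theta = 0.8
feasible = [x for x in DECISIONS if satisfaction_prob(x) >= theta]
print(min(feasible))   # smallest production meeting the chance constraint
```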