
    Towards the implementation of a preference- and uncertain-aware solver using answer set programming

    Logic programs with possibilistic ordered disjunction (LPPODs) are a recently defined logic-programming framework based on logic programs with ordered disjunction and possibilistic logic. The framework inherits the properties of these formalisms and, by merging them, supports reasoning that is nonmonotonic, preference-aware, and uncertainty-aware. The LPPODs syntax makes it possible to specify (1) preferences in a qualitative way, and (2) necessity values expressing the certainty of program clauses. As a result, at the semantic level, preferences and necessity values can be used to specify an order among program solutions. This class of programs is therefore well suited to representing decision problems in which a best option has to be chosen taking into account both preferences and necessity measures about the available information. In this paper we study the computation and the complexity of the LPPODs semantics and describe an algorithm for its implementation following an Answer Set Programming approach. We describe several decision scenarios in which the solver can be used to choose the best solutions by checking whether one outcome is possibilistically preferred over another, considering preferences and uncertainty at the same time.
    Postprint (published version)
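
    The following Python sketch illustrates the kind of comparison such a semantics performs: two hypothetical candidate solutions are compared rule by rule, where each rule carries a necessity value and, per candidate, a satisfaction degree recording which option of its ordered-disjunction head was chosen. The rules, degrees, and the Pareto-style test are invented for illustration and deliberately simplify the LPPODs preference relation defined in the paper.

    # Toy comparison of candidate solutions of a program whose rules
    # combine ordered disjunction with necessity values.  A minimal
    # sketch under simplifying assumptions, NOT the exact LPPODs
    # preference relation.

    # (necessity, degree in candidate A, degree in candidate B);
    # degree 1 means the most-preferred head option holds, 2 the next.
    rules = [
        (0.9, 1, 2),   # highly certain rule: A picks the best option
        (0.6, 2, 1),   # less certain rule: B picks the best option
    ]

    def pareto_prefers(i, j, alpha):
        """True if candidate i is Pareto-preferred to candidate j,
        counting only rules whose necessity is at least alpha."""
        relevant = [(d[i], d[j]) for (n, *d) in rules if n >= alpha]
        return (all(di <= dj for di, dj in relevant) and
                any(di < dj for di, dj in relevant))

    # At a high certainty cut only the first rule counts, so A wins;
    # at a lower cut the two candidates are Pareto-incomparable.
    print(pareto_prefers(0, 1, alpha=0.8))  # True
    print(pareto_prefers(0, 1, alpha=0.5))  # False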

    Encoding Markov Logic Networks in Possibilistic Logic

    Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, owing to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among other things, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.
    Comment: Extended version of a paper appearing in UAI 201
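
    To make the MAP-inference setting concrete, here is a brute-force Python sketch over a two-atom propositional MLN: each possible world is scored by the summed weights of the formulas it satisfies, and MAP inference returns the highest-scoring world(s) consistent with the evidence. The atoms, formulas, and weights are invented; the possibilistic encoding that the paper constructs from such MAP conclusions is not reproduced here.

    from itertools import product

    # Brute-force MAP inference for a tiny propositional MLN.
    # Everything below (atoms, formulas, weights) is hypothetical.
    atoms = ["smokes", "cancer"]

    formulas = [
        (1.5, lambda w: (not w["smokes"]) or w["cancer"]),  # smokes => cancer
        (0.8, lambda w: not w["smokes"]),                   # weak prior against smoking
    ]

    def map_worlds(evidence):
        """Return the most probable world(s) agreeing with the evidence."""
        best, best_score = [], float("-inf")
        for values in product([False, True], repeat=len(atoms)):
            world = dict(zip(atoms, values))
            if any(world[a] != v for a, v in evidence.items()):
                continue
            score = sum(wt for wt, f in formulas if f(world))
            if score > best_score:
                best, best_score = [world], score
            elif score == best_score:
                best.append(world)
        return best

    # Given the evidence that smokes holds, MAP inference concludes
    # cancer: satisfying the implication adds weight 1.5 to the world.
    print(map_worlds({"smokes": True}))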

    Induction of Interpretable Possibilistic Logic Theories from Relational Data

    The field of Statistical Relational Learning (SRL) is concerned with learning probabilistic models from relational data. Learned SRL models are typically represented using some kind of weighted logical formulas, which makes them considerably more interpretable than models obtained by, e.g., neural networks. In practice, however, these models are often still difficult to interpret correctly, as they can contain many formulas that interact in non-trivial ways and whose weights do not always have an intuitive meaning. To address this, we propose a new SRL method which uses possibilistic logic to encode relational models. Learned models are then essentially stratified classical theories, which explicitly encode what can be derived with a given level of certainty. Compared to Markov Logic Networks (MLNs), our method is faster and produces considerably more interpretable models.
    Comment: Longer version of a paper appearing in IJCAI 201
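
    The following Python sketch shows, on an invented stratified theory, how such models can be read: a query is accepted at the highest certainty level whose cut of the theory is consistent with the evidence and entails it, so a less certain default is "drowned" by more certain conflicting knowledge. This is a toy rendering of possibilistic inference, not the paper's learning method, and the clauses and levels are hypothetical.

    from itertools import product

    # Querying a stratified (possibilistic) theory by level cuts.
    # Clauses and certainty levels are invented for illustration.
    atoms = ["bird", "penguin", "flies"]

    theory = [  # (necessity, clause over a world w)
        (1.0, lambda w: (not w["penguin"]) or w["bird"]),       # penguins are birds
        (0.9, lambda w: (not w["penguin"]) or not w["flies"]),  # penguins do not fly
        (0.6, lambda w: (not w["bird"]) or w["flies"]),         # birds typically fly
    ]

    def models(clauses, evidence):
        """All worlds satisfying every clause and the evidence."""
        result = []
        for vals in product([False, True], repeat=len(atoms)):
            w = dict(zip(atoms, vals))
            if (all(c(w) for c in clauses) and
                    all(w[a] == v for a, v in evidence.items())):
                result.append(w)
        return result

    def accepted_at(query_atom, evidence):
        """Highest level whose cut is consistent with the evidence
        and entails the query atom, or None."""
        for level in sorted({n for n, _ in theory}, reverse=True):
            cut = [c for n, c in theory if n >= level]
            ms = models(cut, evidence)
            if ms and all(w[query_atom] for w in ms):
                return level
        return None

    # For a penguin, no consistent cut entails "flies": the 0.6 default
    # is drowned.  For an ordinary bird, "flies" holds at level 0.6.
    print(accepted_at("flies", {"penguin": True}))                  # None
    print(accepted_at("flies", {"bird": True, "penguin": False}))   # 0.6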