
    Encoding Markov Logic Networks in Possibilistic Logic

    Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a given MLN using maximum a posteriori (MAP) inference. Unfortunately, the size of this theory is exponential in general. We therefore also propose two methods which can derive compact theories that still capture MAP inference, but only for specific types of evidence. These theories can be used, among other things, to make explicit the hidden assumptions underlying an MLN or to explain the predictions it makes.
    Comment: Extended version of a paper appearing in UAI 2015.
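
    The possibilistic construction itself is not given in the abstract, but the MAP inference it targets is easy to make concrete. Below is a minimal, self-contained Python sketch of brute-force MAP inference in a toy propositional MLN; the atoms, formulas, and weights are hypothetical, and the paper's possibilistic theory is one whose conclusions coincide with the MAP conclusions of such a network.

```python
# Brute-force MAP inference in a toy propositional MLN (hypothetical
# atoms, formulas and weights; not the paper's construction).  A MAP
# world is one that maximises the total weight of satisfied formulas.
from itertools import product

atoms = ["smokes_a", "smokes_b", "friends_ab"]

# Each formula is a (weight, test-over-world) pair.
formulas = [
    # friends tend to share the same smoking habit
    (1.5, lambda w: (not w["friends_ab"]) or (w["smokes_a"] == w["smokes_b"])),
    # a priori, a is probably a non-smoker
    (0.7, lambda w: not w["smokes_a"]),
]

def map_worlds(evidence):
    """Enumerate all worlds consistent with the evidence and return the
    maximum-weight ones."""
    best, best_score = [], float("-inf")
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if any(world[a] != v for a, v in evidence.items()):
            continue
        score = sum(wt for wt, holds in formulas if holds(world))
        if score > best_score:
            best, best_score = [world], score
        elif score == best_score:
            best.append(world)
    return best

# Given that a smokes and a and b are friends, MAP concludes b smokes too.
print(map_worlds({"friends_ab": True, "smokes_a": True}))
```

    The enumeration also makes the blow-up mentioned above tangible: the loop visits 2^n worlds for n atoms, so any exact symbolic summary of MAP behaviour has an exponential worst case.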

    Towards Log-Linear Logics with Concrete Domains

    We present MEL++ (M denotes Markov logic networks), an extension of the log-linear description logic EL++-LL with concrete domains, nominals, and instances. We use Markov logic networks (MLNs) to find the most probable, classified, and coherent EL++ ontology from an MEL++ knowledge base. In particular, we develop a novel way to deal with concrete domains (also known as datatypes) by extending the MLN cutting plane inference (CPI) algorithm.
    Comment: StarAI 2015.
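
    As a rough, generic sketch of the CPI loop this abstract extends (under assumptions, not the MEL++ implementation), the function below alternates between solving a relaxed MAP problem and adding the ground constraints that the current solution violates. Here `solve` and `find_violated` are hypothetical stand-ins for a MAP solver and a separation oracle, e.g. one that checks concrete-domain predicates such as numeric comparisons.

```python
# Generic cutting-plane-inference (CPI) loop: solve a relaxation, look
# for ground constraints the current solution violates, add them as
# cuts, and repeat until no violated constraint remains.
def cutting_plane_inference(solve, find_violated, max_rounds=100):
    active_cuts = set()
    solution = None
    for _ in range(max_rounds):
        solution = solve(active_cuts)       # MAP over the current grounding
        violated = find_violated(solution)  # separation step (returns a set)
        if not violated:                    # no new cuts: solution is optimal
            return solution
        active_cuts |= violated             # ground only what is needed
    return solution
```

    The appeal of CPI is that the full grounding is never materialised: only constraints that the current solution actually violates are ever added.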

    Quantified Markov logic networks

    Markov Logic Networks (MLNs) are well-suited for expressing statistics such as “with high probability a smoker knows another smoker”, but not for expressing statements such as “there is a smoker who knows most other smokers”, which is necessary for modeling, e.g., influencers in social networks. To overcome this shortcoming, we study quantified MLNs, which generalize MLNs by introducing statistical universal quantifiers and thereby also allow the latter type of statement to be expressed in a principled way. Our main technical contribution is to show that the standard reasoning tasks in quantified MLNs, maximum a posteriori and marginal inference, can be reduced to their respective MLN counterparts in polynomial time.
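
    To make the contrast concrete, here is a toy Python illustration (with hypothetical data, not the paper's formal semantics) of the two kinds of statements: the first is a plain statistic over smokers, while the second nests a counting ("most") condition under an existential quantifier, which is what quantified MLNs add.

```python
# Toy social network (hypothetical data).
smokers = {"a", "b", "c", "d"}
knows = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")}

def fraction_smokers_knowing_a_smoker():
    """'A smoker knows another smoker': the fraction of smokers who
    know at least one other smoker (a plain statistic)."""
    return sum(
        any((s, t) in knows for t in smokers - {s}) for s in smokers
    ) / len(smokers)

def exists_influencer(threshold=0.5):
    """'There is a smoker who knows most other smokers': a counting
    condition nested under an existential quantifier."""
    return any(
        sum((s, t) in knows for t in smokers - {s}) / (len(smokers) - 1)
        > threshold
        for s in smokers
    )

print(fraction_smokers_knowing_a_smoker())  # 0.5
print(exists_influencer())                  # True: "a" knows all the others
```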

    Backpropagating through Markov Logic Networks

    We integrate Markov logic networks with deep learning architectures operating on high-dimensional and noisy feature inputs. Instead of relaxing the discrete components into smooth functions, we propose an approach that allows us to backpropagate through standard statistical relational learning components using perturbation-based differentiation. The resulting hybrid models are shown to outperform models that rely solely on deep-learning-based function fitting. We find that noise perturbations are required for the proposed hybrid models to learn robustly from the training data.
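
    The abstract does not spell out the estimator, so the sketch below shows one standard perturbation-based scheme in the same spirit (a "perturbed optimizer": average the outputs of a discrete solver under injected noise to obtain a smooth surrogate); the argmax here is a hypothetical stand-in for MAP inference over an MLN, and the exact method in the paper may differ.

```python
# Perturbation-based differentiation through a discrete solver: the
# smoothed output varies continuously with the input scores, so
# gradients can flow back into a neural feature extractor.
import numpy as np

rng = np.random.default_rng(0)

def map_solver(scores):
    """Discrete, non-differentiable component: a one-hot argmax standing
    in for MAP inference in an MLN."""
    out = np.zeros_like(scores)
    out[np.argmax(scores)] = 1.0
    return out

def perturbed_map(scores, sigma=1.0, n_samples=1000):
    """Smooth surrogate: the average MAP solution under Gaussian noise.
    Its gradient can be estimated from the same noise samples."""
    samples = [
        map_solver(scores + sigma * rng.standard_normal(scores.shape))
        for _ in range(n_samples)
    ]
    return np.mean(samples, axis=0)

# The surrogate is a distribution over solutions rather than a hard pick.
print(perturbed_map(np.array([2.0, 1.5, -1.0])))
```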