    Lifted Probabilistic Inference: An MCMC Perspective

    The general consensus seems to be that lifted inference is concerned with exploiting model symmetries and grouping indistinguishable objects at inference time. Since first-order probabilistic formalisms are essentially template languages providing a more compact representation of a corresponding ground model, lifted inference tends to work especially well in these models. We show that the notion of indistinguishability manifests itself on several different levels: the level of constants, the level of ground atoms (variables), the level of formulas (features), and the level of assignments (possible worlds). We discuss existing work in the MCMC literature on exploiting symmetries on the level of variable assignments and relate it to novel results in lifted MCMC.
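    To make the assignment-level symmetries concrete, the following is a minimal Python sketch in the spirit of orbital MCMC (Markov chains that move over orbits of a permutation group): after an ordinary Metropolis accept/reject step, the chain jumps uniformly within the orbit of the current state under a supplied set of permutations that leave the distribution invariant. The function name, the encoding of symmetries as index permutations, and the interface are illustrative assumptions, not code from the paper.

```python
import math
import random

def orbital_mcmc_step(state, log_prob, propose, symmetries, rng=random):
    """One symmetry-exploiting Metropolis step (illustrative sketch).

    state      -- tuple of variable assignments (one possible world)
    log_prob   -- unnormalized log-probability of a state
    propose    -- symmetric proposal kernel: state -> candidate state
    symmetries -- permutations, each a tuple of indices, whose action
                  leaves log_prob invariant (hypothetical encoding)
    """
    # Standard Metropolis accept/reject with a symmetric proposal.
    candidate = propose(state)
    if rng.random() < math.exp(min(0.0, log_prob(candidate) - log_prob(state))):
        state = candidate
    # Orbit move: applying a symmetry permutation leaves the target
    # distribution unchanged, so the chain mixes across groups of
    # indistinguishable assignments without extra accept/reject work.
    perm = rng.choice(symmetries)
    return tuple(state[i] for i in perm)
```

    Because each permutation fixes the target distribution, the extra orbit move preserves the stationary distribution while letting the chain move freely among indistinguishable possible worlds.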

    Graphical Models and Symmetries: Loopy Belief Propagation Approaches

    Whenever a person or an automated system has to reason in uncertain domains, probability theory is necessary. Probabilistic graphical models allow us to build statistical models that capture complex dependencies between random variables. Inference in these models, however, can easily become intractable. Typical ways to address this scaling issue are inference by approximate message passing, stochastic gradients, and MapReduce, among others. Exploiting the symmetries of graphical models, however, has not yet been considered for scaling statistical machine learning applications. One class of graphical models that is inherently symmetric is statistical relational models. These have recently gained traction within the machine learning and AI communities; they combine probability theory with first-order logic, thereby allowing for an efficient representation of structured relational domains. These formalisms compactly represent complex real-world domains and thus let us describe large problem instances effectively. Inference within and training of graphical models, however, have not kept pace with this increased representational power.

    This thesis tackles two major aspects of graphical models and shows that both inference and training can indeed benefit from exploiting symmetries. It first deals with efficient inference that exploits symmetries in graphical models for various query types. We introduce lifted loopy belief propagation (lifted LBP), the first lifted parallel inference approach for relational as well as propositional graphical models. Lifted LBP can effectively speed up marginal inference but cannot straightforwardly be applied to other types of queries. We therefore also demonstrate efficient lifted algorithms for MAP inference and higher-order marginals, as well as the efficient handling of multiple inference tasks.

    We then turn to the training of graphical models and introduce the first lifted online training for relational models. Our training procedure and the MapReduce lifting for loopy belief propagation combine lifting with traditional statistical approaches to scaling, thereby bridging the gap between statistical relational learning and traditional statistical machine learning.
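    As an illustration of the compression stage behind lifted BP approaches, the sketch below implements a simple colour-passing scheme: variables and factors repeatedly exchange colour signatures, and nodes that stabilise to the same colour send identical BP messages, so one representative per colour class suffices. The edge encoding and function shape are assumptions made for this sketch, not the thesis's implementation.

```python
from collections import defaultdict

def color_passing(var_colors, fac_colors, edges, rounds=10):
    """Group variables/factors that would send identical BP messages.

    var_colors -- {variable: initial integer color}, e.g. by evidence
    fac_colors -- {factor: initial integer color}, e.g. by potential table
    edges      -- iterable of (variable, factor, position) triples;
                  `position` distinguishes a factor's argument slots
    Assumes every variable and factor occurs in at least one edge.
    """
    for _ in range(rounds):
        # Factors collect the colors of their neighboring variables,
        # keyed by argument position, and are re-colored by signature.
        fac_sig = defaultdict(list)
        for v, f, pos in edges:
            fac_sig[f].append((pos, var_colors[v]))
        fac_colors = {f: hash((fac_colors[f], tuple(sorted(sig))))
                      for f, sig in fac_sig.items()}
        # Variables collect the colors of their neighboring factors.
        var_sig = defaultdict(list)
        for v, f, pos in edges:
            var_sig[v].append((pos, fac_colors[f]))
        var_colors = {v: hash((var_colors[v], tuple(sorted(sig))))
                      for v, sig in var_sig.items()}
    return var_colors, fac_colors
```

    Standard loopy BP run on one representative node per colour class, with counts recording how many identical ground messages each lifted message stands for, then reproduces the results of BP on the full ground model, which is where the speed-up comes from.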