13,895 research outputs found

    From Influence Diagrams to Junction Trees

    We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for computation of expected utilities and optimal decision policies. Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994).
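    To make the computed quantities concrete, here is a minimal Python sketch (ours, not the paper's algorithm) that brute-forces the expected utilities and optimal policy for a toy one-chance-node, one-decision diagram; all names and numbers are purely illustrative:

```python
# A minimal sketch, not the paper's algorithm: for a toy influence diagram
# with one chance node W (weather) and one decision D (umbrella), brute-force
# the expected utilities that junction-tree message passing would compute by
# local elimination. All names and numbers are illustrative.
P_W = {"rain": 0.3, "sun": 0.7}                    # prior on the chance node
U = {("rain", "take"): 70, ("rain", "leave"): 0,   # utility table U(W, D)
     ("sun", "take"): 80, ("sun", "leave"): 100}

def expected_utility(d):
    # D is chosen before W is observed: sum W out first, then maximize --
    # the "sum before max" elimination order a strong junction tree enforces.
    return sum(P_W[w] * U[(w, d)] for w in P_W)

best = max(["take", "leave"], key=expected_utility)
print(best, expected_utility(best))   # -> take 77.0
```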

    Efficient Value of Information Computation

    One of the most useful sensitivity analysis techniques of decision analysis is the computation of value of information (or clairvoyance), the difference in value obtained by changing the decisions by which some of the uncertainties are observed. In this paper, some simple but powerful extensions to previous algorithms are introduced which allow an efficient value of information calculation on the rooted cluster tree (or strong junction tree) used to solve the original decision problem. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
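    The quantity itself is easy to state in code. A minimal, illustrative sketch (not the paper's cluster-tree algorithm), reusing the toy umbrella problem above:

```python
# A minimal sketch of the quantity itself, not the paper's cluster-tree
# algorithm: the value of (perfect) information on W is the gain from moving
# the decision after the observation. Same illustrative toy problem as above.
P_W = {"rain": 0.3, "sun": 0.7}
U = {("rain", "take"): 70, ("rain", "leave"): 0,
     ("sun", "take"): 80, ("sun", "leave"): 100}
D = ["take", "leave"]

# Decide blind: one action maximizing the prior expectation of U.
eu_blind = max(sum(P_W[w] * U[(w, d)] for w in P_W) for d in D)
# Decide after observing W: optimize per outcome, then average.
eu_informed = sum(P_W[w] * max(U[(w, d)] for d in D) for w in P_W)

print(eu_informed - eu_blind)   # -> 14.0, the value of observing W
```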

    Lazy Evaluation of Symmetric Bayesian Decision Problems

    Solving symmetric Bayesian decision problems is a computationally intensive task to perform regardless of the algorithm used. In this paper we propose a method for improving the efficiency of algorithms for solving Bayesian decision problems. The method is based on the principle of lazy evaluation - a principle recently shown to improve the efficiency of inference in Bayesian networks. The basic idea is to maintain decompositions of potentials and to postpone computations for as long as possible. The efficiency improvements obtained with the lazy evaluation-based method are emphasized through examples. Finally, the lazy evaluation-based method is compared with the HUGIN and valuation-based systems architectures for solving symmetric Bayesian decision problems. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
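    The decomposition idea can be sketched in a few lines. The factor representation below is our own, illustrative only:

```python
# A minimal sketch of the lazy-evaluation principle, not the paper's
# architecture: a potential is kept as a *list* of factors, and eliminating a
# variable multiplies only the factors that actually mention it. The factor
# representation (a dict with an ordered "vars" list and a "table") is ours.
from itertools import product

def eliminate(factors, var, domains):
    """Sum `var` out of the combination of only the factors mentioning it."""
    touching = [f for f in factors if var in f["vars"]]
    rest = [f for f in factors if var not in f["vars"]]
    new_vars = sorted({v for f in touching for v in f["vars"]} - {var})
    table = {}
    for assign in product(*(domains[v] for v in new_vars)):
        env = dict(zip(new_vars, assign))
        total = 0.0
        for val in domains[var]:
            env[var] = val
            prod = 1.0
            for f in touching:
                prod *= f["table"][tuple(env[v] for v in f["vars"])]
            total += prod
        table[assign] = total
    # The result joins the decomposition; untouched factors stay unmultiplied.
    return rest + [{"vars": new_vars, "table": table}]

# Example: eliminating A from the decomposition {P(A), P(B|A)} leaves P(B).
domains = {"A": ["a0", "a1"], "B": ["b0", "b1"]}
pA = {"vars": ["A"], "table": {("a0",): 0.4, ("a1",): 0.6}}
pBA = {"vars": ["A", "B"], "table": {("a0", "b0"): 0.9, ("a0", "b1"): 0.1,
                                     ("a1", "b0"): 0.2, ("a1", "b1"): 0.8}}
print(eliminate([pA, pBA], "A", domains))   # one factor over B: 0.48 / 0.52
```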

    Solving Influence Diagrams using HUGIN, Shafer-Shenoy and Lazy Propagation

    In this paper we compare three different architectures for the evaluation of influence diagrams: HUGIN, Shafer-Shenoy, and Lazy Evaluation. The computational complexity of the architectures is compared on the LImited Memory Influence Diagram (LIMID): a diagram where only the requisite information for the computation of the optimal policies is depicted. Because the requisite information is explicitly represented in the LIMID, the evaluation can take advantage of it, and significant computational savings can be obtained. We show how the savings obtained are considerably increased when the computations performed on the LIMID follow the Lazy Evaluation scheme. Comment: Appears in Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI2001).

    From influence diagrams to multi-operator cluster DAGs

    There exist several architectures to solve influence diagrams using local computations, such as the Shenoy-Shafer, the HUGIN, or the Lazy Propagation architectures. They all extend usual variable elimination algorithms thanks to the use of so-called 'potentials'. In this paper, we introduce a new architecture, called the Multi-operator Cluster DAG architecture, which can produce decompositions with an improved constrained induced-width, and therefore induce potentially exponential gains. Its principle is to benefit from the composite nature of influence diagrams, instead of using uniform potentials, in order to better analyze the problem structure. Comment: Appears in Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI2006).
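    For intuition, a hypothetical helper (ours, not the paper's decomposition) that computes the induced width of a given elimination order; in influence diagrams the admissible orders are constrained by the decision sequence, which is the restriction at stake here:

```python
# A minimal sketch of the quantity at stake, not the paper's decomposition:
# the induced width of an elimination order on an undirected graph. In
# influence diagrams the admissible orders are constrained by the decision
# sequence; this helper and its example graph are illustrative only.
def induced_width(edges, order):
    adj = {v: set() for v in order}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    width = 0
    for v in order:
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for a in nbrs:                  # connect v's remaining neighbours ...
            adj[a].update(nbrs - {a})
            adj[a].discard(v)           # ... and remove v from the graph
    return width

edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
print(induced_width(edges, ["A", "B", "C", "D"]))   # -> 2 on the 4-cycle
```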

    Evaluating Influence Diagrams using LIMIDs

    We present a new approach to the solution of decision problems formulated as influence diagrams. The approach converts the influence diagram into a simpler structure, the LImited Memory Influence Diagram (LIMID), where only the requisite information for the computation of optimal policies is depicted. Because the requisite information is explicitly represented in the diagram, the evaluation procedure can take advantage of it. In this paper we show how to convert an influence diagram to a LIMID and describe the procedure for finding an optimal strategy. Our approach can yield significant savings of memory and computational time when compared to traditional methods. Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000).
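    A minimal sketch of the effect of requisite information (our toy example, not the paper's procedure): when the only requisite parent of a decision is an observed forecast, the optimal policy is a small table indexed by that forecast alone:

```python
# A minimal sketch of the effect of requisite information, not the paper's
# evaluation procedure: when the requisite parent of decision D is just the
# observed forecast O, the policy is computed per value of O instead of over
# all histories. All names and numbers are illustrative.
P_W = {"rain": 0.3, "sun": 0.7}                              # weather prior
P_O = {("rain", "wet"): 0.8, ("rain", "dry"): 0.2,           # P(O | W)
       ("sun", "wet"): 0.1, ("sun", "dry"): 0.9}
U = {("rain", "take"): 70, ("rain", "leave"): 0,
     ("sun", "take"): 80, ("sun", "leave"): 100}

policy = {}
for o in ["wet", "dry"]:
    # Unnormalized posterior weight of each weather state given O = o;
    # normalization does not change the argmax.
    w = {s: P_W[s] * P_O[(s, o)] for s in P_W}
    policy[o] = max(["take", "leave"],
                    key=lambda d: sum(w[s] * U[(s, d)] for s in w))
print(policy)   # -> {'wet': 'take', 'dry': 'leave'}
```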

    Evaluating influence diagrams with decision circuits

    Although a number of related algorithms have been developed to evaluate influence diagrams, exploiting the conditional independence in the diagram, the exact solution has remained intractable for many important problems. In this paper we introduce decision circuits as a means to exploit the local structure usually found in decision problems and to improve the performance of influence diagram analysis. This work builds on the probabilistic inference algorithms using arithmetic circuits to represent Bayesian belief networks [Darwiche, 2003]. Once compiled, these arithmetic circuits efficiently evaluate probabilistic queries on the belief network, and methods have been developed to exploit both the global and local structure of the network. We show that decision circuits can be constructed in a similar fashion and promise similar benefits. Comment: Appears in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI2007).
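    A minimal sketch of how such a circuit evaluates bottom-up (our toy encoding, not the authors' compilation procedure):

```python
# A minimal sketch of bottom-up circuit evaluation, not the paper's
# compilation procedure: sum nodes marginalize chance variables, max nodes
# optimize decisions, product nodes combine parameters. Toy encoding of
# max_D sum_W P(W) * U(W, D) for the illustrative umbrella problem above.
import math

def evaluate(node):
    op, kids = node[0], node[1:]
    if op == "leaf":
        return kids[0]
    vals = [evaluate(k) for k in kids]
    if op == "+":
        return sum(vals)
    if op == "*":
        return math.prod(vals)
    return max(vals)                 # op == "max"

circuit = ("max",
           ("+", ("*", ("leaf", 0.3), ("leaf", 70)),    # D = take
                 ("*", ("leaf", 0.7), ("leaf", 80))),
           ("+", ("*", ("leaf", 0.3), ("leaf", 0)),     # D = leave
                 ("*", ("leaf", 0.7), ("leaf", 100))))
print(evaluate(circuit))   # -> 77.0, the optimal expected utility
```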

    Propagation of 2-Monotone Lower Probabilities on an Undirected Graph

    Lower and upper probabilities, also known as Choquet capacities, are widely used as a convenient representation for sets of probability distributions. This paper presents a graphical decomposition and exact propagation algorithm for computing marginal posteriors of 2-monotone lower probabilities (equivalently, 2-alternating upper probabilities). Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996).
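    For reference, the Choquet integral that gives the lower expectation with respect to such a capacity, in its standard finite-space form; the paper's graphical decomposition and propagation are not shown:

```python
# A minimal sketch of the Choquet integral of a non-negative f with respect
# to a capacity nu on a finite space (the standard definition; the paper's
# contribution is the graphical propagation, which is not reproduced here).
def choquet(f, nu, space):
    xs = sorted(space, key=lambda x: f[x])       # outcomes, ascending by f
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        total += (f[x] - prev) * nu[frozenset(xs[i:])]   # nu({x_i, ..., x_n})
        prev = f[x]
    return total

# A 2-monotone capacity on {a, b}: nu(A) + nu(B) <= nu(A | B) + nu(A & B).
space = ["a", "b"]
nu = {frozenset(["a", "b"]): 1.0, frozenset(["a"]): 0.3,
      frozenset(["b"]): 0.5, frozenset(): 0.0}
f = {"a": 2.0, "b": 5.0}
print(choquet(f, nu, space))   # -> 3.5, the lower expectation of f
```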

    An Empirical Evaluation of Possible Variations of Lazy Propagation

    As real-world Bayesian networks continue to grow larger and more complex, it is important to investigate the possibilities for improving the performance of existing algorithms of probabilistic inference. Motivated by examples, we investigate the dependency of the performance of Lazy Propagation on the message computation algorithm. We show how Symbolic Probabilistic Inference (SPI) and Arc-Reversal (AR) can be used for the computation of clique-to-clique messages in addition to the traditional use of Variable Elimination (VE). In addition, the paper presents the results of an empirical evaluation of the performance of Lazy Propagation using VE, SPI, and AR as the message computation algorithm. The results of the empirical evaluation show that for most networks, the performance of inference did not depend on the choice of message computation algorithm, but for some randomly generated networks the choice had an impact on both space and time performance. In the cases where the choice had an impact, AR produced the best results. Comment: Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004).
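    The elementary arc-reversal step is a one-line application of Bayes' rule; a minimal illustrative sketch (not the paper's message-computation schemes):

```python
# A minimal sketch of the elementary arc-reversal step, not the paper's
# message-computation schemes: reversing A -> B by Bayes' rule. VE, SPI and
# AR reach the same messages through different orderings of such operations.
P_A = {"a0": 0.4, "a1": 0.6}
P_B_given_A = {("a0", "b0"): 0.9, ("a0", "b1"): 0.1,
               ("a1", "b0"): 0.2, ("a1", "b1"): 0.8}

# Marginalize to get P(B), then divide to get the reversed arc P(A | B).
P_B = {b: sum(P_A[a] * P_B_given_A[(a, b)] for a in P_A)
       for b in ["b0", "b1"]}
P_A_given_B = {(b, a): P_A[a] * P_B_given_A[(a, b)] / P_B[b]
               for a in P_A for b in P_B}

print(P_B)           # -> {'b0': 0.48, 'b1': 0.52}
print(P_A_given_B)   # the arc now points from B to A
```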

    Approximate Learning in Complex Dynamic Bayesian Networks

    In this paper we extend the work of Smith and Papamichail (1999) and present fast approximate Bayesian algorithms for learning in complex scenarios where, at any time frame, the relationships between explanatory state space variables can be described by a Bayesian network that evolves dynamically over time and the observations taken are not necessarily Gaussian. The approach uses recent developments in approximate Bayesian forecasting methods in combination with more familiar Gaussian propagation algorithms on junction trees. The procedure for learning state parameters from data is given explicitly for common sampling distributions and the methodology is illustrated through a real application. The efficiency of the dynamic approximation is explored by using the Hellinger divergence measure, and theoretical bounds for the efficacy of such a procedure are discussed. Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
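    The Hellinger measure mentioned is standard; a minimal sketch for discrete distributions (the paper's bounds are not reproduced):

```python
# A minimal sketch of the Hellinger distance between two discrete
# distributions, the divergence the paper uses to assess its approximation
# (standard definition; the theoretical bound analysis is not shown here).
from math import sqrt

def hellinger(p, q):
    """H(p, q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2."""
    support = set(p) | set(q)
    s = sum((sqrt(p.get(x, 0.0)) - sqrt(q.get(x, 0.0))) ** 2 for x in support)
    return sqrt(s / 2.0)

print(hellinger({"x": 0.5, "y": 0.5}, {"x": 0.9, "y": 0.1}))   # -> ~0.325
```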