17,382 research outputs found
Probabilistic Inference in Influence Diagrams
This paper is about reducing influence diagram (ID) evaluation into Bayesian
network (BN) inference problems. Such reduction is interesting because it
enables one to readily use one's favorite BN inference algorithm to efficiently
evaluate IDs. Two such reduction methods have been proposed previously (Cooper
1988, Shachter and Peot 1992). This paper proposes a new method. The BN
inference problems induced by the new method are much easier to solve than
those induced by the two previous methods.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998)
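The spirit of such reductions can be sketched with a hypothetical toy decision problem (all names and numbers below are illustrative, not the paper's construction): once a decision alternative is fixed, evaluating it is a plain probabilistic expectation query of the kind a BN inference algorithm answers.

```python
# Toy influence diagram: chance node W (weather), decision d, utility U(W, d).
# For a fixed decision d, evaluating the diagram reduces to the
# inference-style query E[U | d] = sum_w P(w) * U(w, d).
P_W = {"sun": 0.7, "rain": 0.3}
U = {("sun", "umbrella"): 20, ("sun", "none"): 100,
     ("rain", "umbrella"): 70, ("rain", "none"): 0}

def expected_utility(d):
    # A pure probabilistic query over the chance variables.
    return sum(P_W[w] * U[(w, d)] for w in P_W)

best = max(["umbrella", "none"], key=expected_utility)  # "none" here
```

Here brute-force enumeration stands in for whatever BN inference algorithm one prefers; the reduction methods discussed in the paper make that substitution possible on real diagrams.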
Integrating Logical and Probabilistic Reasoning for Decision Making
We describe a representation and a set of inference methods that combine
logic programming techniques with probabilistic network representations for
uncertainty (influence diagrams). The techniques emphasize the dynamic
construction and solution of probabilistic and decision-theoretic models for
complex and uncertain domains. Given a query, a logical proof is produced if
possible; if not, an influence diagram based on the query and the knowledge of
the decision domain is produced and subsequently solved. A uniform declarative,
first-order, knowledge representation is combined with a set of integrated
inference procedures for logical, probabilistic, and decision-theoretic
reasoning.
Comment: Appears in Proceedings of the Third Conference on Uncertainty in Artificial Intelligence (UAI 1987)
Using Potential Influence Diagrams for Probabilistic Inference and Decision Making
The potential influence diagram is a generalization of the standard
"conditional" influence diagram, a directed network representation for
probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows
efficient inference calculations corresponding exactly to those on undirected
graphs. In this paper, we explore the relationship between potential and
conditional influence diagrams and provide insight into the properties of the
potential influence diagram. In particular, we show how to convert a potential
influence diagram into a conditional influence diagram, and how to view the
potential influence diagram operations in terms of the conditional influence
diagram.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993)
Interval Influence Diagrams
We describe a mechanism for performing probabilistic reasoning in influence
diagrams using interval rather than point valued probabilities. We derive the
procedures for node removal (corresponding to conditional expectation) and arc
reversal (corresponding to Bayesian conditioning) in influence diagrams where
lower bounds on probabilities are stored at each node. The resulting bounds for
the transformed diagram are shown to be optimal within the class of constraints
on probability distributions that can be expressed exclusively as lower bounds
on the component probabilities of the diagram. Sequences of these operations
can be performed to answer probabilistic queries with indeterminacies in the
input and for performing sensitivity analysis on an influence diagram. The
storage requirements and computational complexity of this approach are
comparable to those for point-valued probabilistic inference mechanisms, making
the approach attractive both for sensitivity analysis and for settings where
precise probability information is not available. Limited empirical data on an
implementation of the methodology are provided.
Comment: Appears in Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence (UAI 1989)
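The two operations named above have simple point-valued analogues; the paper's contribution is the interval generalization with stored lower bounds. A point-valued sketch (hypothetical numbers) of node removal and arc reversal:

```python
# Point-valued versions of the two influence-diagram transformations.
P_A = {"a0": 0.4, "a1": 0.6}                     # P(A)
P_B_given_A = {("b0", "a0"): 0.9, ("b1", "a0"): 0.1,
               ("b0", "a1"): 0.2, ("b1", "a1"): 0.8}   # P(B | A)

# Node removal (conditional expectation): marginalize A to obtain P(B).
P_B = {b: sum(P_A[a] * P_B_given_A[(b, a)] for a in P_A)
       for b in ("b0", "b1")}

# Arc reversal (Bayesian conditioning): P(A | B) = P(B | A) P(A) / P(B).
P_A_given_B = {(a, b): P_A[a] * P_B_given_A[(b, a)] / P_B[b]
               for a in P_A for b in P_B}
```

The interval procedures in the paper perform the same transformations while propagating lower bounds on these quantities instead of point values.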
Evaluating influence diagrams with decision circuits
Although a number of related algorithms have been developed to evaluate
influence diagrams, exploiting the conditional independence in the diagram, the
exact solution has remained intractable for many important problems. In this
paper we introduce decision circuits as a means to exploit the local structure
usually found in decision problems and to improve the performance of influence
diagram analysis. This work builds on the probabilistic inference algorithms
using arithmetic circuits to represent Bayesian belief networks [Darwiche,
2003]. Once compiled, these arithmetic circuits efficiently evaluate
probabilistic queries on the belief network, and methods have been developed to
exploit both the global and local structure of the network. We show that
decision circuits can be constructed in a similar fashion and promise similar
benefits.
Comment: Appears in Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI 2007)
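The arithmetic-circuit idea from [Darwiche, 2003] that this work builds on can be sketched for a two-node network A -> B with hypothetical parameters: the network polynomial in evidence indicators lambda and parameters theta, evaluated under an evidence instantiation, yields the probability of that evidence. A compiled circuit evaluates the same polynomial while sharing subexpressions.

```python
# Network polynomial of a tiny BN A -> B:
#   f = sum_{a,b} lambda_a * lambda_b * theta_a * theta_{b|a}
theta_A = {"a0": 0.3, "a1": 0.7}
theta_B = {("b0", "a0"): 0.6, ("b1", "a0"): 0.4,
           ("b0", "a1"): 0.1, ("b1", "a1"): 0.9}

def poly(lam_A, lam_B):
    # Evaluate f under a setting of the indicator variables.
    return sum(lam_A[a] * lam_B[b] * theta_A[a] * theta_B[(b, a)]
               for a in theta_A for b in ("b0", "b1"))

# Evidence B = b1: zero out the incompatible indicator; A is unobserved.
p_b1 = poly({"a0": 1, "a1": 1}, {"b0": 0, "b1": 1})   # P(B = b1)
p_total = poly({"a0": 1, "a1": 1}, {"b0": 1, "b1": 1})  # sums to 1
```

Decision circuits, as proposed in the paper, extend this construction to incorporate decisions and utilities; the sketch above shows only the probabilistic core.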
Directed Reduction Algorithms and Decomposable Graphs
In recent years, there have been intense research efforts to develop
efficient methods for probabilistic inference in probabilistic influence
diagrams or belief networks. Many people have concluded that the best methods
are those based on undirected graph structures, and that those methods are
inherently superior to those based on node reduction operations on the
influence diagram. We show here that these two approaches are essentially the
same, since they are explicitly or implicitly building and operating on the same
underlying graphical structures. In this paper we examine those graphical
structures and show how this insight can lead to an improved class of directed
reduction methods.
Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI 1990)
A Method for Using Belief Networks as Influence Diagrams
This paper demonstrates a method for using belief-network algorithms to solve
influence diagram problems. In particular, both exact and approximation
belief-network algorithms may be applied to solve influence-diagram problems.
More generally, knowing the relationship between belief-network and
influence-diagram problems may be useful in the design and development of more
efficient influence-diagram algorithms.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI 1988)
Lazy Evaluation of Symmetric Bayesian Decision Problems
Solving symmetric Bayesian decision problems is a computationally intensive
task to perform regardless of the algorithm used. In this paper we propose a
method for improving the efficiency of algorithms for solving Bayesian decision
problems. The method is based on the principle of lazy evaluation - a principle
recently shown to improve the efficiency of inference in Bayesian networks. The
basic idea is to maintain decompositions of potentials and to postpone
computations for as long as possible. The efficiency improvements obtained with
the lazy-evaluation-based method are illustrated through examples. Finally, the
lazy-evaluation-based method is compared with the HUGIN and valuation-based
systems architectures for solving symmetric Bayesian decision problems.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999)
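A minimal sketch of the lazy-evaluation principle (hypothetical potentials): a potential is kept as a list of factors rather than as an explicit product, and the product is formed only when eliminating a variable actually requires it.

```python
# A potential stored as its decomposition: the product phi1(A) * phi2(A, B)
# is postponed rather than computed eagerly.
phi1 = {("a0",): 0.4, ("a1",): 0.6}               # phi1(A)
phi2 = {("a0", "b0"): 0.9, ("a0", "b1"): 0.1,
        ("a1", "b0"): 0.2, ("a1", "b1"): 0.8}     # phi2(A, B)
decomposition = [phi1, phi2]   # no multiplication performed yet

def marginalize_A(factors):
    # Only now, when A must be eliminated, is the product actually formed.
    out = {}
    for a in ("a0", "a1"):
        for b in ("b0", "b1"):
            out[b] = out.get(b, 0.0) + factors[0][(a,)] * factors[1][(a, b)]
    return out

P_B = marginalize_A(decomposition)
```

The payoff, as in lazy propagation for Bayesian networks, is that products irrelevant to a given query (or rendered unnecessary by evidence) are never computed at all.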
Value of Evidence on Influence Diagrams
In this paper, we introduce evidence propagation operations on influence
diagrams and a concept of value of evidence, which measures the value of
experimentation. Evidence propagation operations are critical for the
computation of the value of evidence, general update and inference operations
in normative expert systems which are based on the influence diagram
(generalized Bayesian network) paradigm. The value of evidence allows us to
compute directly an outcome sensitivity, a value of perfect information and a
value of control which are used in decision analysis (the science of decision
making under uncertainty). More specifically, the outcome sensitivity is the
maximum difference among the values of evidence, the value of perfect
information is the expected value of the values of evidence, and the value of
control is the optimal value of the values of evidence. We also discuss
implementation and relative computational-efficiency issues related to the
value of evidence and the value of perfect information.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI 1994)
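The three derived quantities are direct functions of the values of evidence once those are computed; a sketch with hypothetical values v(e) and outcome probabilities P(e), following the definitions in the abstract:

```python
# Hypothetical values of evidence v(e) and outcome probabilities P(e).
P_E = {"e0": 0.5, "e1": 0.3, "e2": 0.2}
v = {"e0": 10.0, "e1": 4.0, "e2": 7.0}

# Outcome sensitivity: maximum difference among the values of evidence.
outcome_sensitivity = max(v.values()) - min(v.values())
# Value of perfect information: expected value of the values of evidence.
value_of_perfect_info = sum(P_E[e] * v[e] for e in P_E)
# Value of control: optimal (best achievable) value of the values of evidence.
value_of_control = max(v.values())
```

The hard part, which the paper's evidence-propagation operations address, is computing each v(e) on the influence diagram in the first place.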
Probabilistic Selection in AgentSpeak(L)
Agent programming is mostly a symbolic discipline and, as such, draws few
benefits from probabilistic areas such as machine learning and graphical models.
However, the greatest objective of agent research is the achievement of
autonomy in dynamic and complex environments --- a goal that implies
embracing uncertainty and therefore the entailed representations, algorithms,
and techniques. This paper proposes an innovative, conflict-free, two-layer
approach to agent programming that uses already established methods and tools
from both symbolic and probabilistic artificial intelligence. Moreover, this
framework is illustrated by means of a widely used agent programming example,
GoldMiners.
Comment: 8 pages, 3 figures