DAVID: Influence Diagram Processing System for the Macintosh
Influence diagrams are a directed graph representation for uncertainties as
probabilities. The graph distinguishes between those variables which are under
the control of a decision maker (decisions, shown as rectangles) and those
which are not (chances, shown as ovals), as well as explicitly denoting a goal
for solution (value, shown as a rounded rectangle).
Comment: Appears in Proceedings of the Second Conference on Uncertainty in Artificial Intelligence (UAI 1986).
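The three node types described above amount to a small typed DAG. A minimal sketch of such a structure, assuming a plain adjacency-list representation (the class and field names here are illustrative, not DAVID's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str            # "decision" (rectangle), "chance" (oval), "value" (rounded rectangle)
    parents: list = field(default_factory=list)

class InfluenceDiagram:
    def __init__(self):
        self.nodes = {}

    def add(self, name, kind, parents=()):
        self.nodes[name] = Node(name, kind, list(parents))

# Illustrative example: decide whether to take an umbrella given a forecast.
d = InfluenceDiagram()
d.add("Weather", "chance")
d.add("Forecast", "chance", parents=["Weather"])
d.add("TakeUmbrella", "decision", parents=["Forecast"])
d.add("Comfort", "value", parents=["Weather", "TakeUmbrella"])
```

Arcs into a decision node record what is known when the decision is made; arcs into the value node record what the goal depends on.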
Integrating Logical and Probabilistic Reasoning for Decision Making
We describe a representation and a set of inference methods that combine
logic programming techniques with probabilistic network representations for
uncertainty (influence diagrams). The techniques emphasize the dynamic
construction and solution of probabilistic and decision-theoretic models for
complex and uncertain domains. Given a query, a logical proof is produced if
possible; if not, an influence diagram based on the query and the knowledge of
the decision domain is produced and subsequently solved. A uniform declarative,
first-order, knowledge representation is combined with a set of integrated
inference procedures for logical, probabilistic, and decision-theoretic
reasoning.
Comment: Appears in Proceedings of the Third Conference on Uncertainty in Artificial Intelligence (UAI 1987).
A Method for Using Belief Networks as Influence Diagrams
This paper demonstrates a method for using belief-network algorithms to solve
influence diagram problems. In particular, both exact and approximation
belief-network algorithms may be applied to solve influence-diagram problems.
More generally, knowing the relationship between belief-network and
influence-diagram problems may be useful in the design and development of more
efficient influence-diagram algorithms.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI 1988).
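The core of the reduction is to rescale the utility into [0, 1] and treat it as P(u = 1 | parents), so that the best decision is the one maximizing P(u = 1 | d) in the resulting belief network. A minimal sketch with one chance node and one decision (the two-node model and its numbers are illustrative, not the paper's example):

```python
# Chance node C with prior P(c).
p_c = {"rain": 0.3, "dry": 0.7}

# Utility U(c, d), already rescaled to [0, 1] so it can serve as P(u=1 | c, d).
u = {("rain", "umbrella"): 0.8, ("rain", "none"): 0.0,
     ("dry",  "umbrella"): 0.7, ("dry",  "none"): 1.0}

def p_u_given(d):
    # P(u=1 | d) = sum_c P(c) * P(u=1 | c, d): the expected (rescaled) utility.
    return sum(p_c[c] * u[(c, d)] for c in p_c)

best = max(["umbrella", "none"], key=p_u_given)
```

Because the rescaling is affine, maximizing P(u = 1 | d) picks the same decision as maximizing the original expected utility, which is what lets any belief-network inference algorithm, exact or approximate, solve the decision problem.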
Causal Networks: Semantics and Expressiveness
Dependency knowledge of the form "x is independent of y once z is known"
invariably obeys the four graphoid axioms; examples include probabilistic and
database dependencies. Often, such knowledge can be represented efficiently
with graphical structures such as undirected graphs and directed acyclic graphs
(DAGs). In this paper we show that the graphical criterion called d-separation
is a sound rule for reading independencies from any DAG based on a causal input
list drawn from a graphoid. The rule may be extended to cover DAGs that
represent functional dependencies as well as conditional dependencies.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI 1988).
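The d-separation criterion can be tested mechanically. A minimal sketch using the moralized-ancestral-graph formulation, which is equivalent to Pearl's path-based definition (the DAG encoding, child mapped to its parent list, is an assumption of this sketch):

```python
from itertools import combinations

def ancestors(dag, nodes):
    # All ancestors of `nodes`, including the nodes themselves.
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), ()):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, xs, ys, zs):
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # Moralize the ancestral graph: undirected parent-child and co-parent edges.
    adj = {v: set() for v in keep}
    for child in keep:
        ps = [p for p in dag.get(child, ()) if p in keep]
        for p in ps:
            adj[child].add(p); adj[p].add(child)
        for a, b in combinations(ps, 2):
            adj[a].add(b); adj[b].add(a)
    # Delete Z, then test whether any X can still reach any Y.
    blocked = set(zs)
    stack = [x for x in xs if x not in blocked]
    seen = set(stack)
    while stack:
        v = stack.pop()
        if v in ys:
            return False
        for w in adj[v] - blocked - seen:
            seen.add(w)
            stack.append(w)
    return True

# Classic collider example: A -> C <- B.
dag = {"C": ["A", "B"]}
print(d_separated(dag, {"A"}, {"B"}, set()))   # True: the collider blocks the path
print(d_separated(dag, {"A"}, {"B"}, {"C"}))   # False: conditioning on C opens it
```

The collider behavior shown at the end is exactly the feature that distinguishes d-separation in DAGs from plain separation in undirected graphs.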
ARCO1: An Application of Belief Networks to the Oil Market
Belief networks are a new, potentially important, class of knowledge-based
models. ARCO1, currently under development at the Atlantic Richfield Company
(ARCO) and the University of Southern California (USC), is the most advanced
reported implementation of these models in a financial forecasting setting.
ARCO1's underlying belief network models the variables believed to have an
impact on the crude oil market. A pictorial market model, developed on a
Mac II, facilitates consensus among the members of the forecasting team. The system
forecasts crude oil prices via Monte Carlo analyses of the network. Several
different models of the oil market have been developed; the system's ability to
be updated quickly highlights its flexibility.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI 1991).
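Forecasting by Monte Carlo analysis of a belief network amounts to ancestral sampling: draw each node given its parents, then average the quantity of interest. A minimal sketch with a two-node toy network (the supply/price numbers are illustrative assumptions, not ARCO1's model):

```python
import random

random.seed(0)

def sample_once():
    # Chance node: supply is "tight" with probability 0.3.
    tight = random.random() < 0.3
    # Child node: crude price ($/bbl) depends on the sampled supply state.
    return random.gauss(24.0, 2.0) if tight else random.gauss(18.0, 2.0)

n = 100_000
draws = [sample_once() for _ in range(n)]
mean = sum(draws) / n  # analytically, 0.3*24 + 0.7*18 = 19.8
```

Because each forecast is just repeated forward simulation, swapping in a revised network requires no change to the inference machinery, which is the flexibility the abstract highlights.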
Computation of Variances in Causal Networks
The causal (belief) network is a well-known graphical structure for
representing independencies in a joint probability distribution. Exact and
approximation methods for probabilistic inference in causal networks often
treat the conditional probabilities stored in the network as certain values.
However, if one takes either a subjectivistic or
a limiting frequency approach to probability, one can never be certain of
probability values. An algorithm for probabilistic inference should not only be
capable of reporting the inferred probabilities; it should also be capable of
reporting the uncertainty in these probabilities relative to the uncertainty in
the probabilities which are stored in the network. In section 2 of this paper a
method is given for determining the prior variances of the probabilities of all
the nodes. Section 3 contains an approximation method for determining the
variances in inferred probabilities.
Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI 1990).
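The idea that stored probabilities are themselves uncertain can be made concrete by modeling a network parameter as a Beta random variable and propagating its variance to an inferred probability. A minimal sketch on a two-node network a -> b (the Beta parameters and conditionals are illustrative assumptions, not the paper's method in full):

```python
def beta_mean_var(alpha, beta):
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Uncertain prior: P(a) ~ Beta(3, 7), i.e. mean 0.3.
m, v = beta_mean_var(3, 7)

# Fixed conditionals: P(b|a) = 0.9, P(b|~a) = 0.2.
p_b_given_a, p_b_given_not_a = 0.9, 0.2

# P(b) = t*0.9 + (1-t)*0.2 is linear in t = P(a), so mean and variance
# transfer exactly: E[P(b)] = 0.2 + 0.7*E[t], Var[P(b)] = 0.7**2 * Var[t].
mean_pb = p_b_given_not_a + (p_b_given_a - p_b_given_not_a) * m
var_pb = (p_b_given_a - p_b_given_not_a) ** 2 * v
```

Linearity makes this toy case exact; for general networks the inferred probability is a ratio of polynomials in the parameters, which is why the paper needs an approximation method for the inferred variances.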
Using Potential Influence Diagrams for Probabilistic Inference and Decision Making
The potential influence diagram is a generalization of the standard
"conditional" influence diagram, a directed network representation for
probabilistic inference and decision analysis [Ndilikilikesha, 1991]. It allows
efficient inference calculations corresponding exactly to those on undirected
graphs. In this paper, we explore the relationship between potential and
conditional influence diagrams and provide insight into the properties of the
potential influence diagram. In particular, we show how to convert a potential
influence diagram into a conditional influence diagram, and how to view the
potential influence diagram operations in terms of the conditional influence
diagram.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
Dynamic Network Updating Techniques For Diagnostic Reasoning
A new probabilistic network construction system, DYNASTY, is proposed for
diagnostic reasoning given variables whose probabilities change over time.
Diagnostic reasoning is formulated as a sequential stochastic process, and is
modeled using influence diagrams. Given a set O of observations, DYNASTY
creates an influence diagram in order to devise the best action given O.
Sensitivity analyses are conducted to determine if the best network has been
created, given the uncertainty in network parameters and topology. DYNASTY uses
an equivalence class approach to provide decision thresholds for the
sensitivity analysis. This equivalence-class approach to diagnostic reasoning
differentiates diagnoses only if the required actions are different. A set of
network-topology updating algorithms is proposed for dynamically updating the
network when necessary.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI 1991).
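The equivalence-class idea, distinguishing diagnoses only when they call for different actions, reduces to partitioning faults by their required action. A minimal sketch (the fault/action table is an illustrative assumption, not DYNASTY's domain):

```python
from collections import defaultdict

# Map each candidate diagnosis to the action it would require.
action_for = {
    "dead_cell":       "replace_battery",
    "sulfated_plates": "replace_battery",  # same remedy -> same equivalence class
    "bad_starter":     "replace_starter",
}

# Partition diagnoses into equivalence classes by required action.
classes = defaultdict(set)
for diagnosis, action in action_for.items():
    classes[action].add(diagnosis)
```

Decision thresholds from the sensitivity analysis then only need to separate these classes, not every individual diagnosis, which can sharply cut the number of distinctions the network must support.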
Mixtures of Gaussians and Minimum Relative Entropy Techniques for Modeling Continuous Uncertainties
Problems of probabilistic inference and decision making under uncertainty
commonly involve continuous random variables. Often these are discretized to a
few points, to simplify assessments and computations. An alternative
approximation is to fit analytically tractable continuous probability
distributions. This approach has potential simplicity and accuracy advantages,
especially if variables can be transformed first. This paper shows how a
minimum relative entropy criterion can drive both transformation and fitting,
illustrating with a power and logarithm family of transformations and mixtures
of Gaussian (normal) distributions, which allow use of efficient influence
diagram methods. The fitting procedure in this case is the well-known EM
algorithm. The selection of the number of components in a fitted mixture
distribution is automated with an objective that trades off accuracy and
computational cost.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
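The fitting procedure named above, EM for a Gaussian mixture, is short enough to sketch in full for the one-dimensional, two-component case (the data, initialization, and fixed iteration count are illustrative; the paper's automated component-count selection is omitted):

```python
import math
import random

random.seed(1)
# Synthetic data: two well-separated Gaussian clusters.
data = [random.gauss(0, 1) for _ in range(300)] + \
       [random.gauss(5, 1) for _ in range(300)]

def pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Crude initialization from the data range.
w = [0.5, 0.5]
mu = [min(data), max(data)]
sigma = [1.0, 1.0]

for _ in range(50):
    # E-step: responsibility of each component for each point.
    r = []
    for x in data:
        p = [w[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(p)
        r.append([pk / s for pk in p])
    # M-step: re-estimate weights, means, and standard deviations.
    for k in range(2):
        nk = sum(ri[k] for ri in r)
        w[k] = nk / len(data)
        mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
        sigma[k] = math.sqrt(sum(ri[k] * (x - mu[k]) ** 2
                                 for ri, x in zip(r, data)) / nk)
```

Each iteration is guaranteed not to decrease the data log-likelihood; selecting the number of components would wrap this loop in a search that penalizes extra components, trading accuracy against computational cost as the abstract describes.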
A Graph-Based Inference Method for Conditional Independence
The graphoid axioms for conditional independence, originally described by
Dawid [1979], are fundamental to probabilistic reasoning [Pearl, 1988]. Such
axioms provide a mechanism for manipulating conditional independence assertions
without resorting to their numerical definition. This paper explores a
representation for independence statements using multiple undirected graphs and
some simple graphical transformations. The independence statements derivable in
this system are equivalent to those obtainable by the graphoid axioms.
Therefore, this is a purely graphical proof technique for conditional
independence.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI 1991).
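Manipulating independence assertions "without resorting to their numerical definition" can be done by closing a statement set under the semi-graphoid axioms: symmetry, decomposition, weak union, and contraction. A minimal sketch, with each statement I(X, Z, Y) encoded as a triple of frozensets (the encoding is an assumption of this sketch; the paper's method is graphical, not axiomatic):

```python
from itertools import combinations

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def graphoid_closure(stmts):
    closed = set(stmts)
    while True:
        new = set()
        for (x, z, y) in closed:
            new.add((y, z, x))                    # symmetry
            for w in subsets(y):
                if w and w != y:
                    new.add((x, z, w))            # decomposition
                    new.add((x, z | (y - w), w))  # weak union
        for (x, z, y) in closed:                  # contraction:
            for (x2, z2, w) in closed:            # I(X,Z,Y) & I(X, Z+Y, W)
                if x2 == x and z2 == z | y:       #   => I(X, Z, Y+W)
                    new.add((x, z, y | w))
        if new <= closed:
            return closed
        closed |= new

F = frozenset
# From I(a, {}, {b,c}) the axioms derive, e.g., I(a, {b}, {c}) by weak union.
got = graphoid_closure({(F({"a"}), F(), F({"b", "c"}))})
```

Over a fixed universe of variables the statement space is finite, so the fixed-point loop terminates; the paper's contribution is a purely graphical system whose derivable statements coincide with this axiomatic closure.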