A Bayesian Method for Causal Modeling and Discovery Under Selection
This paper describes a Bayesian method for learning causal networks using
samples that were selected in a non-random manner from a population of
interest. Examples of data obtained by non-random sampling include convenience
samples and case-control data in which a fixed number of samples with and
without some condition is collected; such data are not uncommon. The paper
describes a method for combining data under selection with prior beliefs in
order to derive a posterior probability for a model of the causal processes
that are generating the data in the population of interest. The priors include
beliefs about the nature of the non-random sampling procedure. Although exact
application of the method would be computationally intractable for most
realistic datasets, efficient special-case and approximation methods are
discussed. Finally, the paper describes how to combine learning under selection
with previous methods for learning from observational and experimental data
that are obtained on random samples of the population of interest. The net
result is a Bayesian methodology that supports causal modeling and discovery
from a rich mixture of different types of data.
Comment: Appears in Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI2000).
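The core idea of modeling selection explicitly can be illustrated with a minimal discrete sketch: add a selection indicator S whose distribution encodes the non-random sampling procedure, then condition on S = 1 (the case was sampled). The two-state setup and parameter names below are illustrative, not the paper's; the paper additionally places priors over the selection mechanism itself.

```python
def selection_adjusted(p_x, p_s_given_x):
    """P(X = x | S = 1) for a single discrete variable X.

    p_x:         dict mapping each state x to its population prior P(X = x)
    p_s_given_x: dict mapping each state x to P(S = 1 | X = x), i.e. the
                 probability that a case with that state enters the sample
    """
    # Joint P(X = x, S = 1), then renormalize over the sampled population.
    joint = {x: p_x[x] * p_s_given_x[x] for x in p_x}
    z = sum(joint.values())
    return {x: v / z for x, v in joint.items()}
```

For example, if both states are equally likely in the population but state 1 is nine times more likely to be sampled, the sample-conditional distribution is heavily skewed toward state 1, which is exactly the distortion the method's priors over the sampling procedure are meant to undo.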
A Method for Using Belief Networks as Influence Diagrams
This paper demonstrates a method for using belief-network algorithms to solve
influence diagram problems. In particular, both exact and approximation
belief-network algorithms may be applied to solve influence-diagram problems.
More generally, knowing the relationship between belief-network and
influence-diagram problems may be useful in the design and development of more
efficient influence-diagram algorithms.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI1988).
An Evaluation of an Algorithm for Inductive Learning of Bayesian Belief Networks Using Simulated Data Sets
Bayesian learning of belief networks (BLN) is a method for automatically
constructing belief networks (BNs) from data using search and Bayesian scoring
techniques. K2 is a particular instantiation of the method that implements a
greedy search strategy. To evaluate the accuracy of K2, we randomly generated a
number of BNs and for each of those we simulated data sets. K2 was then used to
induce the generating BNs from the simulated data. We examine the performance
of the program, and the factors that influence it. We also present a simple BN
model, developed from our results, which predicts the accuracy of K2, when
given various characteristics of the data set.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994).
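The greedy search with Bayesian scoring that K2 instantiates can be sketched as follows, using the Cooper–Herskovits closed-form score for a single node. The toy data layout and variable names are ours, and real implementations index the counts far more efficiently:

```python
from math import lgamma
from itertools import product

def k2_local_score(data, child, parents, r):
    """Log of the Cooper-Herskovits K2 score for one node.

    data:    list of dicts mapping variable name -> state index
    child:   the variable being scored
    parents: tuple of candidate parent names
    r:       dict mapping variable name -> number of states
    """
    score = 0.0
    for j in product(*[range(r[p]) for p in parents]):
        rows = [row for row in data
                if all(row[p] == s for p, s in zip(parents, j))]
        rc = r[child]
        # log[ (r-1)! / (N_ij + r - 1)! ] for this parent configuration
        score += lgamma(rc) - lgamma(len(rows) + rc)
        for k in range(rc):
            n_ijk = sum(1 for row in rows if row[child] == k)
            score += lgamma(n_ijk + 1)  # log N_ijk!
    return score

def k2_greedy_parents(data, child, candidates, r, max_parents=2):
    """Greedily add, one at a time, the parent that most improves the score."""
    parents = ()
    best = k2_local_score(data, child, parents, r)
    while len(parents) < max_parents:
        scored = [(k2_local_score(data, child, parents + (c,), r), c)
                  for c in candidates if c not in parents]
        if not scored:
            break
        s, c = max(scored)
        if s <= best:
            break  # no candidate improves the score; stop, as K2 does
        best, parents = s, parents + (c,)
    return parents, best
```

On data where one variable deterministically copies another, the greedy search recovers the copying variable as the sole parent, which is the kind of structure-recovery behavior the evaluation above measures at scale.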
A Structurally and Temporally Extended Bayesian Belief Network Model: Definitions, Properties, and Modeling Techniques
We developed the language of Modifiable Temporal Belief Networks (MTBNs) as a
structural and temporal extension of Bayesian Belief Networks (BNs) to
facilitate normative temporal and causal modeling under uncertainty. In this
paper we present definitions of the model, its components, and its fundamental
properties. We also discuss how to represent various types of temporal
knowledge, with an emphasis on hybrid temporal-explicit time modeling, dynamic
structures, avoiding causal temporal inconsistencies, and dealing with models
that simultaneously involve actions (decisions) and causal and non-causal
associations. We examine the relationships among BNs, Modifiable Belief
Networks, and MTBNs with a single temporal granularity, and suggest areas of
application suitable to each one of them.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI1996).
A Bayesian Network Scoring Metric That Is Based On Globally Uniform Parameter Priors
We introduce a new Bayesian network (BN) scoring metric called the Global
Uniform (GU) metric. This metric is based on a particular type of default
parameter prior. Such priors may be useful when a BN developer is not willing
or able to specify domain-specific parameter priors. The GU parameter prior
specifies that every prior joint probability distribution P consistent with a
BN structure S is considered to be equally likely. Distribution P is consistent
with S if P includes just the set of independence relations defined by S. We
show that the GU metric addresses some undesirable behavior of the BDeu and K2
Bayesian network scoring metrics, which also use particular forms of default
parameter priors. A closed form formula for computing GU for special classes of
BNs is derived. Efficiently computing GU for an arbitrary BN remains an open
problem.
Comment: Appears in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI2002).
A Bayesian Network Classifier that Combines a Finite Mixture Model and a Naive Bayes Model
In this paper we present a new Bayesian network model for classification that
combines the naive-Bayes (NB) classifier and the finite-mixture (FM)
classifier. The resulting classifier aims at relaxing the strong assumptions on
which the two component models are based, in an attempt to improve on their
classification performance, both in terms of accuracy and in terms of
calibration of the estimated probabilities. The proposed classifier is obtained
by superimposing a finite mixture model on the set of feature variables of a
naive Bayes model. We present experimental results that compare the predictive
performance on real datasets of the new classifier with the predictive
performance of the NB classifier and the FM classifier.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
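The combined structure described above can be sketched as a joint probability computation: a hidden mixture variable M is superimposed as an extra parent of every feature, so each feature depends on both the class C and M, and M is summed out. The parameter layout below is illustrative, not the paper's notation, and learning the parameters (e.g. via EM) is omitted:

```python
from math import prod

def combined_prob(x, c, p_c, p_m, p_feat):
    """P(C = c, X = x) under the superimposed NB + finite-mixture structure.

    x:      tuple of feature values
    p_c:    list of class priors P(C = c)
    p_m:    list of mixture weights P(M = m) for the hidden variable
    p_feat: p_feat[i](c, m, x_i) returns P(X_i = x_i | C = c, M = m)
    """
    return p_c[c] * sum(
        p_m[m] * prod(p_feat[i](c, m, xi) for i, xi in enumerate(x))
        for m in range(len(p_m))
    )
```

With a single mixture component (p_m = [1.0]) this collapses to the plain naive Bayes joint, which is one way to see the new model as a relaxation of the NB independence assumptions.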
Updating Probabilities in Multiply-Connected Belief Networks
This paper focuses on probability updates in multiply-connected belief
networks. Pearl has designed the method of conditioning, which enables us to
apply his algorithm for belief updates in singly-connected networks to
multiply-connected belief networks by selecting a loop-cutset for the network
and instantiating these loop-cutset nodes. We discuss conditions that need to
be satisfied by the selected nodes. We present a heuristic algorithm for
finding a loop-cutset that satisfies these conditions.
Comment: Appears in Proceedings of the Fourth Conference on Uncertainty in Artificial Intelligence (UAI1988).
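A simplified sketch of heuristic loop-cutset selection follows. It works on the undirected skeleton: repeatedly strip nodes that lie on no loop, then greedily instantiate a high-degree node until no loop remains. The eligibility test used here (at most one parent in the DAG) is a stand-in for the paper's admissibility conditions on arc directions, not their exact statement:

```python
from collections import defaultdict

def loop_cutset(edges, n_parents):
    """Greedy heuristic loop-cutset selection (a sketch, not the exact
    algorithm of the paper).

    edges:     iterable of undirected (u, v) pairs (the network skeleton)
    n_parents: dict mapping node -> its number of parents in the DAG;
               used as a simplified eligibility condition
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cutset = []
    while True:
        # Repeatedly strip degree <= 1 nodes: they cannot lie on any loop.
        leaves = [n for n in adj if len(adj[n]) <= 1]
        while leaves:
            for n in leaves:
                for m in adj[n]:
                    if m in adj:
                        adj[m].discard(n)
                del adj[n]
            leaves = [n for n in adj if len(adj[n]) <= 1]
        if not adj:
            return cutset
        # Pick an eligible node touching the most remaining loop edges.
        eligible = [n for n in adj if n_parents.get(n, 0) <= 1] or list(adj)
        pick = max(eligible, key=lambda n: len(adj[n]))
        cutset.append(pick)
        for m in adj[pick]:
            adj[m].discard(pick)
        del adj[pick]
```

On the diamond network A -> B, A -> C, B -> D, C -> D, instantiating any one of A, B, or C cuts the single loop, so the heuristic returns a cutset of size one.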
A Multivariate Discretization Method for Learning Bayesian Networks from Mixed Data
In this paper we address the problem of discretization in the context of
learning Bayesian networks (BNs) from data containing both continuous and
discrete variables. We describe a new technique for multivariate
discretization, whereby each continuous variable is discretized while taking
into account its interaction with the other variables. The technique is based
on the use of a Bayesian scoring metric that scores the discretization policy
for a continuous variable given a BN structure and the observed data. Since the
metric is relative to the BN structure currently being evaluated, the
discretization of a variable needs to be dynamically adjusted as the BN
structure changes.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI1998).
Causal Discovery from a Mixture of Experimental and Observational Data
This paper describes a Bayesian method for combining an arbitrary mixture of
observational and experimental data in order to learn causal Bayesian networks.
Observational data are passively observed. Experimental data, such as those
produced by randomized controlled trials, result from the experimenter
manipulating one or more variables (typically randomly) and observing the
states of other variables. The paper presents a Bayesian method for learning
the causal structure and parameters of the underlying causal process that is
generating the data, given that (1) the data contains a mixture of
observational and experimental case records, and (2) the causal process is
modeled as a causal Bayesian network. This learning method was applied using as
input various mixtures of experimental and observational data that were
generated from the ALARM causal Bayesian network. In these experiments, the
absolute and relative quantities of experimental and observational data were
varied systematically. For each of these training datasets, the learning method
was applied to predict the causal structure and to estimate the causal
parameters that exist among randomly selected pairs of nodes in ALARM that are
not confounded. The paper reports how these structure predictions and parameter
estimates compare with the true causal structures and parameters as given by
the ALARM network.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
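The key accounting step when mixing the two kinds of data can be sketched as follows: a case contributes to the counts for a node's conditional distribution only when that node was not manipulated in the case, since manipulation overrides the node's own causal mechanism; manipulated variables still condition their children normally. The data layout below is illustrative:

```python
def cpt_counts(cases, child, parents):
    """Sufficient-statistic counts for one node's CPT from mixed data.

    cases:   list of (values, manipulated) pairs, where values maps each
             variable to its observed state and manipulated is the set of
             variables the experimenter set in that case
    child:   the node whose CPT is being counted
    parents: ordered list of the child's parents
    """
    counts = {}
    for values, manipulated in cases:
        if child in manipulated:
            continue  # the mechanism for child was overridden in this case
        key = (tuple(values[p] for p in parents), values[child])
        counts[key] = counts.get(key, 0) + 1
    return counts
```

With counts assembled this way, purely observational data is the special case where every manipulated set is empty, which is how one learning method can consume an arbitrary mixture of both record types.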
An Empirical Evaluation of a Randomized Algorithm for Probabilistic Inference
In recent years, researchers in decision analysis and artificial intelligence
(AI) have used Bayesian belief networks to build models of expert opinion.
Using standard methods drawn from the theory of computational complexity,
workers in the field have shown that the problem of probabilistic inference in
belief networks is difficult and almost certainly intractable. KNET, a
software environment for constructing knowledge-based systems within the
axiomatic framework of decision theory, contains a randomized approximation
scheme for probabilistic inference. The algorithm can, in many circumstances,
perform efficient approximate inference in large and richly interconnected
models of medical diagnosis. Unlike previously described stochastic algorithms
for probabilistic inference, the randomized approximation scheme computes a
priori bounds on running time by analyzing the structure and contents of the
belief network. In this article, we describe a randomized algorithm for
probabilistic inference and analyze its performance mathematically. Then, we
devote the major portion of the paper to a discussion of the algorithm's
empirical behavior. The results indicate that the generation of good trials
(that is, trials whose distribution closely matches the true distribution),
rather than the computation of numerous mediocre trials, dominates the
performance of stochastic simulation. Key words: probabilistic inference,
belief networks, stochastic simulation, computational complexity theory,
randomized algorithms.
Comment: Appears in Proceedings of the Fifth Conference on Uncertainty in Artificial Intelligence (UAI1989).
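The paper's randomized scheme is a specific algorithm with a priori run-time bounds; as a generic illustration of the stochastic-simulation family it belongs to, plain likelihood weighting on a two-node network can be sketched as follows (the network and its parameters are made up for the example):

```python
import random

def likelihood_weighting(n_trials, evidence, seed=0):
    """Likelihood weighting on a tiny two-node network A -> B.

    P(A=1) = 0.3;  P(B=1 | A=1) = 0.9;  P(B=1 | A=0) = 0.2.
    Estimates P(A=1 | evidence): evidence variables are clamped rather
    than sampled, and each trial is weighted by the likelihood of the
    evidence it did not sample.
    """
    rng = random.Random(seed)
    p_a = 0.3
    p_b_given_a = {1: 0.9, 0: 0.2}
    total = weighted = 0.0
    for _ in range(n_trials):
        if 'A' in evidence:
            a = evidence['A']
            w = p_a if a == 1 else 1 - p_a
        else:
            a, w = (1 if rng.random() < p_a else 0), 1.0
        if 'B' in evidence:
            pb = p_b_given_a[a]
            w *= pb if evidence['B'] == 1 else 1 - pb
        total += w
        weighted += w * a
    return weighted / total

# Exact answer for comparison: P(A=1 | B=1) = 0.27 / (0.27 + 0.14)
```

The quality of each trial here is its weight: trials sampled from regions that make the evidence likely carry most of the estimate, which matches the abstract's finding that generating good trials, rather than many mediocre ones, dominates performance.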