A Probabilistic Reasoning Environment
A framework is presented for a computational theory of probabilistic
argument. The Probabilistic Reasoning Environment encodes knowledge at three
levels. At the deepest level is a set of schemata encoding the system's domain
knowledge. This knowledge is used to build a set of second-level arguments,
which are structured for efficient recapture of the knowledge used to construct
them. Finally, at the top level is a Bayesian network constructed from the
arguments. The system is designed to facilitate not just propagation of beliefs
and assimilation of evidence, but also the dynamic process of constructing a
belief network, evaluating its adequacy, and revising it when necessary.
Comment: Appears in Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence (UAI 1990).
Conflict and Surprise: Heuristics for Model Revision
Any probabilistic model of a problem is based on assumptions which, if
violated, invalidate the model. Users of probability-based decision aids need
to be alerted when cases arise that are not covered by the aid's model.
Diagnosis of model failure is also necessary to control dynamic model
construction and revision. This paper presents a set of decision-theoretically
motivated heuristics for diagnosing situations in which a model is likely to
provide an inadequate representation of the process being modeled.
Comment: Appears in Proceedings of the Seventh Conference on Uncertainty in Artificial Intelligence (UAI 1991).
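The idea of flagging cases a model does not cover can be illustrated with a simple conflict measure in the spirit of the heuristics above. The measure below (findings that are individually likely but jointly rare suggest model failure) is a common Bayesian-network conflict statistic, not necessarily the paper's exact heuristic; all numbers are invented.

```python
import math

# Illustrative conflict measure (not necessarily the paper's exact
# heuristic): evidence that is surprising under the model has a joint
# probability much lower than it would have if the findings were
# independent. A positive value flags possible model failure.
def conflict(p_individual, p_joint):
    """log of (product of marginal finding probabilities / joint probability)."""
    prod = math.prod(p_individual)
    return math.log(prod / p_joint)

# Hypothetical findings: each individually common, but jointly rare
# under the model -> positive conflict suggests the case may fall
# outside the model's assumptions.
print(conflict([0.6, 0.7], p_joint=0.05))   # positive: findings clash
print(conflict([0.6, 0.7], p_joint=0.45))   # negative: findings cohere
```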
Sensitivity Analysis for Probability Assessments in Bayesian Networks
When eliciting probability models from experts, knowledge engineers may
compare the results of the model with expert judgment on test scenarios, then
adjust model parameters to bring the behavior of the model more in line with
the expert's intuition. This paper presents a methodology for analytic
computation of sensitivity values to measure the impact of small changes in a
network parameter on a target probability value or distribution. These values
can be used to guide knowledge elicitation. They can also be used in a gradient
descent algorithm to estimate parameter values that maximize a measure of
goodness-of-fit to both local and holistic probability assessments.
Comment: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI 1993).
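The paper computes sensitivity values analytically; a finite-difference approximation on a hypothetical two-node network (names and probabilities invented) conveys what such a value measures: the rate of change of a target posterior with respect to one CPT entry.

```python
# Hypothetical 2-node network: Disease -> Test (names are illustrative).
# We perturb one CPT entry, P(T=1 | D=1), and observe the effect on the
# target posterior P(D=1 | T=1). The paper derives these sensitivities
# analytically; here a finite difference stands in for illustration.
def posterior_d_given_t(p_d, p_t_given_d, p_t_given_nd):
    # P(D=1 | T=1) by Bayes' rule.
    num = p_d * p_t_given_d
    den = num + (1 - p_d) * p_t_given_nd
    return num / den

eps = 1e-6
base = posterior_d_given_t(0.01, 0.95, 0.05)
bumped = posterior_d_given_t(0.01, 0.95 + eps, 0.05)
sensitivity = (bumped - base) / eps   # d P(D=1|T=1) / d P(T=1|D=1)
print(round(base, 4), round(sensitivity, 4))
```

A small sensitivity here tells the knowledge engineer that eliciting this parameter precisely matters little for the target; a large one would make it a priority.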
Bayesian Learning of Loglinear Models for Neural Connectivity
This paper presents a Bayesian approach to learning the connectivity
structure of a group of neurons from data on configuration frequencies. A major
objective of the research is to provide statistical tools for detecting changes
in firing patterns with changing stimuli. Our framework is not restricted to
the well-understood case of pair interactions, but generalizes the Boltzmann
machine model to allow for higher order interactions. The paper applies a
Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over
connectivity structures and uses Laplace's method to approximate posterior
probabilities of structures. Performance of the methods was tested on synthetic
data. The models were also applied to data obtained by Vaadia on multi-unit
recordings of several neurons in the visual cortex of a rhesus monkey in two
different attentional states. Results confirmed the experimenters' conjecture
that different attentional states were associated with different interaction
structures.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
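The mechanics of MC3 can be sketched as a Metropolis walk over structures. The placeholder `score` below stands in for the Laplace-approximated log posterior of a connectivity structure; its values are made up for illustration and do not come from the paper.

```python
import itertools
import math
import random

random.seed(0)

# Schematic MC3: a Metropolis walk over connectivity structures,
# represented as subsets of possible pairwise interactions among 3
# units. score() stands in for the Laplace-approximated log posterior
# of a structure; the numbers are invented.
pairs = list(itertools.combinations(range(3), 2))   # (0,1), (0,2), (1,2)

def score(structure):
    # Placeholder log posterior: favours structures containing (0, 1),
    # with a complexity penalty per interaction.
    return (2.0 if (0, 1) in structure else 0.0) - 0.5 * len(structure)

def mc3(n_steps=5000):
    current = frozenset()
    visits = {}
    for _ in range(n_steps):
        # Propose toggling one randomly chosen interaction (symmetric).
        edge = random.choice(pairs)
        proposal = current ^ {edge}
        if math.log(random.random()) < score(proposal) - score(current):
            current = proposal
        visits[current] = visits.get(current, 0) + 1
    return visits

visits = mc3()
best = max(visits, key=visits.get)
print(sorted(best))   # most-visited structure
```

Visit frequencies approximate posterior structure probabilities, so the chain spends most of its time on high-scoring structures.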
Network Fragments: Representing Knowledge for Constructing Probabilistic Models
In most current applications of belief networks, domain knowledge is
represented by a single belief network that applies to all problem instances in
the domain. In more complex domains, problem-specific models must be
constructed from a knowledge base encoding probabilistic relationships in the
domain. Most work in knowledge-based model construction takes the rule as the
basic unit of knowledge. We present a knowledge representation framework that
permits the knowledge base designer to specify knowledge in larger semantically
meaningful units which we call network fragments. Our framework provides for
representation of asymmetric independence and canonical intercausal
interaction. We discuss the combination of network fragments to form
problem-specific models to reason about particular problem instances. The
framework is illustrated using examples from the domain of military situation
awareness.
Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI 1997).
Network Engineering for Complex Belief Networks
Like any large system development effort, the construction of a complex
belief network model requires systems engineering to manage the design and
construction process. We propose a rapid prototyping approach to network
engineering. We describe criteria for identifying network modules and the use
of "stubs" to represent not-yet-constructed modules. We propose an
object-oriented representation for belief networks which captures the semantics of the
problem in addition to conditional independencies and probabilities. Methods
for evaluating complex belief network models are discussed. The ideas are
illustrated with examples from a large belief network construction problem in
the military intelligence domain.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
An Application of Uncertain Reasoning to Requirements Engineering
This paper examines the use of Bayesian Networks to tackle one of the tougher
problems in requirements engineering, translating user requirements into system
requirements. The approach taken is to model domain knowledge as Bayesian
Network fragments that are glued together to form a complete view of the domain
specific system requirements. User requirements are introduced as evidence, and
belief propagation is used to determine which system requirements the user
requirements indicate. This concept has been demonstrated in the development of
a system specification, and the results are presented here.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999).
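The evidence-propagation step can be illustrated on a toy chain. The node names, structure, and probabilities below are invented for illustration; the posterior is computed by brute-force enumeration rather than a propagation algorithm.

```python
# Toy illustration (names and numbers are invented): a user requirement
# U influences a system requirement S, which influences a component
# choice C. Entering U as evidence and computing P(S=1 | U) mirrors the
# paper's use of belief propagation, here done by enumeration.
p_u = 0.5                           # prior on the user requirement
p_s_given_u = {1: 0.9, 0: 0.2}      # P(S=1 | U)
p_c_given_s = {1: 0.8, 0: 0.1}      # P(C=1 | S)

def joint(u, s, c):
    pu = p_u if u else 1 - p_u
    ps = p_s_given_u[u] if s else 1 - p_s_given_u[u]
    pc = p_c_given_s[s] if c else 1 - p_c_given_s[s]
    return pu * ps * pc

def posterior_s(evidence_u):
    # P(S=1 | U=evidence_u), summing the joint over the free variable C.
    num = sum(joint(evidence_u, 1, c) for c in (0, 1))
    den = sum(joint(evidence_u, s, c) for s in (0, 1) for c in (0, 1))
    return num / den

print(posterior_s(1))   # 0.9: system requirement strongly indicated
print(posterior_s(0))   # 0.2: weakly indicated without the user requirement
```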
Of Starships and Klingons: Bayesian Logic for the 23rd Century
Intelligent systems in an open world must reason about many interacting
entities related to each other in diverse ways and having uncertain features
and relationships. Traditional probabilistic languages lack the expressive
power to handle relational domains. Classical first-order logic is sufficiently
expressive, but lacks a coherent plausible reasoning capability. Recent years
have seen the emergence of a variety of approaches to integrating first-order
logic, probability, and machine learning. This paper presents Multi-entity
Bayesian networks (MEBN), a formal system that integrates First Order Logic
(FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks
to allow representation of graphical models with repeated sub-structures, and
can express a probability distribution over models of any consistent, finitely
axiomatizable first-order theory. We present the logic using an example
inspired by the Paramount series Star Trek.
Comment: Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI 2005).
Constructing Situation Specific Belief Networks
This paper describes a process for constructing situation-specific belief
networks from a knowledge base of network fragments. A situation-specific
network is a minimal query-complete network constructed from a knowledge base
in response to a query for the probability distribution on a set of target
variables given evidence and context variables. We present definitions of query
completeness and situation-specific networks. We describe conditions on the
knowledge base that guarantee query completeness. The relationship of our work
to earlier work on knowledge-based model construction (KBMC) is also discussed.
Comment: Appears in Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence (UAI 1998).
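The core retrieval step can be sketched as a graph traversal: starting from the query's target and evidence variables, pull in only the fragments (here, each node's parent set) needed to answer the query. The fragment names and structure below are invented for illustration.

```python
# Minimal sketch of situation-specific construction: from a knowledge
# base of fragments (child -> parents), keep only the nodes needed to
# answer a query, i.e. the ancestors of the target and evidence
# variables. Fragment names are invented.
kb = {
    "TrackReport": ["VehicleType"],
    "VehicleType": ["UnitType"],
    "UnitType":    [],
    "Weather":     [],            # irrelevant to the query below
    "SensorNoise": ["Weather"],
}

def situation_specific_nodes(kb, targets, evidence):
    needed, stack = set(), list(targets) + list(evidence)
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(kb[node])   # pull in each parent's fragment
    return needed

nodes = situation_specific_nodes(kb, targets={"UnitType"},
                                 evidence={"TrackReport"})
print(sorted(nodes))   # Weather and SensorNoise are pruned away
```

A real construction algorithm must also track evidence-driven pruning and context variables; this sketch shows only the ancestor-closure idea.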
Representing and Combining Partially Specified CPTs
This paper extends previous work with network fragments and
situation-specific network construction. We formally define the asymmetry
network, an alternative representation for a conditional probability table. We
also present an object-oriented representation for partially specified
asymmetry networks. We show that the representation is parsimonious. We define
an algebra for the elements of the representation that allows us to 'factor'
any CPT and to soundly combine the partially specified asymmetry networks.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI 1999).
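The asymmetry being exploited is context-specific independence: in some parent contexts, a child's distribution ignores the remaining parents, so the table need not enumerate them there. The alarm example below is a standard textbook illustration, not taken from the paper.

```python
# Illustration of asymmetric (context-specific) independence in a CPT:
# when Alarm = 0, Call is independent of Burglary, so that context
# needs one number instead of two. A full CPT over (Alarm, Burglary)
# has 4 entries; this asymmetric form specifies only 3.
def p_call(alarm, burglary):
    if not alarm:
        return 0.01                 # Burglary irrelevant in this context
    return 0.95 if burglary else 0.90

# Expanding the compact form recovers a full, consistent CPT.
full_cpt = {(a, b): p_call(a, b) for a in (0, 1) for b in (0, 1)}
print(full_cpt)
```

Combining two partially specified tables then amounts to merging their context trees and checking that overlapping contexts agree, which is where a sound combination algebra is needed.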