On the Relation between Kappa Calculus and Probabilistic Reasoning
We study the connection between kappa calculus and probabilistic reasoning in
diagnosis applications. Specifically, we abstract a probabilistic belief
network for diagnosing faults into a kappa network and compare the ordering of
faults computed using both methods. We show that, at least for the example
examined, the orderings of faults coincide as long as all the causal relations
in the original probabilistic network are taken into account. We also provide a
formal analysis of some network structures where the two methods will differ.
Both kappa rankings and infinitesimal probabilities have been used extensively
to study default reasoning and belief revision, but little has been done to
exploit the connection outlined above. This is partly because the
relation between kappa and probability calculi assumes that probabilities are
arbitrarily close to one (or zero). The experiments in this paper investigate
this relation when the assumption is not satisfied. The reported results have
important implications for the use of kappa rankings to enhance the knowledge
engineering of uncertainty models.
Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI1994).
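The abstraction described above rests on the standard correspondence between kappa values and order-of-magnitude probabilities. As a minimal illustrative sketch (the fault names, probabilities, and the rounding rule are invented for illustration, not the paper's exact construction):

```python
import math

def kappa_rank(p, eps=0.1):
    """Kappa value of probability p: roughly, the order of magnitude
    of p in powers of eps (0 for p near 1, larger for smaller p)."""
    if p <= 0:
        return math.inf
    return round(math.log(p) / math.log(eps))

# Hypothetical fault posteriors from a diagnostic network.
faults = {"f1": 0.3, "f2": 0.04, "f3": 0.005}

by_prob = sorted(faults, key=faults.get, reverse=True)           # most likely first
by_kappa = sorted(faults, key=lambda f: kappa_rank(faults[f]))   # most "normal" first

print(by_prob, by_kappa)
```

When probabilities are close to the powers of a small eps the two orderings agree up to ties; the experiments in the abstract probe what happens when that assumption fails.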
Human-aided Multi-Entity Bayesian Networks Learning from Relational Data
An Artificial Intelligence (AI) system is an autonomous system that emulates
human mental and physical activities such as Observe, Orient, Decide, and Act,
known as the OODA process. An AI system performing the OODA process requires a
semantically rich representation to handle complex real-world situations and
the ability to reason under uncertainty about those situations. Multi-Entity
Bayesian Networks (MEBN) combines First-Order Logic with Bayesian Networks for
representing and reasoning about uncertainty in complex, knowledge-rich
domains. MEBN goes beyond standard Bayesian networks to enable reasoning about
an unknown number of entities interacting with each other in various types of
relationships, a key requirement for the OODA process of an AI system. MEBN
models have heretofore been constructed manually by a domain expert. However,
manual MEBN modeling is labor-intensive and insufficiently agile. To address
these problems, an efficient method for MEBN modeling is needed. One such
method is to use machine learning to learn a MEBN model in whole or in part
from data. In the era of Big Data, data-rich environments, characterized by
uncertainty and complexity, have become ubiquitous. The larger the data sample
is, the more accurate the results of the machine learning approach can be.
Therefore, machine learning has potential to improve the quality of MEBN models
as well as the effectiveness for MEBN modeling. In this research, we study a
MEBN learning framework to develop a MEBN model from a combination of domain
expert's knowledge and data. To evaluate the MEBN learning framework, we
conduct an experiment to compare the MEBN learning framework and the existing
manual MEBN modeling in terms of development efficiency.
Reasoning From Data in the Mathematical Theory of Evidence
Mathematical Theory of Evidence (MTE) is known as a foundation for reasoning
when knowledge is expressed at various levels of detail. Though much research
effort has been committed to this theory since its foundation, many questions
remain open. One of the most important open questions seems to be the
relationship between frequencies and the Mathematical Theory of Evidence. The
theory is criticized for leaving frequencies outside (or aside of) its
framework. The seriousness of this criticism is obvious: no experiment may be run to compare
the performance of MTE-based models of real world processes against real world
data.
In this paper we develop a frequentist model of the MTE that refutes the
above argument against MTE. We describe how to interpret data in terms of MTE
belief functions, how to reason from data about conditional belief functions,
how to generate a random sample out of a MTE model, how to derive MTE model
from data and how to compare results of reasoning in MTE model and reasoning
from data.
It is claimed in this paper that MTE is suitable for modeling some types of
destructive processes.
Comment: presented as poster. M.A. K{\l}opotek: Reasoning from Data in the
Mathematical Theory of Evidence. [in:] Proc. Eighth International Symposium
On Methodologies For Intelligent Systems (ISMIS'94), Charlotte, North
Carolina, USA, October 16-19, 1994. arXiv admin note: text overlap with
arXiv:1707.0388
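For readers unfamiliar with MTE (Dempster-Shafer theory), its basic objects are mass functions over subsets of a frame of discernment, combined by Dempster's rule. A minimal sketch, with invented focal elements and masses (not data from the paper's experiments):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset focal element -> mass)
    by Dempster's rule, normalizing away the conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    norm = 1.0 - conflict
    return {s: v / norm for s, v in combined.items()}

# Two illustrative bodies of evidence over the frame {a, b, c}.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b", "c"}): 0.4}
m2 = {frozenset({"a", "b"}): 0.7, frozenset({"a", "b", "c"}): 0.3}
m = dempster_combine(m1, m2)
print(m)
```

Sampling from such a model and comparing against frequencies, as the abstract proposes, presupposes exactly this kind of explicit mass-function representation.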
Inference Networks and the Evaluation of Evidence: Alternative Analyses
Inference networks have a variety of important uses and are constructed by
persons having quite different standpoints. Discussed in this paper are three
different but complementary methods for generating and analyzing probabilistic
inference networks. The first method, though over eighty years old, is very
useful for knowledge representation in the task of constructing probabilistic
arguments. It is also useful as a heuristic device in generating new forms of
evidence. The other two methods are formally equivalent ways for combining
probabilities in the analysis of inference networks. The use of these three
methods is illustrated in an analysis of a mass of evidence in a celebrated
American law case.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
An Application of Uncertain Reasoning to Requirements Engineering
This paper examines the use of Bayesian Networks to tackle one of the tougher
problems in requirements engineering, translating user requirements into system
requirements. The approach taken is to model domain knowledge as Bayesian
Network fragments that are glued together to form a complete view of the domain
specific system requirements. User requirements are introduced as evidence and
the propagation of belief is used to determine what are the appropriate system
requirements as indicated by user requirements. This concept has been
demonstrated in the development of a system specification and the results are
presented here.
Comment: Appears in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI1999).
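The "requirements as evidence" idea can be illustrated with a deliberately tiny fragment: one user-requirement node influencing one system-requirement node. All names and probabilities below are invented for illustration and are not from the paper's specification:

```python
# Toy fragment: user requirement U influences system requirement S.
p_u = 0.5                                # prior P(U = yes)
p_s_given_u = {"yes": 0.9, "no": 0.1}    # P(S = yes | U)

def posterior_s(evidence_u=None):
    """Belief in the system requirement S, optionally after the user
    requirement U has been observed as evidence."""
    if evidence_u is not None:
        return p_s_given_u[evidence_u]
    # No evidence: marginalize over U.
    return p_u * p_s_given_u["yes"] + (1 - p_u) * p_s_given_u["no"]

print(posterior_s())        # prior belief in the system requirement
print(posterior_s("yes"))   # belief after the user requirement is asserted
```

Gluing many such fragments together and propagating evidence through the joint network is what requires the full Bayesian-network machinery the paper describes.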
Of Starships and Klingons: Bayesian Logic for the 23rd Century
Intelligent systems in an open world must reason about many interacting
entities related to each other in diverse ways and having uncertain features
and relationships. Traditional probabilistic languages lack the expressive
power to handle relational domains. Classical first-order logic is sufficiently
expressive, but lacks a coherent plausible reasoning capability. Recent years
have seen the emergence of a variety of approaches to integrating first-order
logic, probability, and machine learning. This paper presents Multi-entity
Bayesian networks (MEBN), a formal system that integrates First Order Logic
(FOL) with Bayesian probability theory. MEBN extends ordinary Bayesian networks
to allow representation of graphical models with repeated sub-structures, and
can express a probability distribution over models of any consistent, finitely
axiomatizable first-order theory. We present the logic using an example
inspired by the Paramount series Star Trek.
Comment: Appears in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI2005).
Exploring Localization in Bayesian Networks for Large Expert Systems
Current Bayesian net representations do not consider structure in the domain
and include all variables in a homogeneous network. At any time, a human
reasoner in a large domain may direct his attention to only one of a number of
natural subdomains, i.e., there is 'localization' of queries and evidence. In
such a case, propagating evidence through a homogeneous network is inefficient
since the entire network has to be updated each time. This paper presents
multiply sectioned Bayesian networks that enable a (localization preserving)
representation of natural subdomains by separate Bayesian subnets. The subnets
are transformed into a set of permanent junction trees such that evidential
reasoning takes place at only one of them at a time. Probabilities obtained are
identical to those that would be obtained from the homogeneous network. We
discuss attention shift to a different junction tree and propagation of
previously acquired evidence. Although the overall system can be large,
computational requirements are governed by the size of only one junction tree.
Comment: Appears in Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (UAI1992).
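The localization idea can be sketched at its simplest: variables are partitioned into subdomains, and a query is routed to the single subnet (junction tree) whose variables cover it, so only that subnet needs updating. The subdomain names and variables below are invented, and this sketch omits the evidence-propagation machinery between subnets that the paper actually develops:

```python
# Illustrative subdomains of a larger diagnostic network.
subnets = {
    "cardiac": {"heart_rate", "bp", "ecg"},
    "renal": {"creatinine", "urea", "gfr"},
}

def route_query(query_vars):
    """Return the subnets whose variable set covers the whole query,
    so evidential reasoning can stay local to one junction tree."""
    q = set(query_vars)
    return [name for name, vars_ in subnets.items() if q <= vars_]

print(route_query({"ecg", "bp"}))   # handled entirely within one subnet
```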
An Importance Sampling Algorithm Based on Evidence Pre-propagation
Precision achieved by stochastic sampling algorithms for Bayesian networks
typically deteriorates in face of extremely unlikely evidence. To address this
problem, we propose the Evidence Pre-propagation Importance Sampling algorithm
(EPIS-BN), an importance sampling algorithm that computes an approximate
importance function by two heuristic methods: loopy belief propagation and
e-cutoff. We tested the performance of e-cutoff on three large real Bayesian
networks: ANDES, CPCS, and PATHFINDER. We observed that on each of these
networks the EPIS-BN algorithm gives us a considerable improvement over the
current state of the art algorithm, the AIS-BN algorithm. In addition, it
avoids the costly learning stage of the AIS-BN algorithm.
Comment: Appears in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI2003).
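The precision problem this line of work starts from is easy to reproduce with plain likelihood weighting, the baseline that importance-sampling schemes such as AIS-BN and EPIS-BN improve on. A sketch on a two-node network with rare evidence (numbers invented; this is not the EPIS-BN algorithm itself):

```python
import random

# Tiny network A -> B, with evidence B = 1 that is rare when A = 0.
p_a = 0.3                           # P(A = 1)
p_b_given_a = {1: 0.2, 0: 0.001}    # P(B = 1 | A)

def likelihood_weighting(n, seed=0):
    """Estimate P(A=1 | B=1) by sampling A from its prior and weighting
    each sample by the likelihood of the evidence B=1."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p_a else 0
        w = p_b_given_a[a]          # weight = P(evidence | sampled state)
        num += w * a
        den += w
    return num / den

# Exact posterior: 0.3*0.2 / (0.3*0.2 + 0.7*0.001) = 0.06/0.0607 ≈ 0.988
print(likelihood_weighting(100_000))
```

With more unlikely evidence the weights become extreme and the estimate degrades; importance sampling counters this by sampling from a function closer to the posterior instead of the prior.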
Inferring Personalized Bayesian Embeddings for Learning from Heterogeneous Demonstration
For assistive robots and virtual agents to achieve ubiquity, machines will
need to anticipate the needs of their human counterparts. The field of Learning
from Demonstration (LfD) has sought to enable machines to infer predictive
models of human behavior for autonomous robot control. However, humans exhibit
heterogeneity in decision-making, which traditional LfD approaches fail to
capture. To overcome this challenge, we propose a Bayesian LfD framework that
learns an integrated representation of all human task demonstrators by
inferring human-specific embeddings, thereby distilling their unique
characteristics. We validate that our approach outperforms state-of-the-art
techniques on both synthetic and real-world data sets.
Comment: 8 pages, 7 figures
Computational Advantages of Relevance Reasoning in Bayesian Belief Networks
This paper introduces a computational framework for reasoning in Bayesian
belief networks that derives significant advantages from focused inference and
relevance reasoning. This framework is based on d-separation and other simple
and computationally efficient techniques for pruning irrelevant parts of a
network. Our main contribution is a technique that we call relevance-based
decomposition. Relevance-based decomposition approaches belief updating in
large networks by focusing on their parts and decomposing them into partially
overlapping subnetworks. This makes reasoning in some intractable networks
possible and, in addition, often results in significant speedup, as the total
time taken to update all subnetworks is in practice often considerably less
than the time taken to update the network as a whole. We report results of
empirical tests that demonstrate the practical significance of our approach.
Comment: Appears in Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI1997).
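One of the simple pruning techniques alluded to is barren-node removal: a node that is neither queried, observed, nor an ancestor of a queried or observed node cannot affect the posterior and can be dropped before inference. A minimal sketch (the network and node names are invented, and this shows only the ancestor-closure step, not the paper's full relevance-based decomposition):

```python
def prune_barren(parents, query, evidence):
    """Keep only the query, the evidence nodes, and their ancestors;
    every other node is barren and cannot change P(query | evidence).
    `parents` maps each node to the list of its parent nodes."""
    keep = set()
    stack = list({query} | set(evidence))
    while stack:
        node = stack.pop()
        if node in keep:
            continue
        keep.add(node)
        stack.extend(parents.get(node, []))  # walk up to ancestors
    return keep

# Hypothetical network: A -> B, B -> C, B -> D.
parents = {"B": ["A"], "C": ["B"], "D": ["B"]}
print(prune_barren(parents, query="C", evidence=[]))  # D is barren here
```

Running exact or approximate inference on the pruned graph then costs only as much as the relevant subnetwork, which is the source of the speedups the abstract reports.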