Reliable Uncertain Evidence Modeling in Bayesian Networks by Credal Networks
A reliable modeling of uncertain evidence in Bayesian networks based on a
set-valued quantification is proposed. Both soft and virtual evidences are
considered. We show that evidence propagation in this setup can be reduced to
standard updating in an augmented credal network, equivalent to a set of
consistent Bayesian networks. A characterization of the computational
complexity for this task is derived together with an efficient exact procedure
for a subclass of instances. In the case of multiple uncertain evidences over
the same variable, the proposed procedure can provide a set-valued version of
the geometric approach to opinion pooling.
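The geometric approach to opinion pooling mentioned above combines expert distributions as a renormalized weighted geometric mean. A minimal sketch of the standard (precise, point-valued) rule, with illustrative distributions and weights not taken from the paper:

```python
import numpy as np

def geometric_pool(dists, weights):
    """Geometric (logarithmic) opinion pooling: the pooled distribution is
    proportional to the weighted geometric mean of the input distributions."""
    dists = np.asarray(dists, dtype=float)      # shape: (n_experts, n_outcomes)
    weights = np.asarray(weights, dtype=float)  # one weight per expert
    pooled = np.prod(dists ** weights[:, None], axis=0)
    return pooled / pooled.sum()                # renormalize to a distribution

# Two hypothetical expert distributions over three outcomes, equal weights
p1 = [0.7, 0.2, 0.1]
p2 = [0.4, 0.4, 0.2]
pooled = geometric_pool([p1, p2], weights=[0.5, 0.5])
```

The set-valued version proposed in the paper would apply such a rule over all distributions consistent with the uncertain evidence, rather than to a single pair.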
Credal Networks under Epistemic Irrelevance
A credal network under epistemic irrelevance is a generalised type of
Bayesian network that relaxes its two main building blocks. On the one hand,
the local probabilities are allowed to be partially specified. On the other
hand, the assessments of independence do not have to hold exactly.
Conceptually, these two features turn credal networks under epistemic
irrelevance into a powerful alternative to Bayesian networks, offering a more
flexible approach to graph-based multivariate uncertainty modelling. However,
in practice, they have long been perceived as very hard to work with, both
theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer
justified. We provide a general introduction to credal networks under epistemic
irrelevance, give an overview of the state of the art, and present several new
theoretical results. Most importantly, we explain how these results can be
combined to allow for the design of recursive inference methods. We provide
numerous concrete examples of how this can be achieved, and use these to
demonstrate that computing with credal networks under epistemic irrelevance is
most definitely feasible, and in some cases even highly efficient. We also
discuss several philosophical aspects, including the lack of symmetry, how to
deal with probability zero, the interpretation of lower expectations, the
axiomatic status of graphoid properties, and the difference between updating
and conditioning.
Generalized belief change with imprecise probabilities and graphical models
We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction are derived from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
Epistemic irrelevance in credal networks: the case of imprecise Markov trees
We replace strong independence in credal networks with the weaker notion of epistemic irrelevance. Focusing on directed trees, we show how to combine local credal sets into a global model, and we use this to construct and justify an exact message-passing algorithm that computes updated beliefs for a variable in the tree. The algorithm, which is essentially linear in the number of nodes, is formulated entirely in terms of coherent lower previsions. We supply examples of the algorithm's operation, and report an application to on-line character recognition that illustrates the advantages of our model for prediction
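Inference with coherent lower previsions, as in the message-passing algorithm above, ultimately reduces to optimizing an expectation over a credal set. For a credal set described by finitely many extreme points, this optimization is just a minimum over vertices; a minimal illustration (the credal set and gamble below are invented for the example, not taken from the paper):

```python
import numpy as np

def lower_prevision(gamble, extreme_points):
    """Lower prevision (lower expectation) of a gamble with respect to a
    credal set given by its extreme points: the minimum expectation
    attained over those vertices."""
    pts = np.asarray(extreme_points, dtype=float)  # shape: (n_vertices, n_states)
    g = np.asarray(gamble, dtype=float)            # one payoff per state
    return float((pts @ g).min())

def upper_prevision(gamble, extreme_points):
    # By conjugacy, the upper prevision satisfies upper(g) = -lower(-g).
    return -lower_prevision(-np.asarray(gamble, dtype=float), extreme_points)

# Illustrative credal set on a binary variable: P(state 0) ranges over [0.4, 0.6]
K = [[0.4, 0.6], [0.6, 0.4]]
g = [1.0, 0.0]  # gamble paying 1 on state 0, 0 otherwise
lo, up = lower_prevision(g, K), upper_prevision(g, K)
```

A local computation of this kind is what each node of the tree performs when passing messages upward; the contribution of the paper is showing how to chain such steps exactly and in time essentially linear in the number of nodes.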
Kuznetsov independence for interval-valued expectations and sets of probability distributions: Properties and algorithms
Kuznetsov independence of variables X and Y means that, for any pair of bounded functions f(X) and g(Y), E[f(X)g(Y)] = E[f(X)] ⊠ E[g(Y)], where E[⋅] denotes interval-valued expectation and ⊠ denotes interval multiplication. We present properties of Kuznetsov independence for several variables, and connect it with other concepts of independence in the literature; in particular we show that strong extensions are always included in sets of probability distributions whose lower and upper expectations satisfy Kuznetsov independence. We introduce an algorithm that computes lower expectations subject to judgments of Kuznetsov independence by mixing column generation techniques with nonlinear programming. Finally, we define a concept of conditional Kuznetsov independence, and study its graphoid properties. The first author has been partially supported by CNPq, and this work has been supported by FAPESP through grant 04/09568-0. The second author has been partially supported by the Hasler Foundation grant no. 10030.
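The interval multiplication ⊠ appearing in the factorization above is standard interval arithmetic: the product of two intervals is the tightest interval containing every product of their elements. A minimal sketch, with illustrative interval-valued expectations that are not taken from the paper:

```python
def interval_mul(a, b):
    """Interval multiplication [a1, a2] ⊠ [b1, b2]: the tightest interval
    containing x * y for every x in [a1, a2] and y in [b1, b2]."""
    (a1, a2), (b1, b2) = a, b
    products = (a1 * b1, a1 * b2, a2 * b1, a2 * b2)
    return (min(products), max(products))

# Hypothetical interval-valued expectations E[f(X)] and E[g(Y)]
ef = (0.2, 0.5)
eg = (-1.0, 2.0)
result = interval_mul(ef, eg)
```

Checking all four endpoint products is necessary because signs can flip which combination attains the minimum or maximum.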