On coherent immediate prediction: connecting two theories of imprecise probability
We give an overview of two approaches to probability theory where lower and upper probabilities, rather than probabilities, are used: Walley's behavioural theory of imprecise probabilities, and Shafer and Vovk's game-theoretic account of probability. We show that the two theories are more closely related than would be suspected at first sight, and we establish a correspondence between them that (i) has an interesting interpretation, and (ii) allows us to freely import results from one theory into the other. Our approach leads to an account of immediate prediction in the framework of Walley's theory, and we prove an interesting and quite general version of the weak law of large numbers.
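The lower and upper probabilities the abstract refers to can be illustrated numerically. The sketch below is a minimal illustration, not taken from either theory's formal development: it computes lower and upper previsions of a gamble as the lower and upper envelope of a finite credal set (a set of candidate probability mass functions); the names and numbers are hypothetical.

```python
def lower_prevision(gamble, credal_set):
    """Lower expectation of `gamble` over a finite set of probability mass functions."""
    return min(sum(p[w] * gamble[w] for w in gamble) for p in credal_set)

def upper_prevision(gamble, credal_set):
    """Upper expectation, obtained by conjugacy from the lower prevision."""
    return -lower_prevision({w: -v for w, v in gamble.items()}, credal_set)

# Two candidate distributions over outcomes {a, b}
credal_set = [{"a": 0.4, "b": 0.6}, {"a": 0.7, "b": 0.3}]
gamble = {"a": 1.0, "b": 0.0}  # indicator of {a}: previsions reduce to probabilities

print(lower_prevision(gamble, credal_set))  # 0.4
print(upper_prevision(gamble, credal_set))  # 0.7
```

For an indicator gamble, as here, the lower and upper previsions are exactly the lower and upper probabilities of the event.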
Exchangeability and sets of desirable gambles
Sets of desirable gambles constitute a quite general type of uncertainty
model with an interesting geometrical interpretation. We give a general
discussion of such models and their rationality criteria. We study
exchangeability assessments for them, and prove counterparts of de Finetti's
finite and infinite representation theorems. We show that the finite
representation in terms of count vectors has a very nice geometrical
interpretation, and that the representation in terms of frequency vectors is
tied up with multivariate Bernstein (basis) polynomials. We also lay bare the
relationships between the representations of updated exchangeable models, and
discuss conservative inference (natural extension) under exchangeability and
the extension of exchangeable sequences.
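The finite representation in terms of count vectors can be illustrated concretely. The following sketch is a hypothetical illustration, not the paper's construction: under exchangeability, a gamble on length-n binary sequences is assessed only through its average over each permutation class, and those classes are indexed by the count of ones.

```python
from itertools import product
from collections import Counter

def exchangeable_representation(gamble, n):
    """Map a gamble on {0,1}^n to a gamble on counts k = 0..n by averaging
    over all sequences sharing the same count vector (n - k zeros, k ones)."""
    totals = Counter()
    sizes = Counter()
    for seq in product((0, 1), repeat=n):
        k = sum(seq)               # the count vector of a binary sequence is (n - k, k)
        totals[k] += gamble(seq)
        sizes[k] += 1              # equals binomial(n, k)
    return {k: totals[k] / sizes[k] for k in range(n + 1)}

# Example: the gamble "value of the first coordinate" averages, within each
# permutation class, to the relative frequency of ones: 0, 1/3, 2/3, 1.
rep = exchangeable_representation(lambda s: s[0], 3)
```

The averaging step is why, under exchangeability, only count vectors matter: two gambles with the same class averages are indistinguishable to any exchangeable model.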
Exchangeability for sets of desirable gambles
Sets of desirable gambles constitute a quite general type of uncertainty model with an interesting geometrical interpretation. We study exchangeability assessments for such models, and prove a counterpart of de Finetti's finite representation theorem. We show that this representation theorem has a very nice geometrical interpretation. We also lay bare the relationships between the representations of updated exchangeable models, and discuss conservative inference (natural extension) under exchangeability.
Epistemic irrelevance in credal networks : the case of imprecise Markov trees
We replace strong independence in credal networks with the weaker notion of epistemic irrelevance. Focusing on directed trees, we show how to combine local credal sets into a global model, and we use this to construct and justify an exact message-passing algorithm that computes updated beliefs for a variable in the tree. The algorithm, which is essentially linear in the number of nodes, is formulated entirely in terms of coherent lower previsions. We supply examples of the algorithm's operation, and report an application to on-line character recognition that illustrates the advantages of our model for prediction.
Epistemic irrelevance in credal nets: the case of imprecise Markov trees
We focus on credal nets, which are graphical models that generalise Bayesian
nets to imprecise probability. We replace the notion of strong independence
commonly used in credal nets with the weaker notion of epistemic irrelevance,
which is arguably more suited for a behavioural theory of probability. Focusing
on directed trees, we show how to combine the given local uncertainty models in
the nodes of the graph into a global model, and we use this to construct and
justify an exact message-passing algorithm that computes updated beliefs for a
variable in the tree. The algorithm, which is linear in the number of nodes, is
formulated entirely in terms of coherent lower previsions, and is shown to
satisfy a number of rationality requirements. We supply examples of the
algorithm's operation, and report an application to on-line character
recognition that illustrates the advantages of our approach for prediction. We
comment on the perspectives opened by the availability, for the first time, of
a truly efficient algorithm based on epistemic irrelevance.
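The message-passing algorithm itself is only described abstractly here. A toy sketch of the backward-recursion idea, restricted to a chain (a degenerate directed tree) with a finite credal set at each node, might look as follows; all names and numbers are hypothetical and this is not the paper's algorithm.

```python
def lower_transition(f, credal_rows):
    """Lower expectation of gamble f (a dict) over a finite credal set of pmfs."""
    return min(sum(p[x] * f[x] for x in f) for p in credal_rows)

def chain_lower_expectation(f, transitions, root_credal):
    """Backward recursion for the lower expectation of a gamble f on the
    final node: transitions[t][x] is the finite credal set of pmfs for the
    next node given that the current node takes value x.  Each step replaces
    f by its node-wise lower transition expectation; the root's lower
    prevision is applied last."""
    g = dict(f)
    for step in reversed(transitions):
        g = {x: lower_transition(g, step[x]) for x in step}
    return lower_transition(g, root_credal)

# One-step binary chain; the gamble is the indicator that the final node is 1
transitions = [{
    0: [{0: 0.9, 1: 0.1}, {0: 0.8, 1: 0.2}],
    1: [{0: 0.2, 1: 0.8}, {0: 0.3, 1: 0.7}],
}]
root = [{0: 0.5, 1: 0.5}, {0: 0.6, 1: 0.4}]
result = chain_lower_expectation({0: 0.0, 1: 1.0}, transitions, root)
# `result` is the lower probability that the chain ends in state 1
```

One pass over the nodes suffices, which is the intuition behind the linear-time complexity claimed in the abstract.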
Marginal extension in the theory of coherent lower previsions
We generalise Walley's Marginal Extension Theorem to the case of any finite number of conditional lower previsions. Unlike the procedure of natural extension, our marginal extension always provides the smallest (most conservative) coherent extensions. We show that they can also be calculated as lower envelopes of marginal extensions of conditional linear (precise) previsions. Finally, we use our version of the theorem to study the so-called forward irrelevant product and forward irrelevant natural extension of a number of marginal lower previsions.
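The two-stage case of marginal extension admits a short numerical sketch. Everything below is an assumed finite setup, not the paper's general construction: a marginal lower prevision on X is composed with a conditional lower prevision on Y given X, yielding the joint lower prevision M(f) = P(P(f | X)).

```python
def lower_env(f, pmfs):
    """Lower expectation of a gamble f (a dict) over a finite set of pmfs."""
    return min(sum(p[z] * f[z] for z in f) for p in pmfs)

def marginal_extension(f, marginal_pmfs, conditional_pmfs):
    """f: gamble on (x, y) pairs; conditional_pmfs[x]: finite credal set on Y.
    Inner step: conditional lower prevision of f given each x.
    Outer step: marginal lower prevision of the resulting gamble on X."""
    g = {}
    for x, credal in conditional_pmfs.items():
        ys = next(iter(credal))  # any pmf in the set; its keys give Y's domain
        g[x] = lower_env({y: f[(x, y)] for y in ys}, credal)
    return lower_env(g, marginal_pmfs)

# Example: X and Y binary, f(x, y) = x + y, with hypothetical credal sets
f = {(x, y): x + y for x in (0, 1) for y in (0, 1)}
conditional = {
    0: [{0: 0.5, 1: 0.5}, {0: 0.7, 1: 0.3}],
    1: [{0: 0.5, 1: 0.5}, {0: 0.7, 1: 0.3}],
}
marginal = [{0: 0.5, 1: 0.5}, {0: 0.4, 1: 0.6}]
result = marginal_extension(f, marginal, conditional)
```

With more conditioning stages, the same pattern nests: each stage's lower prevision is applied to the gamble produced by the stage after it.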
Updating beliefs with incomplete observations
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
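The core idea of updating under a vacuous model of the incompleteness mechanism can be illustrated in a few lines. This is a simplified sketch with hypothetical numbers, not the paper's exact conservative updating rule: bounds on a posterior probability are obtained by Bayes-updating on every complete observation the incomplete one could stand for, and taking the envelope.

```python
def posterior(prior, likelihood, obs):
    """Standard Bayes update: P(state | complete observation obs)."""
    joint = {s: prior[s] * likelihood[s][obs] for s in prior}
    total = sum(joint.values())
    return {s: v / total for s, v in joint.items()}

def conservative_update(prior, likelihood, compatible_obs, event):
    """Lower and upper posterior probability of `event` (a set of states),
    ranging over all complete observations compatible with what was seen."""
    probs = [sum(posterior(prior, likelihood, o)[s] for s in event)
             for o in compatible_obs]
    return min(probs), max(probs)

# Two states, two complete observations; the incomplete observation
# is compatible with both of them
prior = {"a": 0.5, "b": 0.5}
likelihood = {"a": {"x": 0.8, "y": 0.2}, "b": {"x": 0.3, "y": 0.7}}
lo, hi = conservative_update(prior, likelihood, ["x", "y"], {"a"})
```

The gap between `lo` and `hi` reflects exactly the ignorance about the incompleteness mechanism: the less the mechanism is pinned down, the wider the posterior interval, which is why the abstract speaks of partially determinate decisions.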