A Recursive Algorithm for Computing Inferences in Imprecise Markov Chains
We present an algorithm that can efficiently compute a broad class of
inferences for discrete-time imprecise Markov chains, a generalised type of
Markov chains that allows one to take into account partially specified
probabilities and other types of model uncertainty. The class of inferences
that we consider contains, as special cases, tight lower and upper bounds on
expected hitting times, on hitting probabilities and on expectations of
functions that are a sum or product of simpler ones. Our algorithm exploits the
specific structure that is inherent in all these inferences: they admit a
general recursive decomposition. This allows us to achieve a computational
complexity that scales linearly in the number of time points on which the
inference depends, instead of the exponential scaling that is typical for a
naive approach.
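The recursive decomposition can be illustrated on the simplest inference in this class: the lower expectation of a function of the state at a final time point. The sketch below is an assumed setup, not the paper's own code: local models are taken to be row-wise probability intervals, and all names (`lower_transition`, `lower_expectation`) are ours. Each backward step minimises over one interval row at a time, so the total cost is linear in the number of time points.

```python
import numpy as np

def lower_transition(g, P_low, P_high):
    """Apply an (assumed) lower transition operator row by row.

    For each state i, minimise sum_j P[i, j] * g[j] over row vectors
    with P_low[i, j] <= P[i, j] <= P_high[i, j] and sum_j P[i, j] = 1.
    Greedy scheme: start from the lower bounds, then pour the remaining
    probability mass into the states with the smallest g-values first.
    """
    n = len(g)
    out = np.empty(n)
    order = np.argsort(g)              # cheapest destinations first
    for i in range(n):
        p = P_low[i].copy()
        slack = 1.0 - p.sum()          # mass still to be assigned
        for j in order:
            add = min(P_high[i, j] - p[j], slack)
            p[j] += add
            slack -= add
            if slack <= 0:
                break
        out[i] = p @ g
    return out

def lower_expectation(f, P_low, P_high, N):
    """Lower expectation of f(X_N), one backward step per time point: O(N)."""
    g = np.asarray(f, dtype=float)
    for _ in range(N):
        g = lower_transition(g, P_low, P_high)
    return g                           # g[i] = lower expectation given X_0 = i
```

The greedy inner loop is the standard fractional-knapsack solution of the linear programme defined by a probability-interval row, which is what keeps each backward step cheap.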
Credal Networks under Epistemic Irrelevance
A credal network under epistemic irrelevance is a generalised type of
Bayesian network that relaxes its two main building blocks. On the one hand,
the local probabilities are allowed to be partially specified. On the other
hand, the assessments of independence do not have to hold exactly.
Conceptually, these two features turn credal networks under epistemic
irrelevance into a powerful alternative to Bayesian networks, offering a more
flexible approach to graph-based multivariate uncertainty modelling. However,
in practice, they have long been perceived as very hard to work with, both
theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer
justified. We provide a general introduction to credal networks under epistemic
irrelevance, give an overview of the state of the art, and present several new
theoretical results. Most importantly, we explain how these results can be
combined to allow for the design of recursive inference methods. We provide
numerous concrete examples of how this can be achieved, and use these to
demonstrate that computing with credal networks under epistemic irrelevance is
most definitely feasible, and in some cases even highly efficient. We also
discuss several philosophical aspects, including the lack of symmetry, how to
deal with probability zero, the interpretation of lower expectations, the
axiomatic status of graphoid properties, and the difference between updating
and conditioning.
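As a concrete, hypothetical illustration of such a recursive inference method, consider a two-node network R → C whose local models are finite sets of candidate mass functions. The lower expectation of a function f(R, C) then decomposes into an inner minimisation at the child followed by an outer minimisation at the root. This is a minimal sketch under assumed finitely generated credal sets; the data layout and the name `net_lower_expectation` are ours.

```python
# Assumed representation:
#   M_R    : list of candidate mass functions for the root R
#   M_C[r] : list of candidate conditional mass functions for C given R = r
#   f[r][c]: the function whose lower expectation we want

def net_lower_expectation(f, M_R, M_C):
    # Inner recursion: lower conditional expectation at the child node.
    h = [min(sum(p[c] * f[r][c] for c in range(len(p))) for p in M_C[r])
         for r in range(len(M_C))]
    # Outer recursion: lower expectation at the root node.
    return min(sum(p[r] * h[r] for r in range(len(p))) for p in M_R)
```

Repeating this decomposition node by node is what turns inference in a tree-shaped network into a sequence of cheap local minimisations rather than one global optimisation.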
Epistemic irrelevance in credal networks: the case of imprecise Markov trees
We replace strong independence in credal networks with the weaker notion of epistemic irrelevance. Focusing on directed trees, we show how to combine local credal sets into a global model, and we use this to construct and justify an exact message-passing algorithm that computes updated beliefs for a variable in the tree. The algorithm, which is essentially linear in the number of nodes, is formulated entirely in terms of coherent lower previsions. We supply examples of the algorithm's operation, and report an application to on-line character recognition that illustrates the advantages of our model for prediction.
Epistemic irrelevance in credal nets: the case of imprecise Markov trees
We focus on credal nets, which are graphical models that generalise Bayesian
nets to imprecise probability. We replace the notion of strong independence
commonly used in credal nets with the weaker notion of epistemic irrelevance,
which is arguably more suited for a behavioural theory of probability. Focusing
on directed trees, we show how to combine the given local uncertainty models in
the nodes of the graph into a global model, and we use this to construct and
justify an exact message-passing algorithm that computes updated beliefs for a
variable in the tree. The algorithm, which is linear in the number of nodes, is
formulated entirely in terms of coherent lower previsions, and is shown to
satisfy a number of rationality requirements. We supply examples of the
algorithm's operation, and report an application to on-line character
recognition that illustrates the advantages of our approach for prediction. We
comment on the perspectives opened by the availability, for the first time, of
a truly efficient algorithm based on epistemic irrelevance.
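To make the message-passing idea concrete, here is a minimal sketch for the special case of a chain X_0 → X_1 → … → X_n, the simplest directed tree. Each local model is assumed to be a finite set of candidate conditional mass functions (an assumed representation; the function names are ours, not the paper's). Messages are folded backward from the leaf to the root, one minimisation per node, which is what makes the algorithm linear in the number of nodes.

```python
def backward_message(g, credal_rows):
    """One message-passing step: for each parent value i, minimise the
    expectation of g over the local credal set credal_rows[i]."""
    return [min(sum(p[j] * g[j] for j in range(len(g))) for p in rows)
            for rows in credal_rows]

def chain_lower_expectation(f_leaf, root_set, layers):
    """Lower expectation of a function of the leaf variable on a chain.

    layers[k][i] is the list of candidate mass functions for the (k+1)-th
    variable given that the k-th variable takes value i; root_set is the
    list of candidate mass functions for the root.
    """
    g = list(f_leaf)
    for rows in reversed(layers):      # fold messages back toward the root
        g = backward_message(g, rows)
    return min(sum(p[i] * g[i] for i in range(len(p))) for p in root_set)
```

On a general tree the same step is applied at every node, with the messages from a node's children combined before minimising over its local credal set.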
Imprecise Continuous-Time Markov Chains
Continuous-time Markov chains are mathematical models that are used to
describe the state-evolution of dynamical systems under stochastic uncertainty,
and have found widespread applications in various fields. In order to make
these models computationally tractable, they rely on a number of assumptions
that may not be realistic for the domain of application; in particular, the
ability to provide exact numerical parameter assessments, and the applicability
of time-homogeneity and the eponymous Markov property. In this work, we extend
these models to imprecise continuous-time Markov chains (ICTMCs), which are a
robust generalisation that relaxes these assumptions while remaining
computationally tractable.
More technically, an ICTMC is a set of "precise" continuous-time finite-state
stochastic processes, and rather than computing expected values of functions,
we seek to compute lower expectations, which are tight lower bounds on the
expectations that correspond to such a set of "precise" models. Note that, in
contrast to e.g. Bayesian methods, all the elements of such a set are treated
on equal grounds; we do not consider a distribution over this set.
The first part of this paper develops a formalism for describing
continuous-time finite-state stochastic processes that does not require the
aforementioned simplifying assumptions. Next, this formalism is used to
characterise ICTMCs and to investigate their properties. The concept of lower
expectation is then given an alternative operator-theoretic characterisation,
by means of a lower transition operator, and the properties of this operator
are investigated as well. Finally, we use this lower transition operator to
derive tractable algorithms (with polynomial runtime complexity w.r.t. the
maximum numerical error) for computing the lower expectation of functions that
depend on the state at any finite number of time points.
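The operator-theoretic characterisation suggests a simple numerical scheme: the lower expectation of a function of the state at time T can be approximated by repeatedly applying an Euler step of a lower transition rate operator. The sketch below assumes a particularly simple imprecise model in which each off-diagonal transition rate is chosen independently from an interval, with the diagonal determined by the off-diagonal choices; all names are ours, not the paper's, and the step size controls the numerical error.

```python
import numpy as np

def lower_rate_op(g, R_low, R_high):
    """Assumed lower transition rate operator for interval-valued rates:
    [Q_lower g](i) = min over q of sum_{j != i} q_ij * (g[j] - g[i]),
    with each q_ij chosen independently in [R_low[i, j], R_high[i, j]]."""
    n = len(g)
    out = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            d = g[j] - g[i]
            q = R_high[i, j] if d < 0 else R_low[i, j]  # pointwise minimiser
            out[i] += q * d
    return out

def ictmc_lower_expectation(f, R_low, R_high, T, n_steps=10000):
    """Approximate the lower expectation of f(X_T) by Euler steps of
    size T / n_steps applied to the lower rate operator."""
    g = np.asarray(f, dtype=float)
    dt = T / n_steps
    for _ in range(n_steps):
        g = g + dt * lower_rate_op(g, R_low, R_high)
    return g                            # g[i] = lower expectation given X_0 = i
```

In the precise case R_low == R_high this reduces to Euler integration of the Kolmogorov backward equation, so the scheme can be sanity-checked against the matrix exponential.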
Randomness is inherently imprecise
We use the martingale-theoretic approach of game-theoretic probability to
incorporate imprecision into the study of randomness. In particular, we define
several notions of randomness associated with interval, rather than precise,
forecasting systems, and study their properties. The richer mathematical
structure that thus arises lets us, amongst other things, better understand and
place existing results for the precise limit. When we focus on constant
interval forecasts, we find that every sequence of binary outcomes has an
associated filter of intervals for which it is random. It may happen that none
of these intervals is precise -- a single real number -- which justifies the
title of this paper. We illustrate this by showing that randomness associated
with non-stationary precise forecasting systems can be captured by a constant
interval forecast, which must then be less precise: a gain in model simplicity
is thus paid for by a loss in precision. But imprecise randomness cannot always
be explained away as a result of oversimplification: we show that there are
sequences that are random for a constant interval forecast, but never random
for any computable (more) precise forecasting system. We also show that the set
of sequences that are random for a non-vacuous interval forecasting system is
meagre, as it is for precise forecasting systems.
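The martingale-theoretic setting can be made concrete with a toy betting protocol (an assumed simplification of the game-theoretic framework; the names are ours). Against a constant interval forecast [l, u] for binary outcomes, Sceptic may buy the next outcome at the upper price u or sell it at the lower price l; roughly, a sequence is random for [l, u] when no admissible strategy makes Sceptic's capital grow without bound.

```python
def capital_process(outcomes, l, u, stakes):
    """Sceptic's capital against a constant interval forecast [l, u].

    A positive stake m buys the binary outcome at the upper price u;
    a negative stake sells it at the lower price l. Admissible
    strategies are required to keep the capital nonnegative.
    """
    K = 1.0
    history = [K]
    for x, m in zip(outcomes, stakes):
        price = u if m > 0 else l
        K += m * (x - price)
        history.append(K)
    return history
```

With a precise forecast l = u = 0.5 and a sequence of all ones, a constant positive stake grows the capital without bound, witnessing non-randomness; with a genuinely imprecise interval the buy and sell prices differ, and that extra slack is exactly what makes randomness for an interval forecast a weaker requirement.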