Efficient computation of updated lower expectations for imprecise continuous-time hidden Markov chains
We consider the problem of performing inference with imprecise
continuous-time hidden Markov chains, that is, imprecise continuous-time Markov
chains that are augmented with random output variables whose distribution
depends on the hidden state of the chain. The prefix `imprecise' refers to the
fact that we do not consider a classical continuous-time Markov chain, but
replace it with a robust extension that allows us to represent various types of
model uncertainty, using the theory of imprecise probabilities. The inference
problem amounts to computing lower expectations of functions on the state-space
of the chain, given observations of the output variables. We develop and
investigate this problem with very few assumptions on the output variables; in
particular, they can be chosen to be either discrete or continuous random
variables. Our main result is a polynomial runtime algorithm to compute the
lower expectation of functions on the state-space at any given time-point,
given a collection of observations of the output variables.
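The core idea behind lower expectations for imprecise Markov chains can be illustrated with a toy sketch. The following is a simplified assumption-laden example (discrete time, two states, interval-valued transition probabilities, no output observations), not the paper's polynomial-time algorithm for the continuous-time hidden case: it applies the lower transition operator backwards in time, minimising over all transition rows consistent with the bounds.

```python
import numpy as np

# Illustrative interval bounds on P(next = 0 | current = i), one per state.
# These numbers are assumptions for the sketch, not from the paper.
p_low = np.array([0.6, 0.2])   # lower bound on moving to state 0
p_up  = np.array([0.8, 0.4])   # upper bound

def lower_transition(f):
    """Apply the lower transition operator: for each current state,
    minimise E[f(next)] over transition rows within the bounds."""
    out = np.empty(2)
    for i in range(2):
        # E[f] = p * f[0] + (1 - p) * f[1] is linear in p, so the minimum
        # is attained at an endpoint of [p_low[i], p_up[i]].
        vals = [p * f[0] + (1 - p) * f[1] for p in (p_low[i], p_up[i])]
        out[i] = min(vals)
    return out

def lower_expectation(f, steps, initial_state):
    """Lower expectation of f(state) a number of steps ahead,
    by backward recursion with the lower transition operator."""
    g = np.asarray(f, dtype=float)
    for _ in range(steps):
        g = lower_transition(g)
    return g[initial_state]

# Lower expectation of the indicator of state 0, three steps ahead, from state 1.
print(lower_expectation([1.0, 0.0], steps=3, initial_state=1))
```

Because the expectation is linear in each transition probability, the minimisation reduces to checking interval endpoints; the recursion is what makes such computations tractable.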
Credal Networks under Epistemic Irrelevance
A credal network under epistemic irrelevance is a generalised type of
Bayesian network that relaxes its two main building blocks. On the one hand,
the local probabilities are allowed to be partially specified. On the other
hand, the assessments of independence do not have to hold exactly.
Conceptually, these two features turn credal networks under epistemic
irrelevance into a powerful alternative to Bayesian networks, offering a more
flexible approach to graph-based multivariate uncertainty modelling. However,
in practice, they have long been perceived as very hard to work with, both
theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer
justified. We provide a general introduction to credal networks under epistemic
irrelevance, give an overview of the state of the art, and present several new
theoretical results. Most importantly, we explain how these results can be
combined to allow for the design of recursive inference methods. We provide
numerous concrete examples of how this can be achieved, and use these to
demonstrate that computing with credal networks under epistemic irrelevance is
most definitely feasible, and in some cases even highly efficient. We also
discuss several philosophical aspects, including the lack of symmetry, how to
deal with probability zero, the interpretation of lower expectations, the
axiomatic status of graphoid properties, and the difference between updating
and conditioning.
Imprecise Markov Models for Scalable and Robust Performance Evaluation of Flexi-Grid Spectrum Allocation Policies
The possibility of flexibly assigning spectrum resources with channels of
different sizes greatly improves the spectral efficiency of optical networks,
but can also lead to unwanted spectrum fragmentation. We study this problem in a
scenario where traffic demands are categorised in two types (low or high
bit-rate) by assessing the performance of three allocation policies. Our first
contribution consists of exact Markov chain models for these allocation
policies, which allow us to numerically compute the relevant performance
measures. However, these exact models do not scale to large systems, in the
sense that the computations required to determine the blocking
probabilities---which measure the performance of the allocation
policies---become intractable. In order to address this, we first extend an
approximate reduced-state Markov chain model that is available in the
literature to the three considered allocation policies. These reduced-state
Markov chain models allow us to tractably compute approximations of the
blocking probabilities, but the accuracy of these approximations cannot be
easily verified. Our main contribution then is the introduction of
reduced-state imprecise Markov chain models that allow us to derive guaranteed
lower and upper bounds on blocking probabilities, for the three allocation
policies separately or for all possible allocation policies simultaneously.
Comment: 16 pages, 7 figures, 3 tables
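To make the performance measure concrete: blocking probability is the long-run fraction of demands rejected because no suitable resources are free. As a minimal, hypothetical illustration (the paper's flexi-grid models with two traffic classes are much richer), the classic Erlang-B formula for an M/M/c/c loss system computes exactly this quantity from a birth-death Markov chain:

```python
# Erlang-B blocking probability via the numerically stable recursion
# B(0) = 1, B(k) = a * B(k-1) / (k + a * B(k-1)),
# where a is the offered load in Erlang and k the number of servers.
# This is a standard textbook model, used here only to illustrate the
# kind of quantity the paper bounds; it is not the paper's model.

def erlang_b(offered_load, servers):
    """Blocking probability of an M/M/c/c loss system."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# 10 Erlang offered to 12 channels.
print(round(erlang_b(10.0, 12), 4))
```

In the exact models of the paper, the state space grows too quickly for such direct computation, which is what motivates the reduced-state and imprecise reduced-state models.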
Epistemic irrelevance in credal nets: the case of imprecise Markov trees
We focus on credal nets, which are graphical models that generalise Bayesian
nets to imprecise probability. We replace the notion of strong independence
commonly used in credal nets with the weaker notion of epistemic irrelevance,
which is arguably more suited for a behavioural theory of probability. Focusing
on directed trees, we show how to combine the given local uncertainty models in
the nodes of the graph into a global model, and we use this to construct and
justify an exact message-passing algorithm that computes updated beliefs for a
variable in the tree. The algorithm, which is linear in the number of nodes, is
formulated entirely in terms of coherent lower previsions, and is shown to
satisfy a number of rationality requirements. We supply examples of the
algorithm's operation, and report an application to on-line character
recognition that illustrates the advantages of our approach for prediction. We
comment on the perspectives opened by the availability, for the first time, of
a truly efficient algorithm based on epistemic irrelevance.
Comment: 29 pages, 5 figures, 1 table
Uncertainty in Engineering
This open access book provides an introduction to uncertainty quantification in engineering. Starting with preliminaries on Bayesian statistics and Monte Carlo methods, followed by material on imprecise probabilities, it then focuses on reliability theory and simulation methods for complex systems. The final two chapters discuss various aspects of aerospace engineering, considering stochastic model updating from an imprecise Bayesian perspective, and uncertainty quantification for aerospace flight modelling. Written by experts in the subject, and based on lectures given at the Second Training School of the European Research and Training Network UTOPIAE (Uncertainty Treatment and Optimization in Aerospace Engineering), which took place at Durham University (United Kingdom) from 2 to 6 July 2018, the book offers an essential resource for students as well as scientists and practitioners
Two-state imprecise Markov chains for statistical modelling of two-state non-Markovian processes
This paper proposes a method for fitting a two-state imprecise Markov
chain to time series data from a two-state non-Markovian process. Such
non-Markovian processes are common in practical applications. We focus
on how to fit modelling parameters based on data from a process where
the time to transition is not exponentially distributed, thereby
violating the Markov assumption. We do so by first fitting a many-state
(i.e. having more than two states) Markov chain to the data, through its
associated phase-type distribution. Then, we lump the process to a
two-state imprecise Markov chain. In practical applications, a two-state
imprecise Markov chain might be more convenient than a many-state Markov
chain, as we have closed analytic expressions for typical quantities of
interest (including the lower and upper expectation of any function of
the state at any point in time). A numerical example demonstrates how
the entire inference process (fitting and prediction) can be done using
Markov chain Monte Carlo, for a given set of prior distributions on the
parameters. In particular, we numerically identify the set of posterior
densities and posterior lower and upper expectations on all model
parameters and predictive quantities. We compare our inferences under a
range of sample sizes and model assumptions.
Keywords: imprecise Markov chain, estimation, reliability, Markov
assumption, MCMC
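For a precise two-state continuous-time Markov chain, the transient state probability has a well-known closed form, which is the kind of analytic expression the abstract refers to. The sketch below (all rates, intervals, and time points are illustrative assumptions) shows that closed form, and brute-forces lower and upper bounds over interval-valued rates by grid search; the paper instead derives exact expressions:

```python
import numpy as np

def p_state1(t, a, b, p0=0.0):
    """P(state = 1 at time t) for a two-state CTMC with rates
    a (0 -> 1) and b (1 -> 0), starting from P(state = 1) = p0:
    p(t) = pi + (p0 - pi) * exp(-(a + b) * t), with pi = a / (a + b)."""
    pi = a / (a + b)
    return pi + (p0 - pi) * np.exp(-(a + b) * t)

# Hypothetical interval-valued rates: a in [0.8, 1.2], b in [1.5, 2.5].
a_grid = np.linspace(0.8, 1.2, 41)
b_grid = np.linspace(1.5, 2.5, 41)
t = 0.7
vals = np.array([p_state1(t, a, b) for a in a_grid for b in b_grid])
print(f"lower bound: {vals.min():.4f}, upper bound: {vals.max():.4f}")
```

The grid search only approximates the bounds from inside; its appeal here is purely illustrative, showing what a lower and upper expectation of the state at a fixed time look like.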
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines
Recent studies have shown that synaptic unreliability is a robust and
sufficient mechanism for inducing the stochasticity observed in cortex. Here,
we introduce Synaptic Sampling Machines, a class of neural network models that
uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised
learning. Similar to the original formulation of Boltzmann machines, these
models can be viewed as a stochastic counterpart of Hopfield networks, but
where stochasticity is induced by a random mask over the connections. Synaptic
stochasticity plays the dual role of an efficient mechanism for sampling, and a
regularizer during learning akin to DropConnect. A local synaptic plasticity
rule implementing an event-driven form of contrastive divergence enables the
learning of generative models in an on-line fashion. Synaptic sampling machines
perform equally well using discrete-timed artificial units (as in Hopfield
networks) or continuous-timed leaky integrate & fire neurons. The learned
representations are remarkably sparse and robust to reductions in bit precision
and synapse pruning: removal of more than 75% of the weakest connections
followed by cursory re-learning causes a negligible performance loss on
benchmark classification tasks. The spiking neuron-based synaptic sampling
machines outperform existing spike-based unsupervised learners, while
potentially offering substantial advantages in terms of power and complexity,
and are thus promising models for on-line learning in brain-inspired hardware.
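The core mechanism described above, stochasticity induced by a random binary mask over the connections, is easy to sketch. The following toy example (network shape, connection probability, and inputs are all illustrative assumptions, not the paper's model) samples a fresh mask at every presentation and averages repeated passes, which is exactly the Monte Carlo sampling role the abstract attributes to synaptic unreliability:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_synapse_forward(x, W, p_keep=0.5):
    """One forward pass in which each synapse is independently 'on'
    with probability p_keep, resampled at every presentation
    (akin to DropConnect applied at inference time)."""
    mask = rng.random(W.shape) < p_keep          # random mask over connections
    return 1.0 / (1.0 + np.exp(-(W * mask) @ x))  # sigmoid unit activations

W = rng.normal(size=(4, 8))   # 8 inputs -> 4 units, illustrative weights
x = rng.normal(size=8)

# Repeated passes give different samples; their average approximates the
# expectation of the unit activations under the synaptic noise.
samples = np.array([stochastic_synapse_forward(x, W) for _ in range(1000)])
print(samples.mean(axis=0))
```

The same mask mechanism also acts as a regulariser during learning, since no single connection can be relied on in every pass.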