First steps towards an imprecise Poisson process
The Poisson process is the most elementary continuous-time stochastic process that models a stream of repeating events. It is uniquely characterised by a single parameter called the rate. Instead of a single value for this rate, we here consider a rate interval and let it characterise two nested sets of stochastic processes. We call these two sets of stochastic processes imprecise Poisson processes, explain why this is justified, and study the corresponding lower and upper (conditional) expectations. Besides a general theoretical framework, we also provide practical methods to compute lower and upper (conditional) expectations of functions that depend on the number of events at a single point in time.
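As a minimal illustration of such a computation, the sketch below bounds the expectation of a function of the number of events N(t) at a single time t by scanning constant rates in the interval. It assumes the bounds are attained by ordinary homogeneous Poisson processes with a fixed rate, which is a simplification of the paper's setting, not its exact characterisation.

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def expectation(f, lam, tol=1e-12):
    """E[f(N)] for N ~ Poisson(lam), truncating the tail once the
    remaining probability mass drops below tol."""
    total, cum, k = 0.0, 0.0, 0
    while cum < 1.0 - tol:
        p = poisson_pmf(k, lam)
        total += f(k) * p
        cum += p
        k += 1
    return total

def lower_upper_expectation(f, t, rate_interval, grid=200):
    """Crude lower/upper expectation of f(N(t)) obtained by scanning a
    grid of constant rates in the given interval (illustration only)."""
    lo, hi = rate_interval
    values = [expectation(f, t * (lo + (hi - lo) * i / grid))
              for i in range(grid + 1)]
    return min(values), max(values)

# Example: bounds on E[N(5)] when the rate lies in [0.8, 1.2].
print(lower_upper_expectation(lambda k: k, t=5.0, rate_interval=(0.8, 1.2)))
```

For a monotone function such as f(k) = k, the extremes sit at the interval endpoints, so the grid scan is only needed for non-monotone f.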
Imprecise Continuous-Time Markov Chains
Continuous-time Markov chains are mathematical models that are used to describe the state-evolution of dynamical systems under stochastic uncertainty, and have found widespread applications in various fields. In order to make these models computationally tractable, they rely on a number of assumptions that may not be realistic for the domain of application; in particular, the ability to provide exact numerical parameter assessments, and the applicability of time-homogeneity and the eponymous Markov property. In this work, we extend these models to imprecise continuous-time Markov chains (ICTMCs), which are a robust generalisation that relaxes these assumptions while remaining computationally tractable.
More technically, an ICTMC is a set of "precise" continuous-time finite-state stochastic processes, and rather than computing expected values of functions, we seek to compute lower expectations, which are tight lower bounds on the expectations that correspond to such a set of "precise" models. Note that, in contrast to e.g. Bayesian methods, all the elements of such a set are treated on equal grounds; we do not consider a distribution over this set.
The first part of this paper develops a formalism for describing continuous-time finite-state stochastic processes that does not require the aforementioned simplifying assumptions. Next, this formalism is used to characterise ICTMCs and to investigate their properties. The concept of lower expectation is then given an alternative operator-theoretic characterisation, by means of a lower transition operator, and the properties of this operator are investigated as well. Finally, we use this lower transition operator to derive tractable algorithms (with polynomial runtime complexity w.r.t. the maximum numerical error) for computing the lower expectation of functions that depend on the state at any finite number of time points.
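The following sketch illustrates the kind of iteration such a lower transition operator enables, for the simplest inference (a function of the state at one time point): an Euler-style scheme g ← g + δ·[Qg], with the lower rate operator built here from interval-valued off-diagonal rates. The interval parametrisation and the fixed step size are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def lower_rate_op(g, L, U):
    """[Qg](x) = min over rate matrices with off-diagonal entries in
    [L[x,y], U[x,y]] of sum_y Q(x,y) * (g(y) - g(x)).
    The minimum picks the lower rate towards states where g is larger
    and the upper rate towards states where g is smaller."""
    n = len(g)
    out = np.zeros(n)
    for x in range(n):
        diff = g - g[x]                      # g(y) - g(x)
        rates = np.where(diff > 0, L[x], U[x])
        rates[x] = 0.0
        out[x] = np.dot(rates, diff)
    return out

def lower_expectation(f, t, L, U, steps=10_000):
    """Approximate the lower expectation of f(X_t) by the Euler iteration
    g <- g + delta * [Qg]; the step count controls the numerical error."""
    g = np.asarray(f, dtype=float)
    delta = t / steps
    for _ in range(steps):
        g = g + delta * lower_rate_op(g, L, U)
    return g                                 # g[x] = lower E[f(X_t) | X_0 = x]

# Two-state example with interval-valued transition rates.
L = np.array([[0.0, 1.0], [0.5, 0.0]])       # lower off-diagonal rates
U = np.array([[0.0, 2.0], [1.5, 0.0]])       # upper off-diagonal rates
print(lower_expectation(f=[1.0, 0.0], t=1.0, L=L, U=U))
```

With f the indicator of a state, the iteration returns guaranteed lower bounds on the probability of occupying that state at time t, for every initial state at once.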
Computing Inferences for Large-Scale Continuous-Time Markov Chains by Combining Lumping with Imprecision
If the state space of a homogeneous continuous-time Markov chain is too large, making inferences (here limited to determining marginal or limit expectations) becomes computationally infeasible. Fortunately, the state space of such a chain is usually too detailed for the inferences we are interested in, in the sense that a less detailed, smaller state space suffices to unambiguously formalise the inference. However, in general this so-called lumped state space inhibits computing exact inferences because the corresponding dynamics are unknown and/or intractable to obtain. We address this issue by considering an imprecise continuous-time Markov chain. In this way, we are able to provide guaranteed lower and upper bounds for the inferences of interest, without suffering from the curse of dimensionality.
Comment: 9th International Conference on Soft Methods in Probability and Statistics (SMPS 2018).
Bounding inferences for large-scale continuous-time Markov chains: a new approach based on lumping and imprecise Markov chains
If the state space of a homogeneous continuous-time Markov chain is too large, making inferences becomes computationally infeasible. Fortunately, the state space of such a chain is usually too detailed for the inferences we are interested in, in the sense that a less detailed, smaller state space suffices to unambiguously formalise the inference. However, in general this so-called lumped state space inhibits computing exact inferences because the corresponding dynamics are unknown and/or intractable to obtain. We address this issue by considering an imprecise continuous-time Markov chain. In this way, we are able to provide guaranteed lower and upper bounds for the inferences of interest, without suffering from the curse of dimensionality.
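As a hedged sketch of how such an imprecise lumped chain might be parametrised: for each pair of lumps, take the interval spanned by the aggregated rates of the individual detailed states in the source lump. This is an illustrative construction, not necessarily the papers' exact one; the resulting bounds L, U could then drive a lower-transition-operator iteration like the one sketched above for ICTMCs.

```python
import numpy as np

def lumped_rate_intervals(Q, lump_of):
    """Interval bounds [L, U] on the lumped transition rates.
    Q       : (n, n) rate matrix of the detailed chain.
    lump_of : length-n array mapping each detailed state to its lump.
    For each detailed state x and target lump B, aggregate the rate
    sum_{y in B} Q[x, y]; the interval for lump A -> lump B spans these
    aggregated rates over all x in A (an illustrative construction)."""
    lumps = np.unique(lump_of)
    m = len(lumps)
    L = np.zeros((m, m))
    U = np.zeros((m, m))
    for a, A in enumerate(lumps):
        for b, B in enumerate(lumps):
            if a == b:
                continue                     # diagonal is implied by the rows
            agg = Q[lump_of == A][:, lump_of == B].sum(axis=1)
            L[a, b], U[a, b] = agg.min(), agg.max()
    return L, U

# Four detailed states lumped into two blocks {0, 1} and {2, 3}.
Q = np.array([[-3.0,  1.0,  1.0,  1.0],
              [ 2.0, -4.0,  1.0,  1.0],
              [ 0.5,  0.5, -2.0,  1.0],
              [ 1.0,  1.0,  1.0, -3.0]])
print(lumped_rate_intervals(Q, np.array([0, 0, 1, 1])))
```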
Genetic algorithms with DNN-based trainable crossover as an example of partial specialization of general search
Universal induction relies on some general search procedure that is doomed to be inefficient. One possibility to achieve both generality and efficiency is to specialize this procedure w.r.t. any given narrow task. However, complete specialization that implies direct mapping from the task parameters to solutions (discriminative models) without search is not always possible. In this paper, partial specialization of general search is considered in the form of genetic algorithms (GAs) with a specialized crossover operator. We perform a feasibility study of this idea, implementing such an operator in the form of a deep feedforward neural network. GAs with trainable crossover operators are compared with the result of complete specialization, which is also represented as a deep neural network. Experimental results show that specialized GAs can be more efficient than both general GAs and discriminative models.
Comment: AGI 2017 proceedings. The final publication is available at link.springer.com.
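A minimal sketch of the idea: a small feedforward network maps a concatenated pair of parents to an offspring and replaces the usual crossover in an otherwise plain GA. The architecture, the toy fitness function, and the fact that the network is left untrained here are all illustrative assumptions; in the paper's setting the operator would be trained on a family of related tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralCrossover:
    """Tiny feedforward net: concatenated parents -> offspring.
    Weights are random (untrained) for brevity; training them on the
    target task family is the point of the approach."""
    def __init__(self, dim, hidden=32):
        self.W1 = rng.normal(0, 0.5, (2 * dim, hidden))
        self.W2 = rng.normal(0, 0.5, (hidden, dim))

    def __call__(self, p1, p2):
        h = np.tanh(np.concatenate([p1, p2]) @ self.W1)
        return np.tanh(h @ self.W2)          # offspring in [-1, 1]^dim

def ga(fitness, dim, crossover, pop_size=50, gens=100, sigma=0.05):
    """Plain generational GA: binary tournament selection, learned
    crossover, Gaussian mutation."""
    pop = rng.uniform(-1, 1, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        def pick():                          # binary tournament
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fit[i] > fit[j] else pop[j]
        pop = np.array([crossover(pick(), pick()) +
                        rng.normal(0, sigma, dim) for _ in range(pop_size)])
    return max(pop, key=fitness)

# Toy task: maximise -||x - 0.5||^2 with the (untrained) neural crossover.
dim = 8
best = ga(lambda x: -np.sum((x - 0.5) ** 2), dim, NeuralCrossover(dim))
print(best)
```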
Interval reliability inference for multi-component systems
This thesis is a collection of investigations on applications of imprecise probability theory to system reliability engineering, with emphasis on using survival signatures for modelling complex systems. Survival signatures provide an efficient representation of system structure and facilitate several reliability assessments by separating the computationally expensive combinatorial part from the subsequent evaluations, which have only polynomial complexity. This proves useful in situations that also account for statistical inference on system component lifetime distributions, where Bayesian methods require repeated numerical propagation for samples from the posterior distribution. Similarly, statistical methods involving imprecise probabilistic models, composed of sets of precise probability distributions, also benefit from the simplification afforded by the signature representation. We argue for the pragmatic benefits of using statistical models based on imprecise probabilities in reliability engineering, from the perspective of inferential validity and the provision of objective guarantees for the statistical procedures. Imprecise probability methods generally require solving an optimization problem to obtain bounds on the assessments of interest, but monotone system structures simplify these problems without much additional complexity. This simplification extends to survival signature models, so many reliability assessments with imprecise (interval) component lifetime models remain tractable, as will be demonstrated on several examples.
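For a single component type, the survival signature representation reduces to a mixture over binomial counts, and for a coherent (monotone) system the bounds under an interval-valued component reliability are attained at the interval endpoints. The sketch below illustrates this; the 2-out-of-4 signature is a made-up example.

```python
from math import comb

def system_survival(phi, m, r):
    """P(system survives) for a single component type:
    sum_l phi[l] * C(m, l) * r^l * (1 - r)^(m - l),
    where phi[l] is the survival signature: the probability that the
    system works given that exactly l of its m components work."""
    return sum(phi[l] * comb(m, l) * r**l * (1 - r)**(m - l)
               for l in range(m + 1))

def survival_bounds(phi, m, r_low, r_high):
    """For a coherent system phi is non-decreasing in l, so the survival
    probability is non-decreasing in the component reliability r and the
    bounds are attained at the endpoints of the reliability interval."""
    return system_survival(phi, m, r_low), system_survival(phi, m, r_high)

# 2-out-of-4 system: works whenever at least 2 of its 4 components work.
phi = [0.0, 0.0, 1.0, 1.0, 1.0]
print(survival_bounds(phi, m=4, r_low=0.85, r_high=0.95))
```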
Credal Networks under Epistemic Irrelevance
A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally.
The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning.
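As a hedged illustration of such a recursive method, the sketch below runs a backward recursion of lower expectations on the simplest special case of a chain-shaped network, i.e. an imprecise Markov chain with interval-valued transition probabilities. The paper's methods cover general trees; the interval parametrisation here is an illustrative assumption.

```python
import numpy as np

def lower_transition(g, P_low, P_high):
    """Lower expectation of g(X_{k+1}) given X_k = x, minimised over all
    transition rows p with P_low[x] <= p <= P_high[x] and sum(p) == 1.
    Greedy solution of this linear program: start from the lower bounds
    and spend the remaining mass on the states where g is smallest."""
    n = len(g)
    out = np.empty(n)
    for x in range(n):
        p = P_low[x].copy()
        slack = 1.0 - p.sum()
        for y in np.argsort(g):              # cheapest states first
            add = min(P_high[x, y] - p[y], slack)
            p[y] += add
            slack -= add
        out[x] = np.dot(p, g)
    return out

def lower_expectation_chain(f, steps, P_low, P_high):
    """Backward recursion: lower expectation of f(X_steps) as a function
    of the initial state of the imprecise chain."""
    g = np.asarray(f, dtype=float)
    for _ in range(steps):
        g = lower_transition(g, P_low, P_high)
    return g

# Two states with interval-valued transition probabilities.
P_low  = np.array([[0.6, 0.3], [0.2, 0.7]])
P_high = np.array([[0.7, 0.4], [0.3, 0.8]])
print(lower_expectation_chain(f=[1.0, 0.0], steps=10,
                              P_low=P_low, P_high=P_high))
```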
Robust estimation of risks from small samples
Data-driven risk analysis involves the inference of probability distributions from measured or simulated data. In the case of a highly reliable system, such as the electricity grid, the amount of relevant data is often exceedingly limited, but the impact of estimation errors may be very large. This paper presents a robust nonparametric Bayesian method to infer possible underlying distributions. The method obtains rigorous error bounds even for small samples taken from ill-behaved distributions. The approach taken has a natural interpretation in terms of the intervals between ordered observations, where allocation of probability mass across intervals is well-specified, but the location of that mass within each interval is unconstrained. This formulation gives rise to a straightforward computational resampling method: Bayesian Interval Sampling. In a comparison with common alternative approaches, it is shown to satisfy strict error bounds even for ill-behaved distributions.
Comment: 13 pages, 3 figures; supplementary information provided. A revised version of this manuscript has been accepted for publication in Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences.
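A minimal sketch of the interval idea described above: each resampling draw assigns Dirichlet weights to the intervals between ordered observations (augmented with the support endpoints), and bounds on an expectation follow by pushing each interval's mass to whichever endpoint extremises the integrand. The uniform Dirichlet(1, ..., 1) weighting and the known bounded support are illustrative assumptions, not necessarily the paper's exact prior.

```python
import numpy as np

rng = np.random.default_rng(1)

def interval_sampling_bounds(data, g, support, n_draws=10_000):
    """Resampling bounds on E[g(X)] in the spirit of Bayesian Interval
    Sampling: per draw, Dirichlet weights over the intervals between
    ordered observations; the location of the mass inside each interval
    is unconstrained, so lower/upper bounds come from evaluating g at
    the worst/best endpoint of each interval."""
    pts = np.concatenate([[support[0]], np.sort(data), [support[1]]])
    lo_vals = np.minimum(g(pts[:-1]), g(pts[1:]))   # worst endpoint per interval
    hi_vals = np.maximum(g(pts[:-1]), g(pts[1:]))   # best endpoint per interval
    w = rng.dirichlet(np.ones(len(pts) - 1), size=n_draws)
    return w @ lo_vals, w @ hi_vals                 # per-draw lower/upper E[g]

# Example: bounds on the mean of a quantity observed 8 times on [0, 10].
data = np.array([0.3, 0.8, 1.1, 1.9, 2.5, 3.1, 4.2, 6.0])
lo, hi = interval_sampling_bounds(data, g=lambda x: x, support=(0.0, 10.0))
print(lo.mean(), hi.mean())   # posterior means of the two bounds
```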