Different Approaches on Stochastic Reachability as an Optimal Stopping Problem
Reachability analysis is at the core of model checking of timed systems. For
stochastic hybrid systems, this safety verification method has little support, mainly
because of the complexity and difficulty of the associated mathematical problems. In this
paper, we develop two main directions for studying stochastic reachability as an optimal
stopping problem. The first approach examines the hypotheses under which dynamic programming
applies to the optimal stopping problem for stochastic hybrid systems.
In the second approach, we investigate the reachability problem using approximations
of stochastic hybrid systems. The main difficulty is proving the
convergence of the value functions of the approximating processes to the value function
of the initial process. An original proof is provided.
Fast MCMC sampling for Markov jump processes and extensions
Markov jump processes (or continuous-time Markov chains) are a simple and
important class of continuous-time dynamical systems. In this paper, we tackle
the problem of simulating from the posterior distribution over paths in these
models, given partial and noisy observations. Our approach is an auxiliary
variable Gibbs sampler, and is based on the idea of uniformization. This sets
up a Markov chain over paths by alternately sampling a finite set of virtual
jump times given the current path and then sampling a new path given the set of
extant and virtual jump times using a standard hidden Markov model forward
filtering-backward sampling algorithm. Our method is exact and does not involve
approximations like time-discretization. We demonstrate how our sampler extends
naturally to MJP-based models like Markov-modulated Poisson processes and
continuous-time Bayesian networks and show significant computational benefits
over state-of-the-art MCMC samplers for these models.
Comment: Accepted at the Journal of Machine Learning Research (JMLR).
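Uniformization, the device the sampler above is built on, can be sketched in a few lines (an illustration of our own, not the paper's code; the truncation level `K` and function names are arbitrary choices): with a rate `Omega >= max_i |Q_ii|` and the discrete-time kernel `B = I + Q/Omega`, the transition matrix of the jump process is a Poisson mixture of powers of `B`.

```python
import numpy as np
from math import exp

# Uniformization: exp(Q t) = sum_k Pois(k; Omega*t) * B^k, where
# B = I + Q/Omega is a proper stochastic matrix. The candidate jump
# times of the chain form a Poisson(Omega) process, which is exactly
# the set of "virtual jump times" the Gibbs sampler alternates over.

def transition_matrix(Q, t, K=40):
    Q = np.asarray(Q, dtype=float)
    Omega = np.max(-np.diag(Q))          # uniformization rate
    B = np.eye(Q.shape[0]) + Q / Omega   # embedded discrete-time kernel
    P = np.zeros_like(Q)
    Bk = np.eye(Q.shape[0])
    w = exp(-Omega * t)                  # Poisson weight for k = 0
    for k in range(K):
        P += w * Bk
        Bk = Bk @ B
        w *= Omega * t / (k + 1)         # Poisson weight recursion
    return P

# Two-state chain with rates 1 (state 0 -> 1) and 2 (state 1 -> 0).
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
P = transition_matrix(Q, t=0.5)
```

For this two-state chain the result can be checked against the closed form `P[0,0] = 2/3 + exp(-3t)/3`, so the truncated series is exact to machine precision here.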
Some simple but challenging Markov processes
In this note, we present a few examples of Piecewise Deterministic Markov
Processes and their long-time behavior. They share two important features: they
are related to concrete models (in biology, networks, chemistry, ...) and they
are mathematically rich. Their mathematical study relies on coupling methods,
spectral decomposition, PDE techniques, and functional inequalities. We also relate
these simple examples to recent and open problems.
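One classic member of this family is the TCP window-size process: the state grows linearly between jumps and is halved at a state-dependent rate. The sketch below (our illustration; the parameters and function name are made up) simulates it exactly by inverting the integrated jump rate against an exponential clock.

```python
import random, math

# PDMP sketch: deterministic flow dx/dt = 1, jump rate lambda(x) = x,
# jump map x -> x/2. Between jumps x(s) = x + s, so the integrated rate
# up to s is x*s + s**2/2; setting this equal to an Exp(1) draw E and
# solving the quadratic gives the next jump time.

def simulate_tcp(x0=1.0, T=100.0, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, x0
    jumps = []
    while True:
        e = rng.expovariate(1.0)
        s = -x + math.sqrt(x * x + 2.0 * e)   # solves x*s + s^2/2 = e
        if t + s > T:
            return jumps
        t += s
        x = (x + s) / 2.0                     # flow to the jump, then halve
        jumps.append((t, x))

path = simulate_tcp()
```

Because the flow and the rate are both explicit, no thinning or time discretization is needed, which is part of what makes such examples simple to simulate yet rich to analyze.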
Queuing with future information
We study an admissions control problem, where a queue with service rate
receives incoming jobs at rate , and the decision maker is
allowed to redirect away jobs up to a rate of , with the objective of
minimizing the time-average queue length. We show that the amount of
information about the future has a significant impact on system performance, in
the heavy-traffic regime. When the future is unknown, the optimal average queue
length diverges at rate , as . In sharp contrast, when all future arrival and service times are revealed
beforehand, the optimal average queue length converges to a finite constant,
, as . We further show that the finite limit of
can be achieved using only a finite lookahead window starting from the current
time frame, whose length scales as , as
. This leads to the conjecture of an interesting duality between
queuing delay and the amount of information about the future.
Comment: Published in the Annals of Applied Probability
(http://www.imstat.org/aap/) by the Institute of Mathematical Statistics;
DOI: http://dx.doi.org/10.1214/13-AAP973.
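The no-information side of this setup can be illustrated with a toy simulation (a discrete-time sketch of our own with made-up parameters, not the paper's continuous-time model or its optimal policy): a queue with Bernoulli arrivals and services, where the controller diverts an arriving job whenever the queue exceeds a threshold.

```python
import random

# Toy admission control: admit arrivals while the queue is below a
# threshold, divert them otherwise, and track the time-average queue
# length and the fraction of slots in which a job was diverted.

def simulate(lam=0.9, mu=0.95, threshold=5, T=10000, seed=1):
    rng = random.Random(seed)
    q, area, diverted = 0, 0, 0
    for _ in range(T):
        if rng.random() < lam:            # an arrival occurs
            if q >= threshold:
                diverted += 1             # redirect the job away
            else:
                q += 1
        if q > 0 and rng.random() < mu:   # a service completion
            q -= 1
        area += q
    return area / T, diverted / T

avg_queue, diversion_rate = simulate()
```

A threshold policy like this only reacts to the current queue length; the paper's point is that revealing future arrivals and services (even through a finite lookahead window) changes the achievable average queue length qualitatively in heavy traffic.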
The Hitchhiker's Guide to Nonlinear Filtering
Nonlinear filtering is the problem of online estimation of a dynamic hidden
variable from incoming data, and it has vast applications in fields
ranging from engineering and machine learning to economics and the natural
sciences. We start our review of the theory of nonlinear filtering from the
simplest `filtering' task we can think of, namely static Bayesian inference.
From there we continue our journey through discrete-time models, which are
usually encountered in machine learning, and generalize to, and further
emphasize, continuous-time filtering theory. The idea of changing the
probability measure connects and elucidates several aspects of the theory, such
as the parallels between the discrete- and continuous-time problems and between
different observation models. Furthermore, it gives insight into the
construction of particle filtering algorithms. This tutorial is targeted at
scientists and engineers and should serve as an introduction to the main ideas
of nonlinear filtering, and as a segue to more advanced and specialized
literature.
Comment: 64 pages.
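A minimal bootstrap particle filter gives a feel for the algorithms the tutorial builds toward (a sketch of our own; the AR(1) model, parameters, and function name are illustrative choices, not the tutorial's code):

```python
import random, math

# Bootstrap particle filter for a hidden AR(1) state
# x_t = a*x_{t-1} + N(0, sx^2), observed as y_t = x_t + N(0, sy^2).
# Propagate particles through the dynamics, weight by the observation
# likelihood, then resample to equal weights.

def particle_filter(ys, n=500, a=0.9, sx=0.5, sy=0.5, seed=0):
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]   # prior draws
    means = []
    for y in ys:
        # Propagate through the dynamics (the "bootstrap" proposal).
        parts = [a * x + rng.gauss(0.0, sx) for x in parts]
        # Weight by the observation likelihood N(y; x, sy^2).
        ws = [math.exp(-0.5 * ((y - x) / sy) ** 2) for x in parts]
        tot = sum(ws)
        means.append(sum(w * x for w, x in zip(ws, parts)) / tot)
        # Multinomial resampling back to equal weights.
        parts = rng.choices(parts, weights=ws, k=n)
    return means

ys = [0.0, 0.5, 1.0, 1.2, 1.1]
est = particle_filter(ys)
```

The weight-then-resample step is exactly where the change-of-measure viewpoint enters: the likelihood weights are the Radon-Nikodym factors that reweight the proposal paths toward the posterior.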