Optimal control of continuous-time Markov chains with noise-free observation
We consider an infinite-horizon optimal control problem for a continuous-time
Markov chain $X$ taking values in a finite set $I$, with noise-free partial
observation. The observation process is defined as $Y_t = h(X_t)$, $t \ge 0$, where $h$ is a
given map defined on $I$. The observation is noise-free in the sense that the
only source of randomness is the process $X$ itself. The aim is to minimize a
discounted cost functional and study the associated value function $V$. After
transforming the control problem with partial observation into one with
complete observation (the separated problem) using filtering equations, we
provide a link between the value function $v$ associated to the latter control
problem and the original value function $V$. Then, we present two different
characterizations of $v$ (and indirectly of $V$): on the one hand as the unique
fixed point of a suitably defined contraction mapping, and on the other hand as
the unique constrained viscosity solution (in the sense of Soner) of an HJB
integro-differential equation. Under suitable assumptions, we finally prove the
existence of an optimal control.
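As a toy illustration of the noise-free observation structure described above, the following minimal Python sketch (all rates and the map below are hypothetical, not taken from the paper) simulates a finite-state chain with generator matrix Q and an observation map h that merges two states, so the observer cannot distinguish them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain; Q is the generator matrix (rows sum to 0).
Q = np.array([[-1.0, 0.6, 0.4],
              [0.5, -1.5, 1.0],
              [0.2, 0.8, -1.0]])

# Noise-free observation map h: states 0 and 1 are indistinguishable
# (both map to observation 0), state 2 maps to observation 1.
h = np.array([0, 0, 1])

def simulate(T, x0=0):
    """Gillespie simulation of the chain; returns jump times, states, observations."""
    t, x = 0.0, x0
    jump_times, states = [0.0], [x0]
    while True:
        rate = -Q[x, x]                      # total exit rate of current state
        t += rng.exponential(1.0 / rate)     # exponential holding time
        if t > T:
            break
        probs = Q[x].copy()
        probs[x] = 0.0
        x = int(rng.choice(len(Q), p=probs / rate))  # jump to next state
        jump_times.append(t)
        states.append(x)
    states = np.array(states)
    return np.array(jump_times), states, h[states]

times, states, obs = simulate(10.0)
# The observer sees only Y_t = h(X_t): jumps between states 0 and 1
# are invisible unless the observation value changes.
print(list(zip(times.round(2), states, obs)))
```

The only randomness seen by the observer comes from the chain itself, which is exactly the noise-free setting of the abstract.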
Approximate Kalman-Bucy filter for continuous-time semi-Markov jump linear systems
The aim of this paper is to propose a new numerical approximation of the
Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is
based on the selection of typical trajectories of the driving semi-Markov chain
of the process by using an optimal quantization technique. The main advantage
of this approach is that it makes pre-computations possible. We derive a
Lipschitz property for the solution of the Riccati equation and a general
result on the convergence of perturbed solutions of semi-Markov switching
Riccati equations when the perturbation comes from the driving semi-Markov
chain. Based on these results, we prove the convergence of our approximation
scheme in a general infinite countable state space framework and derive an
error bound in terms of the quantization error and time discretization step. We
employ the proposed filter in a magnetic levitation example with Markovian
failures and compare its performance with both the Kalman-Bucy filter and the
Markovian linear minimum mean square estimator.
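The Kalman-Bucy filter that the paper approximates can be sketched, for a hypothetical scalar jump linear system (a fixed piecewise-constant regime path stands in for the semi-Markov chain, and all constants are illustrative), by Euler-discretizing the filter mean and the switching Riccati equation:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 5.0
n = int(T / dt)

# Hypothetical scalar jump linear system: regime i selects drift a[i].
a = np.array([-1.0, -3.0])           # regime-dependent drift
sigma, r = 0.5, 0.2                  # state / observation noise intensities

# Piecewise-constant regime path (stand-in for the driving semi-Markov chain).
theta = np.where(np.arange(n) * dt < 2.5, 0, 1)

x, m, P = 1.0, 0.0, 1.0              # true state, filter mean, filter variance
err = []
for k in range(n):
    A = a[theta[k]]
    dW, dV = rng.normal(0, np.sqrt(dt), 2)
    dy = x * dt + r * dV             # observation increment (C = 1)
    # Kalman-Bucy filter and Riccati equation, Euler-discretized:
    m += A * m * dt + (P / r**2) * (dy - m * dt)
    P += (2 * A * P + sigma**2 - P**2 / r**2) * dt
    x += A * x * dt + sigma * dW     # propagate the true state
    err.append((x - m) ** 2)

print(np.mean(err[n // 2:]))         # empirical steady-state squared error
```

Because the Riccati variance P depends on the whole regime path, pre-computing it over quantized typical trajectories (as the paper proposes) is what makes an online implementation cheap.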
Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain
This work is devoted to the almost sure stabilization of adaptive control
systems that involve an unknown Markov chain. The control system displays
continuous dynamics represented by differential equations and discrete events
given by a hidden Markov chain. In contrast to previous work on the
stabilization of adaptively controlled systems with a hidden Markov chain,
where average criteria were considered, this work focuses on the almost sure
(sample path) stabilization of the underlying processes. Under simple conditions,
it is shown that as long as the feedback controls have linear growth in the
continuous component, the resulting process is regular. Moreover, by
appropriate choice of the Lyapunov functions, it is shown that the adaptive
system is stabilizable almost surely. As a by-product, it is also established
that the controlled process is positive recurrent.
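The flavor of the result can be seen in a small simulation. The sketch below (a hypothetical two-regime scalar system with illustrative constants, not the paper's model) applies a feedback with linear growth, u = -k x, and shows the sample path decaying even though one regime is open-loop unstable:

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1e-3, 20000

# Hypothetical two-regime scalar system dx = (a[theta] x + u) dt + s x dW,
# with linear-growth feedback u = -k x, as the almost sure result requires.
a = np.array([1.0, -0.5])    # regime 0 is unstable in open loop
s, k = 0.3, 2.0              # noise intensity, feedback gain
q = np.array([2.0, 1.0])     # hidden-regime switching rates

x, theta = 1.0, 0
path = []
for _ in range(n):
    if rng.random() < q[theta] * dt:     # regime switch of the hidden chain
        theta = 1 - theta
    u = -k * x                           # linear feedback
    x += (a[theta] * x + u) * dt + s * x * rng.normal(0, np.sqrt(dt))
    path.append(abs(x))

print(max(path), path[-1])               # sample path stays bounded and decays
```

With this gain the closed-loop drift a[theta] - k is negative in both regimes, so the sample path converges almost surely, consistent with a Lyapunov-function argument.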
The Hitchhiker's Guide to Nonlinear Filtering
Nonlinear filtering is the problem of online estimation of a dynamic hidden
variable from incoming data and has vast applications in fields ranging
from engineering and machine learning to economics and the natural
sciences. We start our review of the theory of nonlinear filtering from the
simplest `filtering' task we can think of, namely static Bayesian inference.
From there we continue our journey through discrete-time models, which are
usually encountered in machine learning, before generalizing to, and placing
particular emphasis on, continuous-time filtering theory. The idea of changing the
probability measure connects and elucidates several aspects of the theory, such
as the parallels between the discrete- and continuous-time problems and between
different observation models. Furthermore, it gives insight into the
construction of particle filtering algorithms. This tutorial is targeted at
scientists and engineers and should serve as an introduction to the main ideas
of nonlinear filtering, and as a segue to more advanced and specialized
literature.
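The particle-filtering construction mentioned in the abstract can be illustrated by a standard bootstrap particle filter on a hypothetical scalar linear-Gaussian model (all model constants below are illustrative; in this special case the exact answer is the Kalman filter):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical model: x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t.
T, N = 100, 500
sq, sr = 0.5, 0.5                          # process / observation noise std

x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + sq * rng.normal()
y = x + sr * rng.normal(size=T)            # noisy observations

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(0, 1, N)
est = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + sq * rng.normal(size=N)  # propagate
    logw = -0.5 * ((y[t] - particles) / sr) ** 2           # Gaussian log-lik
    w = np.exp(logw - logw.max())
    w /= w.sum()                                           # normalized weights
    est[t] = w @ particles                                 # posterior mean
    particles = particles[rng.choice(N, N, p=w)]           # multinomial resampling

print(np.mean((est - x) ** 2))             # filter mean squared error
```

The importance weights here are exactly a discrete-time change of measure from the prior dynamics to the posterior, which is the connecting idea the tutorial emphasizes.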
A filtering approach to tracking volatility from prices observed at random times
This paper is concerned with nonlinear filtering of the coefficients in asset
price models with stochastic volatility. More specifically, we assume that the
asset price process $S$ is given by $dS_t = v(\theta_t)\,S_t\,dB_t$, where
$B$ is a Brownian motion, $v$ is a positive function, and
$\theta$ is a c\`{a}dl\`{a}g strong Markov process. The
random process $\theta$ is unobservable. We assume also that the asset price
$S$ is observed only at random times $0 < \tau_1 < \tau_2 < \cdots$. This is an
appropriate assumption when modelling high frequency financial data (e.g.,
tick-by-tick stock prices).
In the above setting the problem of estimation of $\theta$ can be approached
as a special nonlinear filtering problem with measurements generated by a
multivariate point process $(\tau_k, \log S_{\tau_k})_{k \ge 1}$. While quite natural,
this problem does not fit into the standard diffusion or simple point process
filtering frameworks and requires more technical tools. We derive a closed-form
optimal recursive Bayesian filter for $\theta$, based on the observations
of $(\tau_k, \log S_{\tau_k})_{k \ge 1}$. It turns out that the filter is
given by a recursive system that involves only deterministic Kolmogorov-type
equations, which should make the numerical implementation relatively easy.
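The observation scheme described above can be sketched as follows: a hypothetical two-state Markov chain modulates the volatility, the log-price diffuses, and prices are recorded only at Poisson arrival times, mimicking tick data (all parameters below are illustrative, and no filtering is performed, only the data-generation step):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: theta is a two-state Markov chain modulating the
# volatility v(theta); the price follows dS = v(theta) S dB, and we record
# S only at Poisson(lam) arrival times tau_k, as with tick-by-tick data.
v = np.array([0.1, 0.4])       # regime volatilities
q = np.array([1.0, 1.0])       # regime switching rates
lam = 50.0                     # observation intensity (ticks per unit time)
dt, T = 1e-4, 1.0

n = int(T / dt)
S, theta = 100.0, 0
obs_times, obs_prices = [], []
for k in range(n):
    if rng.random() < q[theta] * dt:     # volatility regime switch
        theta = 1 - theta
    # exact log-normal step for constant volatility over [t, t+dt):
    S *= np.exp(v[theta] * np.sqrt(dt) * rng.normal()
                - 0.5 * v[theta] ** 2 * dt)
    if rng.random() < lam * dt:          # a random observation time tau_k
        obs_times.append(k * dt)
        obs_prices.append(S)

# The filtering problem of the paper: infer theta_t from the marked point
# process (tau_k, log S_{tau_k}); here we only report the observed data.
print(len(obs_times), obs_prices[-1])
```

The filter itself would update the conditional law of theta between ticks via Kolmogorov-type equations and apply a Bayes correction at each tau_k, which is the structure the abstract describes.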