Models and Feedback Stabilization of Open Quantum Systems
At the quantum level, feedback loops have to take into account measurement
back-action. We present here the structure of the Markovian models including
such back-action and sketch two stabilization methods: measurement-based
feedback where an open quantum system is stabilized by a classical controller;
coherent or autonomous feedback where a quantum system is stabilized by a
quantum controller with decoherence (reservoir engineering). We first explain
these models and methods for the photon-box experiments realized in the group
of Serge Haroche (Nobel Prize 2012). We then present these models and methods
for general open quantum systems.
Comment: Extended version of the paper attached to an invited conference for
the International Congress of Mathematicians in Seoul, August 13-21, 2014
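The Markovian models with measurement back-action mentioned in this abstract are, in their ensemble-averaged form, master equations of Lindblad type; as a standard-form reminder (notation not quoted from the paper itself):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho]
  + \sum_k \left( L_k \rho L_k^\dagger
  - \tfrac{1}{2}\{L_k^\dagger L_k,\, \rho\} \right)
```

Here \(\rho\) is the density operator, \(H\) the Hamiltonian, and the \(L_k\) the operators modeling decoherence channels and measurement back-action; measurement-based feedback conditions \(\rho\) on the measurement record, while coherent feedback engineers the \(L_k\) themselves.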
A Gaussian Mixture PHD Filter for Jump Markov System Models
The probability hypothesis density (PHD) filter is an attractive approach to tracking an unknown and time-varying number of targets in the presence of data association uncertainty, clutter, noise, and detection uncertainty. The PHD filter admits a closed-form solution for a linear Gaussian multi-target model. However, this model is not general enough to accommodate maneuvering targets that switch between several models. In this paper, we generalize the notion of linear jump Markov systems to the multiple-target case to accommodate births, deaths, and switching dynamics. We then derive a closed-form solution to the PHD recursion for the proposed linear Gaussian jump Markov multi-target model. Based on this, an efficient method for tracking multiple maneuvering targets that switch between a set of linear Gaussian models is developed. An analytic implementation of the PHD filter using a statistical linear regression technique is also proposed for targets that switch between a set of nonlinear models. We demonstrate through simulations that the proposed PHD filters are effective in tracking multiple maneuvering targets.
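The closed-form PHD recursion for the linear Gaussian case propagates a weighted mixture of Gaussian components through a Kalman-style predict/update. The sketch below covers only the single-model linear Gaussian building block (mode augmentation for the jump Markov case is the paper's extension); all names and parameter choices are illustrative, not the paper's notation.

```python
import numpy as np

def gaussian_density(z, mean, cov):
    """Multivariate normal density N(z; mean, cov)."""
    d = z - mean
    k = len(z)
    return np.exp(-0.5 * d @ np.linalg.solve(cov, d)) / np.sqrt(
        (2 * np.pi) ** k * np.linalg.det(cov))

def gmphd_predict(components, F, Q, p_survive, births):
    """Propagate each Gaussian component (w, m, P) through the motion model."""
    predicted = [(p_survive * w, F @ m, F @ P @ F.T + Q)
                 for (w, m, P) in components]
    return predicted + list(births)  # birth intensity components appended as-is

def gmphd_update(components, measurements, H, R, p_detect, clutter_intensity):
    """Closed-form PHD update for a linear Gaussian measurement model."""
    # Missed-detection terms keep the prior components with down-weighted mass.
    updated = [((1.0 - p_detect) * w, m, P) for (w, m, P) in components]
    for z in measurements:
        cands = []
        for (w, m, P) in components:
            S = H @ P @ H.T + R              # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            q = p_detect * w * gaussian_density(z, H @ m, S)
            cands.append((q, m + K @ (z - H @ m),
                          (np.eye(len(m)) - K @ H) @ P))
        norm = clutter_intensity + sum(q for (q, _, _) in cands)
        updated += [(q / norm, m, P) for (q, m, P) in cands]
    return updated
```

In practice the mixture is followed by pruning/merging of components, and the expected target number is read off as the sum of the weights.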
Approximate Kalman-Bucy filter for continuous-time semi-Markov jump linear systems
The aim of this paper is to propose a new numerical approximation of the
Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is
based on the selection of typical trajectories of the driving semi-Markov chain
of the process by using an optimal quantization technique. The main advantage
of this approach is that it makes pre-computations possible. We derive a
Lipschitz property for the solution of the Riccati equation and a general
result on the convergence of perturbed solutions of semi-Markov switching
Riccati equations when the perturbation comes from the driving semi-Markov
chain. Based on these results, we prove the convergence of our approximation
scheme in a general infinite countable state space framework and derive an
error bound in terms of the quantization error and time discretization step. We
employ the proposed filter in a magnetic levitation example with Markovian
failures and compare its performance with both the Kalman-Bucy filter and the
Markovian linear minimum mean-squares estimator.
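The object being pre-computed along quantized trajectories is the solution of the mode-switching filtering Riccati equation. A minimal sketch, assuming a mode-dependent model dx = A x dt + noise, dy = C x dt + noise, with Euler time discretization (function and variable names are illustrative, not the paper's):

```python
import numpy as np

def riccati_rhs(P, A, C, Q, R):
    """Right-hand side of the Kalman-Bucy filtering Riccati equation:
    dP/dt = A P + P A' + Q - P C' R^{-1} C P."""
    return A @ P + P @ A.T + Q - P @ C.T @ np.linalg.solve(R, C @ P)

def propagate_covariance(P0, modes, trajectory, dt):
    """Euler-integrate the switching Riccati equation along one sampled
    trajectory of the driving chain (e.g. a quantized representative path).

    `trajectory` is a sequence of mode indices, one per time step;
    `modes[i] = (A, C, Q, R)` gives the parameters active in mode i.
    """
    P = P0.copy()
    for i in trajectory:
        A, C, Q, R = modes[i]
        P = P + dt * riccati_rhs(P, A, C, Q, R)
    return P
```

The paper's Lipschitz and perturbation results are what justify replacing the true semi-Markov path by a nearby quantized one in such a propagation without losing control of the error.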
Optimal LQG Control Across a Packet-Dropping Link
We examine optimal Linear Quadratic Gaussian (LQG) control for a system in which communication between the sensor (output of the plant) and the controller occurs across a packet-dropping link. We extend the familiar LQG separation principle to this setting, which allows us to solve the problem using a standard LQR state-feedback design, along with an optimal algorithm for propagating and using the information across the unreliable link. We present one such optimal algorithm, which consists of a Kalman filter at the sensor side of the link and a switched linear filter at the controller side. Our design does not assume any statistical model of the packet-drop events and is thus optimal for an arbitrary packet-drop pattern. Further, the solution is appealing from a practical point of view because it can be implemented as a small modification of an existing LQG control design.
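The "switched linear filter at the controller side" described above can be sketched as follows: on packet arrival the controller adopts the sensor-side Kalman estimate, and on a drop it propagates its previous estimate through the plant model. This is a minimal illustration of that switching structure, with names and interfaces of my own choosing, not the paper's notation.

```python
import numpy as np

class ControllerSideEstimator:
    """Switched linear filter on the controller side of a lossy link."""

    def __init__(self, A, B, x0):
        self.A, self.B = A, B   # plant dynamics x[k+1] = A x[k] + B u[k]
        self.x = x0             # current controller-side state estimate

    def step(self, received, sensor_estimate, u_prev):
        if received:
            # Packet arrived: adopt the sensor-side Kalman filter output.
            self.x = sensor_estimate
        else:
            # Packet dropped: propagate the previous estimate open-loop,
            # using the known previously applied input.
            self.x = self.A @ self.x + self.B @ u_prev
        return self.x
```

The state feedback u = -L x then uses this estimate, with L from the standard LQR design, per the separation structure claimed in the abstract.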
Optimal control of continuous-time Markov chains with noise-free observation
We consider an infinite horizon optimal control problem for a continuous-time
Markov chain in a finite set with noise-free partial observation. The
observation process is defined as Y_t = h(X_t), t ≥ 0, where h is a given
map defined on the state space of the chain. The observation is noise-free in
the sense that the only source of randomness is the process X itself. The aim
is to minimize a discounted cost functional and study the associated value
function V. After
transforming the control problem with partial observation into one with
complete observation (the separated problem) using filtering equations, we
provide a link between the value function v associated with the latter control
problem and the original value function V. Then, we present two different
characterizations of v (and, indirectly, of V): on the one hand as the unique
fixed point of a suitably defined contraction mapping and on the other hand as
the unique constrained viscosity solution (in the sense of Soner) of an HJB
integro-differential equation. Under suitable assumptions, we finally prove the
existence of an optimal control.
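The fixed-point characterization mentioned in this abstract rests on the Banach fixed-point theorem: the discounted Bellman operator is a contraction, so iterating it converges to the unique value function. As a generic illustration only (a discrete-time, fully observed, finite-state sketch, not the paper's continuous-time, partially observed operator):

```python
import numpy as np

def value_iteration(cost, P, beta, tol=1e-10):
    """Compute the unique fixed point of the discounted Bellman operator
    T(V)(x) = min_a [ cost(x, a) + beta * sum_y P(y | x, a) V(y) ],
    which is a beta-contraction in the sup norm for 0 < beta < 1.

    cost: array of shape (n_states, n_actions);
    P:    array of shape (n_actions, n_states, n_states), rows summing to 1.
    """
    n_states, n_actions = cost.shape
    V = np.zeros(n_states)
    while True:
        # Q[x, a] = cost(x, a) + beta * E[V(next state) | x, a]
        Q = cost + beta * np.einsum('axy,y->xa', P, V)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
```

Each sweep shrinks the sup-norm distance to the fixed point by the factor beta, which is the same contraction mechanism invoked, in a more delicate setting, by the paper's first characterization.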