
    Approximate Kalman-Bucy filter for continuous-time semi-Markov jump linear systems

    The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is based on the selection of typical trajectories of the driving semi-Markov chain of the process by using an optimal quantization technique. The main advantage of this approach is that it makes pre-computations possible. We derive a Lipschitz property for the solution of the Riccati equation and a general result on the convergence of perturbed solutions of semi-Markov switching Riccati equations when the perturbation comes from the driving semi-Markov chain. Based on these results, we prove the convergence of our approximation scheme in a general countably infinite state-space framework and derive an error bound in terms of the quantization error and the time discretization step. We employ the proposed filter in a magnetic levitation example with Markovian failures and compare its performance with both the Kalman-Bucy filter and the Markovian linear minimum mean-squares estimator.
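
    For context, a generic continuous-time jump linear filtering model and the mode-dependent Kalman-Bucy and Riccati equations it leads to are sketched below. The notation is illustrative, assumes the regime $\theta_t$ is observed, and is not taken from the paper itself.

```latex
% Generic jump linear filtering model with observed regime \theta_t
% (illustrative notation, not the paper's):
\begin{align*}
  dX_t &= A(\theta_t)X_t\,dt + B(\theta_t)\,dW_t, \\
  dY_t &= C(\theta_t)X_t\,dt + D(\theta_t)\,dV_t, \\
  d\hat X_t &= A(\theta_t)\hat X_t\,dt
      + P_t C(\theta_t)^{\top}\bigl(D(\theta_t)D(\theta_t)^{\top}\bigr)^{-1}
        \bigl(dY_t - C(\theta_t)\hat X_t\,dt\bigr), \\
  \dot P_t &= A(\theta_t)P_t + P_t A(\theta_t)^{\top} + B(\theta_t)B(\theta_t)^{\top}
      - P_t C(\theta_t)^{\top}\bigl(D(\theta_t)D(\theta_t)^{\top}\bigr)^{-1} C(\theta_t)P_t.
\end{align*}
```
    Per the abstract, the approximation replaces the exact regime trajectory driving the Riccati equation by quantized typical trajectories, which is what makes pre-computations possible.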

    Predictive maintenance for the heated hold-up tank

    We present a numerical method to compute an optimal maintenance date for the test case of the heated hold-up tank. The system consists of a tank containing a fluid whose level is controlled by three components: two inlet pumps and one outlet valve. A thermal power source heats up the fluid. The failure rates of the components depend on the temperature, the positions of the three components determine the liquid level in the tank, and the liquid level determines the temperature. Therefore, this system can be modeled by a hybrid process where the discrete (components) and continuous (level, temperature) parts interact in a closed loop. We model the system by a piecewise deterministic Markov process, and we propose and implement a numerical method to compute the optimal maintenance date at which to repair the components before the total failure of the system.
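
    As a rough illustration of the closed-loop hybrid structure described above, the sketch below simulates a toy piecewise deterministic Markov process in which a temperature-dependent hazard triggers a component failure that changes the deterministic flow. All dynamics, rates and constants are placeholders, not the paper's tank model.

```python
# Minimal sketch of a piecewise deterministic Markov process (PDMP) in the
# spirit of the hold-up tank model.  All dynamics, rates and numbers below
# are illustrative placeholders, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

def flow(level, temp, mode, dt):
    """Deterministic motion between jumps: mode encodes which components work."""
    inflow = 1.0 if mode["pump_on"] else 0.0           # hypothetical flow rates
    outflow = 0.8 if mode["valve_open"] else 0.0
    level += (inflow - outflow) * dt
    temp += (5.0 / max(level, 0.1) - 0.1 * temp) * dt  # heating diluted by volume
    return level, temp

def failure_rate(temp):
    """Temperature-dependent hazard of a component failure (placeholder)."""
    return 0.01 * np.exp(0.05 * temp)

def simulate(horizon=100.0, dt=0.01):
    level, temp = 5.0, 20.0
    mode = {"pump_on": True, "valve_open": True}
    t, history = 0.0, []
    while t < horizon:
        level, temp = flow(level, temp, mode, dt)
        # A jump (component failure) occurs with probability ~ rate * dt.
        if rng.random() < failure_rate(temp) * dt and mode["pump_on"]:
            mode["pump_on"] = False   # discrete state changes, so the flow changes too
        history.append((t, level, temp, mode["pump_on"]))
        t += dt
    return history

if __name__ == "__main__":
    traj = simulate()
    print("final state:", traj[-1])
```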

    A Multilevel Approach for Stochastic Nonlinear Optimal Control

    We consider a class of finite time horizon nonlinear stochastic optimal control problems, where the control acts additively on the dynamics and the control cost is quadratic. This framework is flexible and has found applications in many domains. Although the optimal control admits a path integral representation for this class of control problems, efficient computation of the associated path integrals remains a challenging Monte Carlo task. The focus of this article is to propose a new Monte Carlo approach that significantly improves upon existing methodology. Our proposed methodology first tackles the issue of exponential growth in variance with the time horizon by casting optimal control estimation as a smoothing problem for a state space model associated with the control problem, and applying smoothing algorithms based on particle Markov chain Monte Carlo. To further reduce computational cost, we then develop a multilevel Monte Carlo method which allows us to obtain an estimator of the optimal control with $\mathcal{O}(\epsilon^2)$ mean squared error at a computational cost of $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$. In contrast, a computational cost of $\mathcal{O}(\epsilon^{-3})$ is required for existing methodology to achieve the same mean squared error. Our approach is illustrated on two numerical examples, which validate our theory.
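
    The multilevel idea rests on the telescoping sum $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L}\mathbb{E}[P_l - P_{l-1}]$, estimated with coupled coarse/fine samples per level. A minimal sketch is given below for a toy Euler-discretized SDE; it illustrates the estimator structure only, not the paper's particle-smoothing construction.

```python
# Minimal multilevel Monte Carlo (MLMC) sketch based on the telescoping sum
# E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}], with coupled coarse/fine
# Euler paths per level.  The SDE and payoff are placeholders, not the paper's
# particle-smoothing estimator of the optimal control.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, x0, T = 0.05, 0.2, 1.0, 1.0   # toy geometric Brownian motion

def euler_payoff(l):
    """One sample of (P_l, P_{l-1}) using the SAME Brownian increments on both grids."""
    n_f = 2 ** (l + 1)
    dt_f = T / n_f
    dW = rng.normal(0.0, np.sqrt(dt_f), size=n_f)
    x_f = x0                                  # fine Euler path
    for k in range(n_f):
        x_f += mu * x_f * dt_f + sigma * x_f * dW[k]
    if l == 0:
        return x_f, 0.0
    dW_c = dW.reshape(-1, 2).sum(axis=1)      # coarse increments: pairwise sums
    dt_c = 2 * dt_f
    x_c = x0                                  # coarse Euler path, half as many steps
    for k in range(n_f // 2):
        x_c += mu * x_c * dt_c + sigma * x_c * dW_c[k]
    return x_f, x_c

def mlmc(L, samples_per_level):
    """Sum the per-level corrections mean(P_l - P_{l-1})."""
    est = 0.0
    for l in range(L + 1):
        diffs = [np.subtract(*euler_payoff(l)) for _ in range(samples_per_level[l])]
        est += np.mean(diffs)
    return est

if __name__ == "__main__":
    # exact answer for comparison: E[X_T] = x0 * exp(mu * T)
    print("MLMC:", mlmc(4, [8000, 4000, 2000, 1000, 500]), "exact:", x0 * np.exp(mu * T))
```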

    Approximation of the invariant measure with an Euler scheme for Stochastic PDE's driven by Space-Time White Noise

    In this article, we consider a stochastic PDE of parabolic type, driven by space-time white noise, and its numerical discretization in time with a semi-implicit Euler scheme. When the nonlinearity is bounded, a dissipativity assumption is satisfied, which ensures that the SPDE admits a unique invariant probability measure, which is ergodic and strongly mixing, with exponential convergence to equilibrium. Considering test functions of class $\mathcal{C}^2$, bounded and with bounded derivatives, we prove that we can approximate this invariant measure using the numerical scheme, with order 1/2 with respect to the time step.
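
    A schematic semi-implicit (linear-implicit) Euler step for a parabolic SPDE written abstractly as $dX_t = (AX_t + F(X_t))\,dt + dW_t$ is recalled below; the notation is illustrative and not taken from the paper.

```latex
% Linear part A treated implicitly, bounded nonlinearity F treated explicitly:
X_{k+1} = X_k + \Delta t\,A X_{k+1} + \Delta t\,F(X_k) + \Delta W_k
\quad\Longleftrightarrow\quad
X_{k+1} = (I - \Delta t\,A)^{-1}\bigl(X_k + \Delta t\,F(X_k) + \Delta W_k\bigr),
\qquad \Delta W_k = W_{(k+1)\Delta t} - W_{k\Delta t}.
```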

    Multigrid methods for two-player zero-sum stochastic games

    We present a fast numerical algorithm for large scale zero-sum stochastic games with perfect information, which combines policy iteration and algebraic multigrid methods. This algorithm can be applied either to a true finite state space zero-sum two-player game or to the discretization of an Isaacs equation. We present numerical tests on discretizations of Isaacs equations or variational inequalities. We also present a full multi-level policy iteration, similar to FMG, which substantially improves the computation time for solving some variational inequalities.
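
    To fix ideas, a Hoffman-Karp-style policy iteration for a small discounted zero-sum game is sketched below: freeze the minimizer's policy, solve the resulting MDP for the maximizer, then improve the minimizer against the computed value. The random game is a placeholder, and the algebraic multigrid solver that is the paper's main ingredient is not reproduced; a plain value-iteration solve stands in for it.

```python
# Minimal Hoffman-Karp-style policy iteration sketch for a discounted
# zero-sum stochastic game.  The small random game is a placeholder; the
# paper additionally uses algebraic multigrid for the large linear systems
# arising from discretized Isaacs equations.
import numpy as np

rng = np.random.default_rng(2)
nS, nA, nB, gamma = 6, 3, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA, nB))   # P[s, a, b, s'] transition kernel
R = rng.normal(size=(nS, nA, nB))                   # reward MIN pays to MAX

def solve_max_mdp(b_pol, iters=2000):
    """Value iteration for MAX when MIN's stationary policy b_pol is frozen."""
    V = np.zeros(nS)
    for _ in range(iters):
        Q = R[np.arange(nS), :, b_pol] + gamma * P[np.arange(nS), :, b_pol] @ V  # (nS, nA)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-12:
            return V_new
        V = V_new
    return V

b_pol = np.zeros(nS, dtype=int)                     # initial MIN policy
for outer in range(50):
    V = solve_max_mdp(b_pol)
    # MIN improvement: best response against the current value function
    Q_full = R + gamma * P @ V                      # (nS, nA, nB)
    b_new = Q_full.max(axis=1).argmin(axis=1)       # min over b of max over a
    if np.array_equal(b_new, b_pol):
        break
    b_pol = b_new

print("approximate game value per state:", np.round(V, 3))
```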

    Numerical method for impulse control of Piecewise Deterministic Markov Processes

    This paper presents a numerical method to calculate the value function for a general discounted impulse control problem for piecewise deterministic Markov processes. Our approach is based on a quantization technique for the underlying Markov chain defined by the post-jump location and inter-arrival time. Convergence results are obtained and, more importantly, we are able to give a convergence rate for the algorithm. The paper is illustrated by a numerical example.
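
    The optimal quantization ingredient mentioned above can be sketched with the classical CLVQ (competitive learning vector quantization) stochastic gradient scheme, shown below for a stand-in distribution of (post-jump location, inter-arrival time) pairs; the sampler and sizes are purely illustrative.

```python
# Minimal sketch of optimal quantization via CLVQ (competitive learning
# vector quantization).  The sampler is a stand-in for the post-jump
# location / inter-arrival time pairs of a PDMP, not the paper's model.
import numpy as np

rng = np.random.default_rng(3)

def sample_pair():
    """Placeholder for one draw of (post-jump location, inter-arrival time)."""
    location = rng.normal(loc=0.0, scale=1.0)
    sojourn = rng.exponential(scale=1.0)
    return np.array([location, sojourn])

def clvq(n_points=50, n_iter=20000):
    """Fit a grid that approximately minimizes the L2 quantization error."""
    grid = np.array([sample_pair() for _ in range(n_points)])   # initialize from samples
    for k in range(1, n_iter + 1):
        x = sample_pair()
        i = np.argmin(np.sum((grid - x) ** 2, axis=1))          # nearest centre (Voronoi cell)
        grid[i] += (1.0 / k) * (x - grid[i])                    # stochastic gradient step
    return grid

if __name__ == "__main__":
    grid = clvq()
    # The dynamic programming for the impulse-control value function is then
    # carried out on such a finite grid instead of the full state space.
    print("first five grid points:\n", np.round(grid[:5], 3))
```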

    Optimal Stabilization using Lyapunov Measures

    Numerical solutions for the optimal feedback stabilization of discrete-time dynamical systems are the focus of this paper. The set-theoretic notion of almost-everywhere stability introduced by the Lyapunov measure, weaker than conventional Lyapunov function-based stabilization methods, is used for optimal stabilization. The linear Perron-Frobenius transfer operator is used to pose the optimal stabilization problem as an infinite-dimensional linear program. Set-oriented numerical methods are used to obtain a finite-dimensional approximation of the linear program. We provide conditions for the existence of stabilizing feedback controls and show that the optimal stabilizing feedback control can be obtained as the solution of a finite-dimensional linear program. The approach is demonstrated on the stabilization of a period-two orbit in a controlled standard map.
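
    The set-oriented discretization step can be illustrated with a classical Ulam-type approximation of the Perron-Frobenius operator: partition the state space into boxes and estimate a row-stochastic transition matrix by sampling each box. The map below is a placeholder, not the controlled standard map from the paper.

```python
# Minimal sketch of a set-oriented (Ulam-type) approximation of the
# Perron-Frobenius transfer operator for a discrete-time map on [0, 1]^2.
import numpy as np

rng = np.random.default_rng(4)

def T(x):
    """Placeholder dynamical system on the unit square."""
    return np.array([(x[0] + 0.3 * np.sin(2 * np.pi * x[1])) % 1.0,
                     (x[0] + x[1]) % 1.0])

def ulam_matrix(n_boxes_per_dim=16, samples_per_box=50):
    """Row-stochastic matrix P with P[i, j] ~ fraction of box i mapped into box j."""
    n = n_boxes_per_dim
    P = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            src = i * n + j
            for _ in range(samples_per_box):
                x = (np.array([i, j]) + rng.random(2)) / n   # uniform point in box (i, j)
                y = T(x)
                yi, yj = np.minimum((y * n).astype(int), n - 1)
                P[src, yi * n + yj] += 1.0
    return P / samples_per_box

if __name__ == "__main__":
    P = ulam_matrix()
    # The stabilization LP of the paper constrains measures propagated by such a
    # finite-dimensional transfer-operator approximation.
    print("row sums (should all be 1):", np.unique(np.round(P.sum(axis=1), 6)))
```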