6 research outputs found

    On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach

    Full text link
    In this article, we consider receding horizon control of discrete-time state-dependent jump linear systems, a particular kind of stochastic switching system, subject to possibly unbounded random disturbances and probabilistic state constraints. Owing to the nature of the dynamical system and the constraints, we consider a one-step receding horizon. Using the inverse cumulative distribution function, we convert the probabilistic state constraints into deterministic constraints and obtain a tractable deterministic receding horizon control problem. The receding horizon control law consists of a linear state feedback and an admissible offset term. We ensure mean square boundedness of the state variable by solving linear matrix inequalities off-line, and we solve the receding horizon control problem with the control offset terms on-line. We illustrate the overall approach with an application to a macroeconomic system.
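
    The key reformulation step described in this abstract (turning a probabilistic state constraint into a deterministic one via the inverse cumulative distribution function) can be sketched as follows for a scalar linear constraint under an assumed Gaussian disturbance. This is only an illustrative sketch: the system matrices, constraint data, and risk level below are placeholders, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Illustrative one-step chance constraint P(a' x_{k+1} <= b) >= 1 - eps
# for x_{k+1} = A x_k + B u_k + w_k with Gaussian disturbance w_k ~ N(0, W).
# All numerical values below are hypothetical placeholders.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
W = 0.01 * np.eye(2)          # disturbance covariance (assumed)
a = np.array([1.0, 0.0])      # constraint direction
b = 1.0                       # constraint bound
eps = 0.05                    # allowed violation probability

def deterministic_bound(x_k, u_k):
    """Deterministic tightened constraint equivalent to the chance constraint:
    a'(A x_k + B u_k) + Phi^{-1}(1 - eps) * sqrt(a' W a) <= b."""
    mean_next = A @ x_k + B @ u_k
    margin = norm.ppf(1.0 - eps) * np.sqrt(a @ W @ a)  # inverse-CDF tightening
    return a @ mean_next + margin <= b

x0 = np.array([0.5, 0.0])
u0 = np.array([0.2])
print(deterministic_bound(x0, u0))
```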

    Bias analysis in mode-based Kalman filters for stochastic hybrid systems

    Get PDF
    Doctor of Philosophy, Department of Electrical and Computer Engineering, Balasubramaniam Natarajan. A stochastic hybrid system (SHS) is a class of dynamical systems in which discrete-mode and continuous dynamics interact under uncertainty. State estimation for SHS has attracted research interest for decades, with Kalman filter based solutions dominating the area. The mode-based Kalman filter is an extension of the traditional Kalman filter to SHS. Because the Kalman filter is unbiased for non-hybrid systems, prior research efforts have focused primarily on the behavior of the error covariance. In SHS state estimation, however, mode mismatch errors can introduce a bias in the mode-based Kalman filter and degrade the quality of the continuous state estimate. The relationship between mode mismatch errors and estimation stability is an open problem that this dissertation addresses. Specifically, the probabilistic model of mode mismatch errors can be independent and identically distributed (i.i.d.), correlated across different modes, or correlated across time. The proposed approach models the bias evolution as a transformed system and maps the statistical convergence of the bias dynamics to the stability of that transformed system. For each model of the mode mismatch error, the system matrix of the transformed system differs, which poses challenges for the stability analysis. For the first time, the dissertation derives convergence conditions that provide tolerance regions for the mode mismatch error in the three mismatch situations. The convergence conditions are derived from the generalized spectral radius theorem, Lyapunov theory, Schur stability of a matrix polytope, and an interval matrix method. This research is fundamental in nature and its applications are widespread. For example, spatially and temporally correlated mode mismatch errors can capture cyber-attacks and communication link impairments in a cyber-physical system, so the theory and techniques developed in this dissertation can be used to analyze topology errors in any networked system, such as smart grids, smart homes, transportation, and flight management systems. The main results provide new insight into the fidelity of discrete state knowledge needed to maintain the performance of a mode-based Kalman filter and guide the design of estimation strategies for SHS.
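
    A minimal sketch of the effect this dissertation studies: a mode-based Kalman filter propagates its prediction with the dynamics of the mode it believes is active, so a mode mismatch shifts the estimate's mean. The two-mode scalar system, drift term, and noise levels below are assumptions for illustration, not the dissertation's models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed two-mode scalar SHS with a constant drift term (values are hypothetical):
#   x_{k+1} = a_m * x_k + b + w_k,   y_k = x_k + v_k
a_mode = {0: 0.95, 1: 0.80}   # mode-dependent dynamics
b = 0.10                      # drift, known to the filter
q, r = 0.01, 0.04             # process / measurement noise variances

def mode_based_kf_step(x_hat, p, y, believed_mode):
    """One predict/update cycle of a mode-based Kalman filter.
    If believed_mode differs from the true mode, the prediction uses the
    wrong dynamics and the estimation error acquires a persistent bias."""
    a = a_mode[believed_mode]
    x_pred, p_pred = a * x_hat + b, a * a * p + q    # predict with believed mode
    k = p_pred / (p_pred + r)                        # Kalman gain
    return x_pred + k * (y - x_pred), (1 - k) * p_pred

# Simulate: the true mode is 1 throughout, but the filter believes mode 0.
x, x_hat, p = 1.0, 1.0, 1.0
errors = []
for _ in range(500):
    x = a_mode[1] * x + b + rng.normal(0, np.sqrt(q))   # true dynamics (mode 1)
    y = x + rng.normal(0, np.sqrt(r))
    x_hat, p = mode_based_kf_step(x_hat, p, y, believed_mode=0)
    errors.append(x_hat - x)

print("mean estimation error after burn-in (bias indicator):", np.mean(errors[100:]))
```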

    Receding Horizon Control of Markov Jump Linear Systems for the Tracking Problem with Dynamic Targets [Controle de Horizonte Retrocedente de Sistemas Lineares com Saltos Markovianos para o Problema de Rastreamento com Alvos Dinâmicos]

    No full text
    We study the tracking problem for receding horizon control of discrete-time Markov jump linear systems subject to noisy inputs, switching targets, and jumps in the exogenous input variables. The performance index is quadratic, and the information available to the controller does not include observations of the Markov chain state. A fixed sequence of linear state-feedback gains is adopted to solve the control synthesis problem. Necessary conditions of optimality are provided, and we propose a recursive method, based on a variational procedure, that attains these conditions. An application to an economic model is presented.
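
    To make the control structure described in this abstract concrete (a sequence of state-feedback gains that does not depend on the unobserved Markov mode, applied in receding-horizon fashion), here is a hedged simulation sketch. The two-mode system, transition matrix, and gain values are illustrative placeholders; the gains are not computed by the paper's variational procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed two-mode Markov jump linear system (all values are placeholders):
#   x_{k+1} = A[theta_k] x_k + B[theta_k] u_k + w_k,  theta_k unobserved.
A = [np.array([[1.05, 0.1], [0.0, 0.95]]), np.array([[0.9, 0.2], [0.0, 1.1]])]
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.8]])]
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # mode transition probabilities

# A fixed, mode-independent sequence of feedback gains reused at every step in
# receding-horizon fashion: only the first gain of the sequence is applied
# before the horizon slides forward.
K_seq = [np.array([[-0.4, -0.6]]), np.array([[-0.3, -0.5]]), np.array([[-0.2, -0.4]])]

x = np.array([1.0, -0.5])
theta = 0
for k in range(50):
    u = K_seq[0] @ x                      # apply first gain, then slide horizon
    w = rng.normal(0, 0.02, size=2)
    x = A[theta] @ x + B[theta] @ u + w
    theta = rng.choice(2, p=P[theta])     # Markov jump, not seen by the controller

print("final state norm:", np.linalg.norm(x))
```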

    Optimality Condition For The Receding Horizon Control Of Markov Jump Linear Systems With Non-observed Chain And Linear Feedback Controls

    No full text
    We demonstrate here that a necessary condition of optimality studied in a previous paper is in fact a necessary and sufficient condition of optimality for the receding horizon control problem of discrete-time Markov jump linear systems subject to noisy inputs. The performance index is quadratic, and the information available to the controller does not include observations of the Markov chain state. Sequences of linear feedback gains that are independent of the Markov state are adopted, in accordance with the information available to the controller. We make use of an equivalent deterministic form of the stochastic problem, and the complete solution, given in feedback form, is obtained by dynamic programming arguments and some quadratic convexity relations. © 2005 IEEE.
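
    The "equivalent deterministic form" mentioned in this abstract can be illustrated with the standard second-moment recursion for a Markov jump linear system under a mode-independent gain. This is a generic sketch of that kind of reformulation, not the paper's exact derivation; the additive noise covariance Σ, the mode probabilities π_j(k), and the weights Q, R are assumed notation.

```latex
% Sketch: with mode-indexed second moments X_j(k) = E[x_k x_k^\top 1_{\{\theta_k = j\}}],
% a mode-independent gain u_k = K_k x_k, and transition probabilities p_{ij},
% the stochastic quadratic cost can be evaluated through deterministic matrix recursions:
\begin{aligned}
X_j(k+1) &= \sum_{i} p_{ij}\,(A_i + B_i K_k)\, X_i(k)\,(A_i + B_i K_k)^{\top} + \pi_j(k+1)\,\Sigma,\\
J &= \sum_{k=0}^{N-1} \sum_{j} \operatorname{tr}\!\big[(Q + K_k^{\top} R K_k)\, X_j(k)\big],
\end{aligned}
% so the expectation over the Markov chain and the noise reduces to iterating these
% equations, which is what makes a dynamic-programming treatment in the gains tractable.
```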