Stochastic model predictive control for constrained networked control systems with random time delay
In this paper, the continuous-time stochastic constrained optimal control problem is formulated for the class of networked control systems, assuming that time delays follow a discrete-time, finite Markov chain. Polytopic overapproximations of the system's trajectories are employed to produce a polyhedral inner approximation of the non-convex constraint set that results from imposing the constraints in continuous time. The problem is cast in a Markov jump linear systems (MJLS) framework, and a stochastic MPC controller is computed explicitly, offline, by coupling dynamic programming with parametric piecewise quadratic (PWQ) optimization. The resulting control law guarantees stochastic stability of the closed-loop system in the mean-square sense and respects the state and input constraints in continuous time.
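The MJLS setting in this abstract can be illustrated with a minimal simulation sketch: the active mode (here standing in for a quantized network delay) jumps according to a finite Markov chain, and a fixed feedback is applied. All matrices and the gain `K` below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's controller): a Markov jump linear
# system x_{k+1} = A[mode] x_k + B[mode] u_k, where the mode follows a
# finite Markov chain with transition matrix P.
rng = np.random.default_rng(0)

A = [np.array([[1.0, 0.1], [0.0, 1.0]]),    # short-delay mode
     np.array([[1.0, 0.2], [0.0, 1.0]])]    # long-delay mode
B = [np.array([[0.0], [0.1]]),
     np.array([[0.0], [0.05]])]
P = np.array([[0.9, 0.1],                   # row i: transition probabilities
              [0.3, 0.7]])                  # out of delay mode i

K = np.array([[-1.0, -1.5]])                # a fixed (assumed) stabilizing gain

x, mode = np.array([1.0, 0.0]), 0
for _ in range(50):
    u = K @ x                               # state feedback
    x = A[mode] @ x + B[mode] @ u           # mode-dependent dynamics
    mode = rng.choice(2, p=P[mode])         # jump according to the chain
print(np.round(x, 3))                       # state after 50 steps
```

With both closed-loop mode matrices Schur stable, the sampled trajectory stays bounded regardless of the delay sequence drawn.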
On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach
In this article, we consider receding horizon control of discrete-time state-dependent jump linear systems, a particular class of stochastic switching systems, subject to possibly unbounded random disturbances and probabilistic state constraints. Due to the nature of the dynamical system and the constraints, we consider a one-step receding horizon. Using the inverse cumulative distribution function, we convert the probabilistic state constraints into deterministic constraints and obtain a tractable deterministic receding horizon control problem. The receding horizon control law consists of a linear state feedback and an admissible offset term. We ensure mean-square boundedness of the state variable by solving linear matrix inequalities offline, and we solve the receding horizon control problem with the control offset terms online. We illustrate the overall approach on a macroeconomic system.
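The inverse-CDF conversion mentioned in the abstract can be sketched for the Gaussian case (the paper's exact formulation may differ; the function name and numbers below are assumptions). A chance constraint P(a·x + w ≤ b) ≥ 1 − ε with w ~ N(0, σ²) becomes the deterministic tightened constraint a·x ≤ b − σ·Φ⁻¹(1 − ε):

```python
from statistics import NormalDist

# Sketch: tighten a scalar Gaussian chance constraint into a deterministic
# one.  P(a.x + w <= b) >= 1 - eps, w ~ N(0, sigma^2), is equivalent to
# a.x <= b - sigma * Phi^{-1}(1 - eps), where Phi is the standard normal CDF.
def tightened_bound(b, sigma, eps):
    quantile = NormalDist().inv_cdf(1 - eps)   # Phi^{-1}(1 - eps)
    return b - sigma * quantile

# With eps = 0.05 the bound shrinks by about 1.645 standard deviations.
print(round(tightened_bound(b=10.0, sigma=2.0, eps=0.05), 3))  # → 6.71
```

The tightened bound is deterministic, so it can be passed directly to a standard (non-stochastic) receding horizon solver.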
On the Solution of Markov-switching Rational Expectations Models
This paper describes a method for solving a class of forward-looking Markov-switching rational expectations models under noisy measurement by specifying the unobservable expectations component as a general measurable function of the observable states of the system, to be determined optimally via stochastic control and filtering theory. Existence of a solution is proved by setting this function to the regime-dependent feedback control that minimizes the mean-square deviation of the equilibrium path from the corresponding perfect-foresight autoregressive Markov jump state motion. Since the exact expression of the conditional (rational) expectations term is derived in both finite- and infinite-horizon model formulations, no (asymptotic) stationarity assumptions are needed to solve the system forward; only knowledge of initial values is required. A simple sufficient condition for the mean-square stability of the obtained rational expectations equilibrium is also provided.
Keywords: Rational expectations, Markov-switching dynamic systems, Dynamic programming, Time-varying Kalman filter
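Mean-square stability, which both this abstract and the first one invoke, has a standard spectral test for discrete-time Markov jump linear systems (cf. Costa, Fragoso & Marques): the system x_{k+1} = A_{θ_k} x_k is mean-square stable iff the spectral radius of (Pᵀ ⊗ I) · blockdiag(A_i ⊗ A_i) is strictly less than one. A minimal sketch, with illustrative matrices not drawn from either paper:

```python
import numpy as np

# Second-moment stability test for x_{k+1} = A[theta_k] x_k, where theta_k
# is a Markov chain with transition matrix P (p_ij = P(theta_{k+1}=j | theta_k=i)).
# MSS holds iff rho( (P^T kron I_{n^2}) * blockdiag(A_i kron A_i) ) < 1.
def is_mean_square_stable(A_list, P):
    n = A_list[0].shape[0]
    N, m = len(A_list), n * n
    D = np.zeros((N * m, N * m))
    for i, Ai in enumerate(A_list):
        D[i*m:(i+1)*m, i*m:(i+1)*m] = np.kron(Ai, Ai)  # second-moment blocks
    T = np.kron(P.T, np.eye(m)) @ D                    # moment-propagation matrix
    return max(abs(np.linalg.eigvals(T))) < 1.0

A = [np.array([[0.5, 0.0], [0.0, 0.5]]),   # contracting mode
     np.array([[1.2, 0.0], [0.0, 1.2]])]   # expanding mode
P = np.array([[0.9, 0.1],                  # chain spends most time in mode 0,
              [0.9, 0.1]])                 # so the expansion is averaged out
print(is_mean_square_stable(A, P))         # prints True
```

Flipping the chain so it dwells in the expanding mode (e.g. rows `[0.1, 0.9]`) makes the same test return `False`, which is what makes the condition useful: stability depends jointly on the mode dynamics and the switching statistics.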