296 research outputs found
Optimal Preview Control of Markovian Jump Linear Systems
In this paper, we investigate the design of controllers, for discrete-time Markovian jump linear
systems, that achieve optimal reference tracking in the presence of preview (reference look-ahead). For
a quadratic cost and given a reference sequence, we obtain the optimal solution for the full information
case. The optimal control policy consists of the additive contribution of two terms: a feedforward
term and a feedback term. We show that the feedback term is identical to the standard optimal linear
quadratic regulator for Markovian jump linear systems. We provide explicit formulas for computing the
feedforward term, including an analysis of convergence.
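The feedback term mentioned above solves a set of coupled Riccati equations, one per Markov mode. As a rough illustration only (the value-iteration scheme and all names below are my own, not the paper's derivation), mode-dependent LQR gains for a discrete-time MJLS with fully observed mode can be computed as:

```python
import numpy as np

def mjls_lqr_gains(A, B, Q, R, P, iters=1000, tol=1e-10):
    """Value-iterate the coupled Riccati equations of the discrete-time
    MJLS LQR problem; returns cost matrices X[i] and mode-dependent
    feedback gains K[i] (u_k = K[i] @ x_k when the current mode is i).
    A, B, Q, R are lists indexed by mode; P[i][j] = Pr(next mode j | mode i)."""
    N, n = len(A), A[0].shape[0]
    X = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        # E[i]: conditional expectation of the next-step cost matrix, given mode i
        E = [sum(P[i][j] * X[j] for j in range(N)) for i in range(N)]
        X_new = [Q[i] + A[i].T @ E[i] @ A[i]
                 - A[i].T @ E[i] @ B[i]
                 @ np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                                   B[i].T @ E[i] @ A[i])
                 for i in range(N)]
        if max(np.max(np.abs(a - b)) for a, b in zip(X_new, X)) < tol:
            X = X_new
            break
        X = X_new
    E = [sum(P[i][j] * X[j] for j in range(N)) for i in range(N)]
    K = [-np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i],
                          B[i].T @ E[i] @ A[i]) for i in range(N)]
    return X, K
```

The feedforward (preview) term would then be driven by the known future reference samples on top of this feedback law; its formulas are the paper's contribution and are not reproduced here.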
Design of optimal servomechanisms for Markovian jump linear systems (First Draft)
In this paper, we investigate the design of controllers, for discrete-time Markovian jump linear systems, that achieve optimal reference tracking in the presence of preview. In particular, given a reference sequence, we obtain the optimal control law for the fully observed case, while the output feedback case is also briefly discussed. We provide the optimal control law for both the finite and infinite optimization-horizon cases. The optimal control policy consists of the additive contribution of two terms: a feedforward term and a feedback term, the latter identical to the standard LQR solution. We provide explicit formulas for computing the feedforward term, while establishing a comparison with the internal model principle.
Necessary and sufficient conditions for analysis and synthesis of Markov jump linear systems with incomplete transition descriptions
This technical note explores a new approach to the analysis and synthesis of Markov jump linear systems with incomplete transition descriptions. In the study, not all elements of the transition rate matrices (TRMs) in the continuous-time domain, or of the transition probability matrices (TPMs) in the discrete-time domain, are assumed to be known. By fully exploiting the properties of the TRMs and TPMs, and the convexity of the uncertain domains, necessary and sufficient criteria for stability and stabilization are obtained in both continuous and discrete time. Numerical examples are used to illustrate the results. © 2006 IEEE.
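For context, the mean-square stability that such criteria address can be checked numerically when the transition matrix is completely known, via the spectral radius of a lifted linear operator. A minimal sketch of that standard test (not the note's partial-information criteria; the function name is mine):

```python
import numpy as np

def is_mean_square_stable(A, P):
    """Mean-square stability test for x_{k+1} = A[theta_k] x_k, where
    theta_k is a Markov chain with transition matrix P (ndarray,
    P[i, j] = Pr(next mode j | mode i)).  The system is MSS iff the
    spectral radius of (P^T kron I) @ blkdiag(A_i kron A_i) is below one,
    since that operator propagates the vectorized second moments."""
    N, n = len(A), A[0].shape[0]
    D = np.zeros((N * n * n, N * n * n))
    for i in range(N):
        D[i * n * n:(i + 1) * n * n, i * n * n:(i + 1) * n * n] = np.kron(A[i], A[i])
    M = np.kron(P.T, np.eye(n * n)) @ D
    return float(np.max(np.abs(np.linalg.eigvals(M)))) < 1.0
```

Note that an individually unstable mode need not destroy mean-square stability if the chain leaves it quickly enough, which is exactly why the transition description matters.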
Preview Tracking Control of Linear Periodic Switched Systems with Dwell Time
This paper studies the preview tracking control problem for linear discrete-time periodic switched systems. First, an augmented error system is constructed for each subsystem and stabilized by the method of optimal preview control, which transforms the tracking problem for the switched system into a switched stability problem for the closed-loop augmented error systems. Second, a switched Lyapunov function method is applied to find the minimal dwell time under which the closed-loop augmented error systems remain stable. Third, the switched preview control input is recovered from the controllers of the individual augmented error systems, yielding sufficient conditions and a preview controller that guarantee the solvability of the original periodic switched preview tracking problem. Finally, numerical simulations show the effectiveness of the stabilization design method.
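A much cruder sufficient check conveys the dwell-time idea (this is not the paper's switched Lyapunov function method): if every stabilized subsystem matrix is a spectral-norm contraction after tau steps, then cyclic switching with period tau is stable, because the norm of the product over one cycle is bounded by the product of the individual norms. A sketch, with names of my own choosing:

```python
import numpy as np

def min_cyclic_dwell_time(cl_modes, tau_max=200):
    """Smallest tau such that every closed-loop subsystem matrix satisfies
    ||A_i^tau||_2 < 1.  For cyclic switching with period tau, the product
    over one cycle then has spectral norm below one, so the periodically
    switched closed loop is stable.  Stable modes with large transients
    (non-normal A_i) force tau above one."""
    for tau in range(1, tau_max + 1):
        if all(np.linalg.norm(np.linalg.matrix_power(Ai, tau), 2) < 1.0
               for Ai in cl_modes):
            return tau
    return None  # no tau up to tau_max certifies stability this way
```

A Lyapunov-based bound such as the paper's is generally far less conservative than this norm argument; the sketch only illustrates why a minimal dwell time exists.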
Gradient-based optimization techniques for the design of static controllers for Markov jump linear systems with unobservable modes
This paper formulates the static control problem for Markov jump linear systems, assuming that the controller does not have access to the jump variable. We derive the expression of the gradient of the cost, motivated by the evaluation of 10 gradient-based optimization techniques. The numerical efficiency of these techniques is verified on data obtained from practical experiments. The corresponding solution is used to design a scheme that controls the velocity of a real DC motor device subject to abrupt power failures.
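The simplest member of the family of gradient-based schemes evaluated in such a study is plain gradient descent. The sketch below descends a crude deterministic surrogate cost using a finite-difference gradient; the surrogate, the descent scheme, and all names are my own stand-ins, not the paper's analytic gradient expression:

```python
import numpy as np

def surrogate_cost(K, A, B, Q, R, weights, horizon=30):
    """Crude deterministic stand-in for the expected quadratic cost of a
    mode-independent static gain K (the mode is unobserved, so one K serves
    all modes): per-mode closed-loop cost from x0 = ones, averaged with the
    given mode weights (e.g. the chain's stationary distribution)."""
    J = 0.0
    for Ai, Bi, w in zip(A, B, weights):
        x = np.ones(Ai.shape[0])
        for _ in range(horizon):
            u = K @ x
            J += w * float(x @ Q @ x + u @ R @ u)
            x = Ai @ x + Bi @ u
    return J

def fd_gradient_descent(K, args, step=1e-3, eps=1e-6, iters=300):
    """Plain finite-difference gradient descent on the surrogate cost."""
    for _ in range(iters):
        G = np.zeros_like(K)
        J0 = surrogate_cost(K, *args)
        for idx in np.ndindex(*K.shape):
            Kp = K.copy()
            Kp[idx] += eps
            G[idx] = (surrogate_cost(Kp, *args) - J0) / eps
        K = K - step * G
    return K
```

A usable starting gain must already stabilize every mode in this surrogate; the paper's interest lies precisely in richer, better-conditioned gradient schemes than this baseline.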
Robust Preview Control for a Class of Uncertain Discrete-Time Lipschitz Nonlinear Systems
© 2018 Xiao Yu et al. This paper considers the design of a robust preview controller for a class of uncertain discrete-time Lipschitz nonlinear systems. Following preview control theory, an augmented error system is constructed that includes the tracking error and the known future information on the reference signal. To eliminate static (steady-state) error, a discrete integrator is introduced. Using the linear matrix inequality (LMI) approach, a state feedback controller is developed to guarantee that the closed-loop augmented error system is asymptotically stable with H∞ performance. Based on this, the robust preview tracking controller for the original system is obtained. Finally, two numerical examples show the effectiveness of the proposed controller.
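The augmented error system construction recurs across these preview-control papers. For the linear, certain case (a simplification of the uncertain Lipschitz setting above; the layout and names are my own), the augmentation over the difference operator d(.) can be sketched as:

```python
import numpy as np

def augmented_error_system(A, B, C, h):
    """Standard preview-control augmentation for x_{k+1} = A x_k + B u_k,
    y_k = C x_k, with preview length h >= 1.  Augmented state:
        z_k = [e_k; dx_k; dr_{k+1}; ...; dr_{k+h}],
    where e_k = y_k - r_k, dx_k = x_k - x_{k-1}, dr_k = r_k - r_{k-1},
    and dr beyond the preview window is taken to be zero.  Returns (Phi, G)
    with z_{k+1} = Phi z_k + G du_k."""
    n, m = B.shape
    p = C.shape[0]
    N = p + n + h * p
    Phi = np.zeros((N, N))
    G = np.zeros((N, m))
    # e_{k+1} = e_k + C A dx_k + C B du_k - dr_{k+1}
    Phi[:p, :p] = np.eye(p)
    Phi[:p, p:p + n] = C @ A
    Phi[:p, p + n:p + n + p] = -np.eye(p)
    G[:p, :] = C @ B
    # dx_{k+1} = A dx_k + B du_k
    Phi[p:p + n, p:p + n] = A
    G[p:p + n, :] = B
    # the preview block is a shift register: dr_{k+j} <- dr_{k+j+1}
    for j in range(h - 1):
        r0 = p + n + j * p
        Phi[r0:r0 + p, r0 + p:r0 + 2 * p] = np.eye(p)
    return Phi, G
```

Stabilizing (Phi, G), e.g. by the LMI machinery above, then yields a controller with integral action in e_k and feedforward through the previewed dr terms.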
On the control of Markov jump linear systems with no mode observation: Application to a DC Motor device
This paper deals with the control of discrete-time Markov jump linear systems in the case where the controller does not have access to the state of the Markov chain. A necessary optimality condition, which is nonlinear in the optimizing variables, is introduced, and the corresponding solution is obtained through a convergent variational method. We illustrate the practical usefulness of the approach by applying it to the speed control of a real DC motor device subject to abrupt power failures.
Optimal arbitrage under model uncertainty
In an equity market model with "Knightian" uncertainty regarding the relative
risk and covariance structure of its assets, we characterize in several ways
the highest return relative to the market that can be achieved using
nonanticipative investment rules over a given time horizon, and under any
admissible configuration of model parameters that might materialize. One
characterization is in terms of the smallest positive supersolution to a fully
nonlinear parabolic partial differential equation of the
Hamilton--Jacobi--Bellman type. Under appropriate conditions, this smallest
supersolution is the value function of an associated stochastic control
problem, namely, the maximal probability with which an auxiliary
multidimensional diffusion process, controlled in a manner which affects both
its drift and covariance structures, stays in the interior of the positive
orthant through the end of the time-horizon. This value function is also
characterized in terms of a stochastic game, and can be used to generate an
investment rule that realizes such best possible outperformance of the market.
Comment: Published at http://dx.doi.org/10.1214/10-AAP755 in the Annals of
Applied Probability (http://www.imstat.org/aap/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
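Schematically, and in generic notation only (the paper's exact operator, state space, and admissible class Θ of drift/covariance configurations are not reproduced here), an HJB-type equation of the kind referred to above takes the form:

```latex
% Generic fully nonlinear parabolic HJB-type equation: the model
% configuration \theta controls both drift b and covariance a.
\partial_t U(t,x) \;=\; \sup_{\theta \in \Theta}
  \Big\{ \tfrac{1}{2} \sum_{i,j} a_{ij}(\theta, x)\,
         \partial^{2}_{x_i x_j} U(t,x)
       \;+\; \sum_{i} b_{i}(\theta, x)\, \partial_{x_i} U(t,x) \Big\},
```

with the characterization in the abstract selecting the smallest positive supersolution, which under the stated conditions equals the containment probability of the controlled auxiliary diffusion.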