The Controllability of Planar Bilinear Systems
NOTE: At the time of publication, the author Daniel Koditschek was affiliated with Yale University; he is currently a faculty member of the School of Engineering at the University of Pennsylvania.
On stabilization of bilinear uncertain time-delay stochastic systems with Markovian jumping parameters
Copyright [2002] IEEE. Posted with permission of the IEEE. In this paper, we investigate the stochastic stabilization problem for a class of bilinear continuous time-delay uncertain systems with Markovian jumping parameters. Specifically, the stochastic bilinear jump system under study involves unknown state time-delay, parameter uncertainties, and unknown nonlinear deterministic disturbances. The jumping parameters considered here form a continuous-time discrete-state homogeneous Markov process. The whole system may be regarded as a stochastic bilinear hybrid system that includes both time-evolving and event-driven mechanisms. Our attention is focused on the design of a robust state-feedback controller such that, for all admissible uncertainties as well as nonlinear disturbances, the closed-loop system is stochastically exponentially stable in the mean square, independent of the time delay. Sufficient conditions are established to guarantee the existence of desired robust controllers, which are given in terms of the solutions to a set of either linear matrix inequalities (LMIs) or coupled quadratic matrix inequalities. The developed theory is illustrated by numerical simulations.
On feedback stabilization of linear switched systems via switching signal control
Motivated by recent applications in control theory, we study the feedback
stabilizability of switched systems, where one is allowed to choose the
switching signal as a function of the state in order to stabilize the system.
We propose new algorithms and analyze several mathematical features of the
problem which, to our knowledge, were previously unnoticed. We prove
complexity results and (in-)equivalences between various notions of
stabilizability, establish the existence of Lyapunov functions, and provide a
case study for a paradigmatic example introduced by Stanford and Urbano.
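A minimal numerical sketch of one classical state-dependent switching strategy (the "min-switching" rule, not necessarily one of the algorithms proposed in the paper): with two individually unstable modes whose average is Hurwitz, choosing at each instant the mode that minimizes the Lyapunov derivative x'(A_i'P + P A_i)x with P = I stabilizes the system. The matrices are illustrative:

```python
import numpy as np

# Two individually unstable modes whose average (A1 + A2)/2 is Hurwitz.
A = [np.diag([1.0, -2.0]), np.diag([-2.0, 1.0])]

# Min-switching rule with Lyapunov matrix P = I:
# pick the mode minimizing d/dt (x' x) = 2 x' A_i x.
def sigma(x):
    return min(range(2), key=lambda i: x @ A[i] @ x)

dt, x = 0.01, np.array([1.0, 1.0])
x0_norm = np.linalg.norm(x)
for _ in range(2000):              # simulate 20 time units (forward Euler)
    x = x + dt * A[sigma(x)] @ x
print(np.linalg.norm(x) / x0_norm)  # far below 1: the switched system decays
```

Since min_i x'A_i x <= x'((A1 + A2)/2)x = -|x|^2/2 here, the rule guarantees V = x'x decays even though each mode alone diverges along one axis.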
Finite-time behavior of inner systems
In this paper, we investigate how nonminimum-phase characteristics of a dynamical system affect its controllability and tracking properties. For the class of linear time-invariant dynamical systems, these characteristics are determined by the transmission zeros of the inner factor of the system transfer function. The relation between nonminimum-phase zeros and the Hankel singular values of inner systems is studied, and it is shown how the singular value structure of a suitably defined operator provides relevant insight into system invertibility and achievable tracking performance. The results are used to solve various tracking problems on both finite and infinite time horizons. A typical receding horizon control scheme is considered, and new conditions are derived to guarantee stabilizability of a receding horizon controller.
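The Hankel singular values mentioned above can be computed for a concrete state-space model as the square roots of the eigenvalues of the product of the controllability and observability Gramians. A sketch with scipy; the system matrices are invented for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable state-space model (A, B, C); numbers chosen for illustration.
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Controllability Gramian Wc:  A Wc + Wc A' + B B' = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Wo:    A' Wo + Wo A + C' C = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc @ Wo.
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
print(hsv)  # descending; large values mark strongly coupled state directions
```

For an inner (all-pass) factor the Hankel singular values lie in (0, 1], and values close to 1 flag the nonminimum-phase dynamics that limit achievable tracking performance.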
The turnpike property in finite-dimensional nonlinear optimal control
Turnpike properties were established long ago for finite-dimensional optimal
control problems arising in econometrics. They refer to the fact that, under
quite general assumptions, the optimal solutions of a given optimal control
problem settled in large time consist approximately of three pieces: the first
and last are transient short-time arcs, and the middle piece is a long-time
arc staying exponentially close to the optimal steady-state solution of an
associated static optimal control problem. In this paper we provide a general
version of a turnpike theorem, valid for nonlinear dynamics without any
specific structural assumption and for very general terminal conditions. Not
only is the optimal trajectory shown to remain exponentially close to a
steady state, but so is the corresponding adjoint vector of the Pontryagin
maximum principle. The exponential closeness is quantified using appropriate
normal forms of Riccati equations. We then show how the property of the
adjoint vector can be used to successfully initialize a numerical direct
method or a shooting method. In particular, we provide a variant of the usual
shooting method in which the adjoint vector is initialized not at the initial
time but at the middle of the trajectory.
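To make the turnpike picture concrete, here is a toy single-shooting computation. Assumptions: a scalar LQ problem min ∫ (x² + u²) dt over [0, T] with dynamics ẋ = x + u, whose steady-state turnpike is x̄ = p̄ = 0; this illustrates the phenomenon only, not the paper's middle-initialization variant. The maximum principle gives u = -p/2 and the Hamiltonian system ẋ = x - p/2, ṗ = -2x - p; shooting adjusts p(0) so that p(T) = 0, and the resulting trajectory hugs the turnpike in the middle of the horizon:

```python
import numpy as np

T, n = 10.0, 2000
dt = T / n

def f(x, p):
    # Hamiltonian system from the PMP: u = -p/2.
    return x - p / 2.0, -2.0 * x - p

def shoot(p0):
    """RK4-integrate from (x(0), p(0)) = (1, p0); return x-path and p(T)."""
    x, p = 1.0, p0
    xs = [x]
    for _ in range(n):
        k1 = f(x, p)
        k2 = f(x + dt/2*k1[0], p + dt/2*k1[1])
        k3 = f(x + dt/2*k2[0], p + dt/2*k2[1])
        k4 = f(x + dt*k3[0], p + dt*k3[1])
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        p += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xs.append(x)
    return np.array(xs), p

# Bisection on the unknown initial costate so that p(T) = 0.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    _, pT = shoot(mid)
    if pT < 0:
        lo = mid
    else:
        hi = mid
xs, _ = shoot(0.5 * (lo + hi))
print(xs[0], xs[n // 2])  # x(0) = 1, while x(T/2) sits almost on the turnpike 0
```

The extreme sensitivity of p(T) to p(0) (the unstable eigenvalue of the Hamiltonian system amplifies errors by a factor of roughly e^{√2 T}) is exactly the ill-conditioning that motivates initializing the shooting unknowns at the middle of the trajectory, where the turnpike pins both x and p near the steady state.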
Estimation for bilinear stochastic systems
Three techniques for the solution of bilinear estimation problems are presented. First, finite-dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent Lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.
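The three techniques above are specialized, and none is spelled out in the abstract. One elementary point worth recording (an observation, not the paper's method): when the input of a bilinear system is a known deterministic signal, x_{k+1} = (A + u_k N) x_k + w_k is linear time-varying, so the ordinary Kalman filter is already a finite-dimensional optimal estimator. A sketch with invented matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
N = np.array([[0.0, 0.0], [0.1, 0.0]])   # bilinear coupling (made up)
C = np.array([[1.0, 0.0]])
Qw, Rv = 0.01 * np.eye(2), 0.1 * np.eye(1)

x = np.array([1.0, -1.0])                # true state
xh = np.zeros(2)                         # filter estimate
P = np.eye(2)                            # filter covariance
for k in range(50):
    u = np.sin(0.2 * k)                  # known input => LTV dynamics
    Ak = A + u * N
    # propagate the truth and take a noisy measurement
    x = Ak @ x + rng.multivariate_normal(np.zeros(2), Qw)
    y = C @ x + rng.multivariate_normal(np.zeros(1), Rv)
    # Kalman predict/update with the time-varying Ak
    xh, P = Ak @ xh, Ak @ P @ Ak.T + Qw
    S = C @ P @ C.T + Rv
    K = P @ C.T @ np.linalg.inv(S)
    xh = xh + K @ (y - C @ xh)
    P = (np.eye(2) - K @ C) @ P
print(P.trace())  # error covariance settles well below its initial value
```

The hard bilinear estimation problems the paper addresses are those where the multiplying signal is itself stochastic or the state evolves on a Lie group, and there the Gaussian/LTV structure exploited by this sketch is lost.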