Constrained Finite Receding Horizon Linear Quadratic Control
Issues of feasibility, stability and performance are considered for a finite horizon formulation of receding horizon control (RHC) for linear systems under mixed linear state and control constraints. It is shown that for a sufficiently long horizon, a receding horizon policy will remain feasible and result in stability, even when no end constraint is imposed. In addition, offline finite horizon calculations can be used to determine not only a stabilizing horizon length, but also guaranteed performance bounds for the receding horizon policy. These calculations are demonstrated on two examples.
Finite-time behavior of inner systems
In this paper, we investigate how nonminimum phase characteristics of a dynamical system affect its controllability and tracking properties. For the class of linear time-invariant dynamical systems, these characteristics are determined by transmission zeros of the inner factor of the system transfer function. The relation between nonminimum phase zeros and Hankel singular values of inner systems is studied and it is shown how the singular value structure of a suitably defined operator provides relevant insight about system invertibility and achievable tracking performance. The results are used to solve various tracking problems both on finite as well as on infinite time horizons. A typical receding horizon control scheme is considered and new conditions are derived to guarantee stabilizability of a receding horizon controller.
Unconstrained receding-horizon control of nonlinear systems
It is well known that unconstrained infinite-horizon optimal control may be used to construct a stabilizing controller for a nonlinear system. We show that similar stabilization results may be achieved using unconstrained finite horizon optimal control. The key idea is to approximate the tail of the infinite horizon cost-to-go using, as terminal cost, an appropriate control Lyapunov function. Roughly speaking, the terminal control Lyapunov function (CLF) should provide an (incremental) upper bound on the cost. In this fashion, important stability characteristics may be retained without the use of terminal constraints such as those employed by a number of other researchers. The absence of constraints allows a significant speedup in computation. Furthermore, it is shown that in order to guarantee stability, it suffices to satisfy an improvement property, thereby relaxing the requirement that truly optimal trajectories be found. We provide a complete analysis of the stability and region of attraction/operation properties of receding horizon control strategies that utilize finite horizon approximations in the proposed class. It is shown that the guaranteed region of operation contains that of the CLF controller and may be made as large as desired by increasing the optimization horizon (restricted, of course, to the infinite horizon domain). Moreover, it is easily seen that both CLF and infinite-horizon optimal control approaches are limiting cases of our receding horizon strategy. The key results are illustrated using a familiar example, the inverted pendulum, where significant improvements in guaranteed region of operation and cost are noted.
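The terminal-CLF construction above can be sketched for the linear-quadratic case, where the solution of the discrete Riccati equation gives an exact CLF bound on the infinite-horizon tail. Everything below (the double-integrator model, weights, and horizon length) is an illustrative assumption, not the paper's example:

```python
# Minimal sketch (assumed setup, not the paper's code): unconstrained
# finite-horizon receding horizon control with a quadratic terminal cost
# x'Px, where P solves the discrete-time Riccati equation, so x'Px is a
# CLF that upper-bounds the infinite-horizon tail for the linear case.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
Q = np.eye(2)                            # stage cost weights
R = np.array([[0.1]])
P = solve_discrete_are(A, B, Q, R)       # terminal CLF weight

N = 10                                   # optimization horizon

def cost(u_seq, x0):
    """Finite-horizon cost with CLF terminal penalty."""
    x, J = x0, 0.0
    for u in u_seq:
        J += x @ Q @ x + R[0, 0] * u * u
        x = A @ x + B[:, 0] * u
    return J + x @ P @ x                 # tail approximated by x'Px

def rhc_step(x):
    """Solve the horizon problem, apply only the first control."""
    res = minimize(cost, np.zeros(N), args=(x,))
    return res.x[0]

x = np.array([1.0, 0.0])
for _ in range(50):                      # receding horizon closed loop
    x = A @ x + B[:, 0] * rhc_step(x)
```

Because P here equals the infinite-horizon value function of the linear problem, the finite-horizon solution coincides with the LQR law; for nonlinear systems the same structure applies but x'Px only upper-bounds the tail near the origin.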
State feedback policies for robust receding horizon control: uniqueness, continuity, and stability
Distributed Receding Horizon Control with Application to Multi-Vehicle Formation Stabilization
We consider the control of interacting subsystems whose dynamics and constraints are uncoupled, but whose state vectors are coupled non-separably in a single centralized cost function of a finite horizon optimal control problem. For a given centralized cost structure, we generate distributed optimal control problems for each subsystem and establish that the distributed receding horizon implementation is asymptotically stabilizing. The communication requirements between subsystems with coupling in the cost function are that each subsystem obtain the previous optimal control trajectory of those subsystems at each receding horizon update. The key requirements for stability are that each distributed optimal control not deviate too far from the previous optimal control, and that the receding horizon updates happen sufficiently fast. The theory is applied in simulation for stabilization of a formation of vehicles.
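The communication pattern described above can be sketched with two coupled subsystems. The model (two single-integrator "vehicles", the weights, and the spacing target) is an illustrative assumption, not the paper's simulation; the two features the abstract names are both present: each vehicle optimizes against the neighbor's previously communicated trajectory, and a deviation penalty keeps the new plan close to the old one:

```python
# Hypothetical sketch of distributed receding horizon control: dynamics are
# uncoupled, but a formation term in each local cost couples the states.
import numpy as np
from scipy.optimize import minimize

N, dt, d = 8, 0.1, 1.0                   # horizon, time step, desired spacing

def rollout(x0, u_seq):
    """Positions visited by a single integrator under the input sequence."""
    xs, x = [x0], x0
    for u in u_seq:
        x = x + dt * u
        xs.append(x)
    return np.array(xs)

def local_cost(u, x0, neighbor_traj, u_prev, sign):
    traj = rollout(x0, u)
    J = 0.05 * np.sum(u ** 2)                              # control effort
    J += np.sum((sign * (neighbor_traj - traj) - d) ** 2)  # formation coupling
    J += 0.5 * np.sum((u - u_prev) ** 2)                   # stay near old plan
    return J

x = np.array([0.0, 2.0])                 # initial positions
plans = [np.zeros(N), np.zeros(N)]       # previously communicated inputs
for _ in range(30):
    # each subsystem receives the neighbor's *previous* optimal trajectory
    prev = [rollout(x[i], plans[i]) for i in range(2)]
    for i in range(2):
        sign = 1.0 if i == 0 else -1.0   # both encode the gap x1 - x0 = d
        res = minimize(local_cost, plans[i],
                       args=(x[i], prev[1 - i], plans[i], sign))
        plans[i] = res.x
    x = x + dt * np.array([plans[0][0], plans[1][0]])      # apply first inputs
```

The deviation penalty plays the stabilizing role the abstract identifies: without it, both vehicles would each plan to close the full formation error against a frozen neighbor, and the simultaneous updates could oscillate.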
On the convergence of stochastic MPC to terminal modes of operation
The stability of stochastic Model Predictive Control (MPC) subject to additive disturbances is often demonstrated in the literature by constructing Lyapunov-like inequalities that guarantee closed-loop performance bounds and boundedness of the state, but convergence to a terminal control law is typically not shown. In this work we use results on general state space Markov chains to find conditions that guarantee convergence of disturbed nonlinear systems to terminal modes of operation, so that they converge in probability to a priori known terminal linear feedback laws and achieve time-average performance equal to that of the terminal control law. We discuss implications for the convergence of control laws in stochastic MPC formulations; in particular, we prove convergence for two formulations of stochastic MPC.
Adaptive Horizon Model Predictive Control and Al'brekht's Method
A standard way of finding a feedback law that stabilizes a control system to an operating point is to recast the problem as an infinite horizon optimal control problem. If the optimal cost and the optimal feedback can be found on a large domain around the operating point, then a Lyapunov argument can be used to verify the asymptotic stability of the closed loop dynamics. The problem with this approach is that it is usually very difficult to find the optimal cost and the optimal feedback on a large domain for nonlinear problems, with or without constraints. Hence the increasing interest in Model Predictive Control (MPC). In standard MPC a finite horizon optimal control problem is solved in real time, but just at the current state; the first control action is implemented, the system evolves one time step, and the process is repeated. A terminal cost and terminal feedback found by Al'brekht's method, defined in a neighborhood of the operating point, is used to shorten the horizon and thereby make the nonlinear programs easier to solve because they have fewer decision variables. Adaptive Horizon Model Predictive Control (AHMPC) is a scheme for varying the horizon length of MPC as needed. Its goal is to achieve stabilization with horizons as small as possible so that MPC methods can be used on faster and/or more complicated dynamic processes.
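The receding horizon loop and the adaptive-horizon idea can be sketched on a scalar nonlinear plant. This is an assumed illustration, not the paper's scheme in detail: an LQR terminal cost from the linearization stands in for Al'brekht's series expansion, the horizon is lengthened until the predicted terminal state lands in a small neighborhood of the operating point, the first control is applied, and the next solve warm-starts from a slightly shorter horizon:

```python
# Hypothetical sketch of adaptive horizon MPC on x+ = x + dt*(sin x + u),
# an unstable scalar plant; the terminal cost P*x^2 comes from the
# linearization at the origin (standing in for Al'brekht's method).
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

dt, q, r, eps = 0.1, 1.0, 0.5, 0.2
A1, B1 = 1.0 + dt, dt                    # linearization at the origin
P = solve_discrete_are([[A1]], [[B1]], [[q]], [[r]])[0, 0]

def step(x, u):
    return x + dt * (np.sin(x) + u)      # plant model

def plan(x0, N):
    """Solve the horizon-N problem; return inputs and predicted terminal state."""
    def J(u_seq):
        x, c = x0, 0.0
        for u in u_seq:
            c += q * x * x + r * u * u
            x = step(x, u)
        return c + P * x * x             # terminal cost from the linearization
    u = minimize(J, np.zeros(N)).x
    x = x0
    for ui in u:
        x = step(x, ui)
    return u, x

def adaptive_plan(x0, N, N_max=20):
    u, xN = plan(x0, N)
    while abs(xN) > eps and N < N_max:   # grow the horizon until the predicted
        N += 1                           # terminal state enters the region
        u, xN = plan(x0, N)              # where the terminal feedback is valid
    return u, N

x, N = 1.0, 2
for _ in range(40):                      # receding horizon closed loop
    u, N = adaptive_plan(x, max(2, N - 1))
    x = step(x, u[0])                    # apply only the first control
```

Warm-starting from N - 1 lets the horizon shrink as the state approaches the operating point, which is the point of AHMPC: each nonlinear program stays as small as stabilization allows.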