A decomposition algorithm for feedback min-max model predictive control
An algorithm for solving feedback min-max model predictive control for discrete-time uncertain linear systems with constraints is presented. The algorithm solves the corresponding multi-stage min-max linear optimization problem by recursively applying a decomposition technique that reduces the min-max problem to a sequence of low-complexity linear programs. It is proved that the algorithm converges to the optimal solution in finite time. Simulation results are provided to compare the proposed algorithm with other approaches.
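The core idea of reducing a min-max problem to linear programs can be sketched on a one-stage toy problem (illustrative data only, not the paper's multi-stage algorithm): minimize over the input u the worst-case cost over a finite set of disturbance vertices, via an epigraph LP.

```python
import numpy as np
from scipy.optimize import linprog

# Toy one-stage min-max problem (hypothetical data): minimize over u the
# worst-case cost max_w |x + u + w| over disturbance vertices w, written
# as an epigraph LP in variables z = [u, t]: minimize t subject to
#   x + u + w <= t  and  -(x + u + w) <= t  for every vertex w.
x = 1.0
W = [-0.5, 0.5]                       # disturbance vertices
c = np.array([0.0, 1.0])              # objective: minimize t
A, b = [], []
for w in W:
    A.append([1.0, -1.0]);  b.append(-(x + w))   #  (x+u+w) - t <= 0
    A.append([-1.0, -1.0]); b.append(x + w)      # -(x+u+w) - t <= 0
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
              bounds=[(-2, 2), (None, None)])
u_opt, worst = res.x                  # optimal input and worst-case cost
```

Here the optimal input cancels the nominal state (u = -1) and the worst-case cost equals the largest disturbance magnitude, 0.5.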
Asymptotic Stability of POD based Model Predictive Control for a semilinear parabolic PDE
In this article a stabilizing feedback control is computed for a semilinear
parabolic partial differential equation utilizing a nonlinear model predictive
control (NMPC) method. In each step of the NMPC algorithm the finite time horizon open
loop problem is solved by a reduced-order strategy based on proper orthogonal
decomposition (POD). A stability analysis is derived for the combined POD-NMPC
algorithm so that the lengths of the finite time horizons are chosen in order
to ensure the asymptotic stability of the computed feedback controls. The
proposed method is successfully tested on numerical examples.
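The POD step at the heart of the reduced-order strategy can be sketched in a few lines (toy snapshot data, not the paper's PDE setting): collect state snapshots, take their SVD, and keep the leading modes as a reduced basis.

```python
import numpy as np

# Minimal POD sketch on synthetic snapshots (illustrative only).
rng = np.random.default_rng(0)
n, m = 50, 20
Y = rng.standard_normal((n, 3)) @ rng.standard_normal((3, m))  # rank-3 snapshot matrix
U, s, _ = np.linalg.svd(Y, full_matrices=False)
ell = int(np.sum(s > 1e-10 * s[0]))   # number of significant POD modes
Psi = U[:, :ell]                      # POD basis (leading left singular vectors)
Y_red = Psi @ (Psi.T @ Y)             # projection of snapshots onto the POD subspace
err = np.linalg.norm(Y - Y_red)       # projection error (tail singular values)
```

Because the snapshots here have rank 3, three modes reproduce them to machine precision; in a PDE setting one instead truncates at a prescribed energy tolerance.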
On feasibility, stability and performance in distributed model predictive control
In distributed model predictive control (DMPC), where a centralized
optimization problem is solved in distributed fashion using dual decomposition,
it is important to keep the number of iterations in the solution algorithm,
i.e. the amount of communication between subsystems, as small as possible. At
the same time, the number of iterations must be enough to give a feasible
solution to the optimization problem and to guarantee stability of the closed
loop system. In this paper, a stopping condition for the distributed
optimization algorithm that guarantees these properties is presented. The
stopping condition is based on two theoretical contributions. First, since the
optimization problem is solved using dual decomposition, standard techniques to
prove stability in model predictive control (MPC), i.e. with a terminal cost
and a terminal constraint set that involve all state variables, do not apply.
For the case without a terminal cost or a terminal constraint set, we present a
new method to quantify the control horizon needed to ensure stability and a
prespecified performance. Second, the stopping condition is based on a novel
adaptive constraint tightening approach. Using this adaptive constraint
tightening approach, we guarantee that a primal feasible solution to the
optimization problem is found and that closed-loop stability and performance are
obtained. Numerical examples show that the number of iterations needed to
guarantee feasibility of the optimization problem, stability and a prespecified
performance of the closed-loop system can be reduced significantly using the
proposed stopping condition.
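The dual-decomposition loop with a feasibility-based stopping condition can be sketched on a toy coupled problem (illustrative, not the paper's DMPC formulation): two subsystems minimize local quadratic costs subject to a coupling constraint, each solving its subproblem for a given multiplier, while the master iterates until the coupling residual is small.

```python
# Toy problem: min (u1-1)^2 + (u2-2)^2  s.t.  u1 + u2 = 2,
# solved by dual decomposition with a simple stopping condition.
lam, alpha, d = 0.0, 0.5, 2.0
for k in range(1000):
    u1 = 1.0 - lam / 2.0        # local subproblem: argmin_u (u-1)^2 + lam*u
    u2 = 2.0 - lam / 2.0        # local subproblem: argmin_u (u-2)^2 + lam*u
    r = u1 + u2 - d             # coupling-constraint residual
    if abs(r) < 1e-6:           # stopping condition on primal feasibility
        break
    lam += alpha * r            # dual (sub)gradient ascent step
```

On this problem the residual contracts geometrically, so only a handful of "communication rounds" are needed before the stopping condition fires at the optimum u = (0.5, 1.5).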
Error estimates for a tree structure algorithm solving finite horizon control problems
In the Dynamic Programming approach to optimal control problems a crucial
role is played by the value function that is characterized as the unique
viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. It is well
known that this approach suffers from the "curse of dimensionality", and this
limitation has restricted its practical use in real-world applications. Here we
analyze a dynamic programming algorithm based on a tree structure. The tree is
built from the discrete-time dynamics, thereby avoiding the fixed space grid
that is the bottleneck for high-dimensional problems; this also removes the
projection onto the grid in the approximation of the value function. We
present some error estimates for a first order approximation based on the
tree-structure algorithm. Moreover, we analyze a pruning technique for the tree
to reduce the complexity and minimize the computational effort. Finally, we
present some numerical tests.
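A minimal version of the tree idea can be sketched on a scalar toy problem (illustrative data and dynamics, not the paper's scheme): the reachable states under a finite control set form the tree levels, nodes closer than a tolerance are merged (pruning), and the value function is computed backward on the tree instead of on a space grid.

```python
# Toy tree-structure DP: dynamics x+ = x + dt*u, controls U,
# stage cost dt*(x^2 + u^2), horizon T steps, root state 0.5.
dt, eps, T = 0.5, 1e-3, 3
U = [-1.0, 0.0, 1.0]
levels = [[0.5]]                                   # tree level 0: the root
for _ in range(T):
    nxt = []
    for x in levels[-1]:
        for u in U:
            y = x + dt * u
            if all(abs(y - z) >= eps for z in nxt):  # pruning: merge close nodes
                nxt.append(y)
    levels.append(nxt)

# Backward value iteration on the tree (no spatial grid, no interpolation
# beyond snapping a successor to its nearest tree node).
V = {(T, i): 0.0 for i in range(len(levels[T]))}
for t in range(T - 1, -1, -1):
    for i, x in enumerate(levels[t]):
        best = float("inf")
        for u in U:
            y = x + dt * u
            j = min(range(len(levels[t + 1])),
                    key=lambda j: abs(levels[t + 1][j] - y))
            best = min(best, dt * (x * x + u * u) + V[(t + 1, j)])
        V[(t, i)] = best
```

For this root state the cheapest policy is to apply u = 0 throughout, giving a value of 0.375 at the root node.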
Rate analysis of inexact dual first order methods: Application to distributed MPC for network systems
In this paper we propose and analyze two dual methods based on inexact
gradient information and averaging that generate approximate primal solutions
for smooth convex optimization problems. The complicating constraints are moved
into the cost using the Lagrange multipliers. The dual problem is solved by
inexact first order methods based on approximate gradients and we prove
sublinear rate of convergence for these methods. In particular, we provide, for
the first time, estimates on the primal feasibility violation and primal and
dual suboptimality of the generated approximate primal and dual solutions.
Moreover, we solve approximately the inner problems with a parallel coordinate
descent algorithm and we show that it has linear convergence rate. In our
analysis we rely on the Lipschitz property of the dual function and inexact
dual gradients. Further, we apply these methods to distributed model predictive
control for network systems. By tightening the complicating constraints we are
also able to ensure the primal feasibility of the approximate solutions
generated by the proposed algorithms. We obtain a distributed control strategy
that has the following features: state and input constraints are satisfied,
stability of the plant is guaranteed, whilst the number of iterations for the
suboptimal solution can be precisely determined.
Comment: 26 pages, 2 figures
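The averaging mechanism for recovering an approximate primal solution can be sketched on a toy equality-constrained QP (illustrative, not the paper's setup): the inner problem has a closed form in the multiplier, the dual update uses the constraint residual as gradient, and the running average of the primal iterates is returned.

```python
# Toy problem: min 0.5*||u||^2  s.t.  u1 + u2 = 1,
# solved by a dual gradient method with primal averaging.
alpha, lam = 0.25, 0.0
u_avg = [0.0, 0.0]
K = 200
for k in range(1, K + 1):
    u = [-lam, -lam]                    # inner minimizer for the given multiplier
    g = u[0] + u[1] - 1.0               # dual gradient = constraint residual
    lam += alpha * g                    # dual gradient step
    u_avg = [a + (x - a) / k for a, x in zip(u_avg, u)]  # running primal average
```

The multiplier converges geometrically to -0.5, while the averaged primal iterates approach the optimizer (0.5, 0.5) at the slower, sublinear rate the abstract's analysis quantifies.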
A Decomposition Approach to Multi-Vehicle Cooperative Control
We present methods that generate cooperative strategies for multi-vehicle
control problems using a decomposition approach. By introducing a set of tasks
to be completed by the team of vehicles and a task execution method for each
vehicle, we decompose the problem into a combinatorial component and a
continuous component. The continuous component of the problem is captured by
task execution, and the combinatorial component is captured by task assignment.
In this paper, we present a solver for task assignment that generates
near-optimal assignments quickly and can be used in real-time applications. To
motivate our methods, we apply them to an adversarial game between two teams of
vehicles. One team is governed by simple rules and the other by our algorithms.
In our study of this game we found phase transitions, showing that the task
assignment problem is most difficult to solve when the capabilities of the
adversaries are comparable. Finally, we implement our algorithms in a
multi-level architecture with a variable replanning rate at each level to
provide feedback on a dynamically changing and uncertain environment.
Comment: 36 pages, 19 figures; for the associated web page see
http://control.mae.cornell.edu/earl/decom
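The combinatorial component, task assignment, can be sketched with a hypothetical cost matrix (not data from the paper): given vehicle-to-task execution costs, compute a one-to-one assignment; the paper's fast near-optimal solver plays the role of the exact solver used here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 3-vehicle, 3-task cost matrix: entry [i, j] is the cost
# for vehicle i to execute task j (illustrative numbers).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
total = cost[rows, cols].sum()            # total cost of the assignment
```

For real-time use, the exact Hungarian-style solve above would be replaced by a fast heuristic of the kind the abstract describes, trading optimality for speed.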