217 research outputs found
Stability and performance in MPC using a finite-tail cost
In this paper, we provide a stability and performance analysis of model
predictive control (MPC) schemes based on finite-tail costs. We study the MPC
formulation originally proposed by Magni et al. (2001) wherein the standard
terminal penalty is replaced by a finite-horizon cost of some stabilizing
control law. In order to analyse the closed loop, we leverage the more recent
technical machinery developed for MPC without terminal ingredients. For a
specified set of initial conditions, we obtain sufficient conditions for
stability and a performance bound depending on the prediction horizon and
the extended horizon used for the terminal penalty. The main practical benefit
of the considered finite-tail cost MPC formulation is the simpler offline
design in combination with typically significantly less restrictive bounds on
the prediction horizon to ensure stability. We demonstrate the benefits of the
considered MPC formulation using the classical example of a four-tank system.
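The finite-tail cost idea above can be sketched on an unconstrained scalar linear-quadratic toy problem, where the extended-horizon cost of a stabilizing law reduces to a quadratic terminal weight and the MPC problem to a backward Riccati recursion. All system parameters, the tail gain, and the horizons below are illustrative assumptions, not values from the paper.

```python
# Sketch of finite-tail cost MPC on an unconstrained scalar system
# x+ = a*x + b*u with stage cost q*x^2 + r*u^2.  The terminal penalty is the
# M-step cost of rolling out a stabilizing law u = K_tail*x; in this
# linear-quadratic toy case that cost is a quadratic weight p*x^2, so the
# MPC problem reduces to a backward Riccati recursion.

def tail_cost_weight(a, b, q, r, K_tail, M):
    """p such that p*x^2 is the M-step cost of applying u = K_tail*x from x."""
    acl = a + b * K_tail            # closed-loop dynamics under the tail law
    p, s = 0.0, 1.0                 # s accumulates acl**(2*i)
    for _ in range(M):
        p += (q + r * K_tail * K_tail) * s
        s *= acl * acl
    return p

def mpc_gain(a, b, q, r, N, p):
    """First-step feedback gain of the horizon-N problem with terminal
    weight p, from the backward Riccati recursion."""
    P, K = p, 0.0
    for _ in range(N):
        K = -a * b * P / (r + b * b * P)
        P = q + a * a * P + a * b * P * K
    return K

a, b, q, r = 1.2, 1.0, 1.0, 0.1     # assumed unstable open loop (a > 1)
p = tail_cost_weight(a, b, q, r, K_tail=-0.8, M=10)   # a + b*K_tail = 0.4
K_mpc = mpc_gain(a, b, q, r, N=3, p=p)

x = 5.0
for _ in range(40):
    x = (a + b * K_mpc) * x         # simulate the MPC closed loop
```

Here the tail cost is available in closed form; in general it would be evaluated by simulating the stabilizing law over the extended horizon, which is exactly the simpler offline design the abstract refers to.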
Convex Chance Constrained Model Predictive Control
We consider the Chance Constrained Model Predictive Control problem for
polynomial systems subject to disturbances. In this problem, we aim to find
an optimal control input for a given disturbed dynamical system that minimizes
a given cost function subject to probabilistic constraints over a finite horizon. The
control laws provided have a predefined (low) risk of not reaching the desired
target set. Building on the theory of measures and moments, a sequence of
finite-dimensional semidefinite programs is provided, whose solutions are shown
to converge to the optimal solution of the original problem. Numerical examples
are presented to illustrate the computational performance of the proposed
approach.
Comment: This work has been submitted to the 55th IEEE Conference on Decision
and Control.
Risk Sensitive Control of Markov Processes in Countable State Space
In this paper we consider infinite-horizon risk-sensitive control of Markov processes with discrete time and denumerable state space. This problem is solved by proving, under suitable conditions, that there exists a bounded solution to the dynamic programming equation. The dynamic programming equation is transformed into an Isaacs equation for a stochastic game, and the vanishing discount method is used to study its solution. In addition, we prove that the existence conditions are also necessary.
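As a concrete finite-horizon, finite-state illustration of the risk-sensitive criterion, the exponential-of-sum cost admits a multiplicative dynamic programming recursion. The chain, costs, and risk parameter below are toy assumptions; the paper itself treats the infinite-horizon problem on a countable state space.

```python
# Finite-horizon risk-sensitive DP on a toy 3-state chain.  The
# certainty-equivalent cost (1/theta)*log E[exp(theta * sum of costs)]
# satisfies the multiplicative recursion
#   V_k(x) = min_a  exp(theta*c(x,a)) * sum_y P(y|x,a) * V_{k+1}(y),  V_N = 1.
import math

states, actions, theta = (0, 1, 2), (0, 1), 0.5

def cost(x, a):
    return x + 0.5 * a              # toy stage cost: action 1 is costlier

def kernel(x, a):
    """Toy transition probabilities: action 1 pulls the chain toward 0,
    action 0 lets it drift upward."""
    target = 0 if a == 1 else min(x + 1, 2)
    probs = {target: 0.8 if a == 1 else 0.7}
    probs[x] = probs.get(x, 0.0) + (0.2 if a == 1 else 0.3)
    return probs

def certainty_equivalent(horizon):
    V = {x: 1.0 for x in states}    # V_N(x) = exp(theta * 0) = 1
    for _ in range(horizon):
        V = {x: min(math.exp(theta * cost(x, a))
                    * sum(p * V[y] for y, p in kernel(x, a).items())
                    for a in actions)
             for x in states}
    return {x: math.log(V[x]) / theta for x in states}

values = certainty_equivalent(8)
```

For theta near zero the certainty equivalent approaches the risk-neutral expected cost; larger theta penalizes the variability of the accumulated cost, which is the sense in which the controller is risk-sensitive.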
Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law incorporating a dynamic
feedback gain to minimise a quadratic cost function subject to a single chance
constraint. The feedback gain is selected from a set of candidates generated by
solutions of multiobjective optimisation problems solved by Dynamic Programming
(DP). We provide two methods for gain selection based on minimising upper
bounds on predicted costs. The chance constraint is defined as a discounted sum
of violation probabilities on an infinite horizon. By penalising violation
probabilities close to the initial time and ignoring violation probabilities in
the far future, this form of constraint allows for an MPC law with guarantees
of recursive feasibility without an assumption of boundedness of the
disturbance. A computationally convenient MPC optimisation problem is
formulated using Chebyshev's inequality and we introduce an online
constraint-tightening technique to ensure recursive feasibility. The closed
loop system is guaranteed to satisfy the chance constraint and a quadratic
stability condition. With dynamic feedback gain selection, the conservativeness
of Chebyshev's inequality is mitigated, the closed loop cost is reduced, and the
set of feasible initial conditions is enlarged. A numerical example is given to
illustrate these properties.
Comment: 14 pages.
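The way a Chebyshev-type bound turns a chance constraint into a computationally convenient deterministic one can be sketched as follows, using the one-sided (Cantelli) inequality P(X - mu >= t) <= sigma^2/(sigma^2 + t^2); the numeric values and the Gaussian check are illustrative assumptions, not the paper's example.

```python
# Replacing a single chance constraint P(x >= b) <= eps by a deterministic
# tightened bound on the predicted mean: by the one-sided Chebyshev
# (Cantelli) inequality it suffices that  mu + sigma*sqrt((1-eps)/eps) <= b.
import math
import random

def cantelli_margin(sigma, eps):
    """Back-off added to the mean so the chance constraint is guaranteed."""
    return sigma * math.sqrt((1.0 - eps) / eps)

def tightened_bound(b, sigma, eps):
    """Largest admissible predicted mean replacing P(x >= b) <= eps."""
    return b - cantelli_margin(sigma, eps)

eps, sigma, b = 0.1, 1.0, 5.0
mu_max = tightened_bound(b, sigma, eps)

# Monte-Carlo sanity check with Gaussian disturbances (an assumption; the
# Cantelli bound needs only mean and variance, hence is conservative).
random.seed(0)
samples = [random.gauss(mu_max, sigma) for _ in range(100_000)]
violation_rate = sum(s >= b for s in samples) / len(samples)
```

The gap between `violation_rate` and `eps` for the Gaussian case is exactly the conservativeness that the abstract says is mitigated by selecting the feedback gain dynamically.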
Reducing the Prediction Horizon in NMPC: An Algorithm Based Approach
In order to guarantee stability, known results for MPC without additional
terminal costs or endpoint constraints often require rather large prediction
horizons. Still, stable behavior of closed loop solutions can often be observed
even for shorter horizons. Here, we make use of the recent observation that
stability can be guaranteed for smaller prediction horizons via Lyapunov
arguments if more than only the first control is implemented. Since such a
procedure may be harmful in terms of robustness, we derive conditions which
allow us to increase the rate at which state measurements are used for feedback
while maintaining stability and desired performance specifications. Our main
contribution consists in developing two algorithms based on the deduced
conditions and a corresponding stability theorem which ensures asymptotic
stability for the MPC closed loop for significantly shorter prediction
horizons.
Comment: 6 pages, 3 figures.
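The multi-step implementation idea can be sketched for an unconstrained scalar linear-quadratic MPC without terminal cost, where the optimal open-loop sequence is given by time-varying Riccati gains: the first m of them are applied before re-optimizing. All system and tuning values are illustrative assumptions.

```python
# m-step MPC implementation: apply the first m inputs of each optimized
# sequence instead of only the first one.  For an unconstrained scalar
# linear-quadratic problem with zero terminal cost, the optimal sequence is
# generated by time-varying Riccati gains, keeping the sketch self-contained.

def horizon_gains(a, b, q, r, N):
    """Gains of the horizon-N problem with zero terminal cost;
    gains[k] is the optimal gain k steps into the prediction."""
    P, gains = 0.0, []
    for _ in range(N):
        K = -a * b * P / (r + b * b * P)
        gains.append(K)
        P = q + a * a * P + a * b * P * K
    return gains[::-1]

def run_mpc(a, b, q, r, N, m, x0, steps):
    """Closed loop that re-optimizes every m steps; in this noise-free
    setting the re-planned inputs coincide with the Riccati gains."""
    gains = horizon_gains(a, b, q, r, N)
    x, traj = x0, [x0]
    for t in range(steps):
        u = gains[t % m] * x        # t % m = position inside the current block
        x = a * x + b * u
        traj.append(x)
    return traj

traj = run_mpc(a=1.5, b=1.0, q=1.0, r=1.0, N=5, m=2, x0=4.0, steps=20)
```

With disturbances the later gains in each m-block act on stale information, which is the robustness concern the abstract raises and the motivation for increasing the measurement rate again once the derived conditions permit it.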
Energy-Efficient Transmission Scheduling with Strict Underflow Constraints
We consider a single source transmitting data to one or more receivers/users
over a shared wireless channel. Due to random fading, the wireless channel
conditions vary with time and from user to user. Each user has a buffer to
store received packets before they are drained. At each time step, the source
determines how much power to use for transmission to each user. The source's
objective is to allocate power in a manner that minimizes an expected cost
measure, while satisfying strict buffer underflow constraints and a total power
constraint in each slot. The expected cost measure is composed of costs
associated with power consumption from transmission and packet holding costs.
The primary application motivating this problem is wireless media streaming.
For this application, the buffer underflow constraints prevent the user buffers
from emptying, so as to maintain playout quality. In the case of a single user
with linear power-rate curves, we show that a modified base-stock policy is
optimal under the finite horizon, infinite horizon discounted, and infinite
horizon average expected cost criteria. For a single user with piecewise-linear
convex power-rate curves, we show that a finite generalized base-stock policy
is optimal under all three expected cost criteria. We also present the
sequences of critical numbers that complete the characterization of the optimal
control laws in each of these cases when some additional technical conditions
are satisfied. We then analyze the structure of the optimal policy for the case
of two users. We conclude with a discussion of methods to identify
implementable near-optimal policies for the most general case of M users.
Comment: 109 pages, 11 PDF figures, template.tex is the main file. We have
significantly revised the paper from version 1. Additions include the case of
a single receiver with piecewise-linear convex power-rate curves, the case of
two receivers, and the infinite horizon average expected cost problem.
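The base-stock structure for a single user with a linear power-rate curve can be sketched as follows: in each slot, transmit just enough to raise the buffer to a channel-dependent critical level, capped by the power constraint. The critical levels, rates, and all numeric values are illustrative assumptions in a fluid (fractional-packet) approximation, not the paper's derived quantities.

```python
# Sketch of a (modified) base-stock transmission policy for one receiver
# with a linear power-rate curve: top the buffer up to a channel-dependent
# base-stock level, subject to the per-slot power cap (the "modified" part).
import random

base_stock = {'good': 4, 'bad': 8}       # assumed target buffer levels
rate_per_watt = {'good': 2.0, 'bad': 0.5}
power_cap = 6.0                          # per-slot power constraint
drain = 2                                # packets played out per slot

def transmit(buffer_level, channel):
    """Packets sent and power used under the modified base-stock rule."""
    want = max(0, base_stock[channel] - buffer_level)
    can = rate_per_watt[channel] * power_cap     # max packets this slot
    sent = min(want, can)
    return sent, sent / rate_per_watt[channel]

# Simulate a random channel and count buffer underflow events.
random.seed(1)
buf, underflows = 6, 0
for _ in range(200):
    channel = random.choice(['good', 'bad'])
    sent, power = transmit(buf, channel)
    buf = buf + sent - drain
    if buf < 0:
        underflows += 1
        buf = 0
```

Stocking up further in bad channel states (the higher base-stock level) buys slack against future fades, which is how the policy keeps the playout buffer from emptying while spending power when it is cheap.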