An Improved Constraint-Tightening Approach for Stochastic MPC
The problem of achieving a good trade-off in Stochastic Model Predictive
Control between the competing goals of improving the average performance and
reducing conservativeness, while still guaranteeing recursive feasibility and
low computational complexity, is addressed. We propose a novel, less
restrictive scheme that is based on considering stability and recursive
feasibility separately. Through an explicit first-step constraint we guarantee
recursive feasibility. In particular, we guarantee the existence of a feasible
input trajectory at each time instant, but we only require that the input
sequence computed at time $k$ remains feasible at time $k+1$ for most
disturbances, though not necessarily for all, which suffices for stability. To
overcome the computational complexity of probabilistic constraints, we propose
an offline constraint-tightening procedure, which can be efficiently solved via
a sampling approach to the desired accuracy. The online computational
complexity of the resulting Model Predictive Control (MPC) algorithm is similar
to that of a nominal MPC with a terminal region. A numerical example, which
provides a comparison with classical, recursively feasible Stochastic MPC and
Robust MPC, shows the efficacy of the proposed approach.
Comment: Paper has been submitted to ACC 201
Stochastic Model Predictive Control with Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law to minimise a quadratic cost
function subject to a chance constraint. The chance constraint is defined as a
discounted sum of violation probabilities on an infinite horizon. By penalising
violation probabilities close to the initial time and ignoring violation
probabilities in the far future, this form of constraint enables the
feasibility of the online optimisation to be guaranteed without an assumption
of boundedness of the disturbance. A computationally convenient MPC
optimisation problem is formulated using Chebyshev's inequality, and we
introduce an online constraint-tightening technique to ensure recursive
feasibility based on knowledge of a suboptimal solution. The closed-loop system
is guaranteed to satisfy the chance constraint and a quadratic stability
condition.
Comment: 6 pages, Conference Proceedings
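In generic notation (the symbols below are placeholders rather than the paper's), the discounted chance constraint has the form

\[
\sum_{k=0}^{\infty} \gamma^{k}\, \Pr\{x_k \notin \mathcal{X}\} \;\le\; \epsilon, \qquad \gamma \in (0,1),
\]

so violations near the initial time carry the most weight while those in the far future are discounted away, and the sum converges without any bound on the disturbance. Chebyshev's inequality then replaces each probability with a moment-based surrogate: for a scalar $s_k$ with mean $\mu_k$ and variance $\sigma_k^2$,

\[
\Pr\{|s_k - \mu_k| \ge a\} \;\le\; \frac{\sigma_k^2}{a^2},
\]

so the constraint can be imposed through the first two moments of the predicted state, which are computable for linear dynamics with additive disturbances.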
Active Classification for POMDPs: a Kalman-like State Estimator
The problem of state tracking with active observation control is considered
for a system modeled by a discrete-time, finite-state Markov chain observed
through conditionally Gaussian measurement vectors. The measurement model
statistics are shaped by the underlying state and an exogenous control input,
which influence the observations' quality. Exploiting an innovations approach,
an approximate minimum mean-squared error (MMSE) filter is derived to estimate
the Markov chain system state. To optimize the control strategy, the associated
mean-squared error is used as an optimization criterion in a partially
observable Markov decision process formulation. A stochastic dynamic
programming algorithm is proposed to solve for the optimal solution. To enhance
the quality of system state estimates, approximate MMSE smoothing estimators
are also derived. Finally, the performance of the proposed framework is
illustrated on the problem of physical activity detection in wireless body
sensing networks. The power of the proposed framework lies in its ability
to accommodate a broad spectrum of active classification applications, including
sensor management for object classification and tracking, estimation of sparse
signals, and radar scheduling.
Comment: 38 pages, 6 figures
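As a minimal stand-in for the kind of state tracker the abstract describes, the sketch below runs a standard forward (HMM) filter for a finite-state Markov chain observed through conditionally Gaussian measurements; the transition matrix and per-state measurement statistics are assumptions, and the paper's innovations-based approximate MMSE filter and the observation-control optimisation are not reproduced.

import numpy as np

# Sketch: forward filter for a finite-state Markov chain with conditionally
# Gaussian observations. All numbers below are illustrative assumptions.
P  = np.array([[0.95, 0.05],
               [0.10, 0.90]])          # chain transition matrix
mu = np.array([0.0, 2.0])              # per-state measurement means
sd = np.array([1.0, 0.5])              # per-state measurement std devs

def filter_step(pi, y):
    """One prediction step plus Bayes update of the belief pi given y."""
    pred = pi @ P                                        # time update
    lik = np.exp(-0.5 * ((y - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    post = pred * lik                                    # measurement update
    return post / post.sum()

pi = np.array([0.5, 0.5])              # initial belief over the two states
for y in [0.1, 1.8, 2.2, -0.3]:        # toy measurement stream
    pi = filter_step(pi, y)
print(pi)                              # posterior distribution over states

In the active-classification setting, a control input would additionally shape mu and sd at each step, and the controller would choose it to reduce the expected estimation error.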
Necessary Condition for Near Optimal Control of Linear Forward-backward Stochastic Differential Equations
This paper investigates near-optimal control for a class of linear stochastic
control systems governed by forward-backward stochastic differential equations
(FBSDEs), where both the drift and diffusion terms are allowed to depend on the
control and the control domain is not assumed to be convex. In previous work
(Theorem 3.1) of the second and third authors [\textit{Automatica} \textbf{46}
(2010) 397--404], a problem of near-optimal control with control-dependent
diffusion was addressed, and the current paper can be viewed as a direct
response to it. The necessary condition for near-optimality is established
within the framework of the optimality variational principle developed by Yong
[\textit{SIAM J. Control Optim.} \textbf{48} (2010) 4119--4156] and is obtained
via the convergence technique used by Wu [\textit{Automatica} \textbf{49}
(2013) 1473--1480] to treat the optimal control of FBSDEs with unbounded
control domains. Some new estimates are given to handle the near-optimality.
In addition, an illustrative example is discussed.
Comment: To appear in International Journal of Control
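For orientation only, a generic controlled linear FBSDE with control-dependent diffusion has the shape below; the coefficients, driver $f$, and terminal function $g$ are placeholders, not the system studied in the paper:

\[
\begin{aligned}
dx_t &= (A x_t + B u_t)\,dt + (C x_t + D u_t)\,dW_t, & x_0 &= a,\\
-dy_t &= f(t, x_t, y_t, z_t, u_t)\,dt - z_t\,dW_t, & y_T &= g(x_T),
\end{aligned}
\]

where $x$ is the forward state, $(y, z)$ the backward pair, and $u$ a control taking values in a possibly non-convex set; $D \neq 0$ is what makes the diffusion control-dependent and rules out the simplest first-order variational arguments.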
Generalized Stochastic Gradient Learning
We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least-squares learning. SG algorithms are sensitive to units of measurement, and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.
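As a toy illustration of constant-gain SG learning in a self-referential model, the sketch below estimates the perceived law of motion $p_t = a + b w_t$ for a cobweb-type economy $p_t = \mu + \alpha E_t^{*} p_t + \delta w_t + \eta_t$; the model and numbers are assumptions for illustration, and a GSG variant would premultiply the update by a fixed positive-definite weighting matrix.

import numpy as np

# Sketch: constant-gain stochastic gradient (SG) learning in a self-referential
# model. True law: p = mu + alpha * E*[p] + delta * w + noise, with forecasts
# formed from the perceived law of motion p = a + b * w. Illustrative only.
rng = np.random.default_rng(0)
mu, alpha, delta = 1.0, 0.5, 0.8       # structural parameters (alpha < 1: E-stable)
gain = 0.02                            # constant gain
phi = np.zeros(2)                      # estimates [a, b]

for t in range(20_000):
    w = rng.normal()
    z = np.array([1.0, w])             # regressor vector
    p = mu + alpha * (phi @ z) + delta * w + 0.1 * rng.normal()
    phi = phi + gain * z * (p - phi @ z)   # SG update: no R^{-1}, unlike RLS

print(phi)   # hovers near the REE values [mu/(1-alpha), delta/(1-alpha)] = [2.0, 1.6]

Because the update omits the inverse moment matrix that recursive least squares carries, rescaling $w$ changes the learning dynamics, which is exactly the units-of-measurement sensitivity the abstract mentions.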