Stochastic Model Predictive Control with Discounted Probabilistic Constraints
This paper considers linear discrete-time systems with additive disturbances,
and designs a Model Predictive Control (MPC) law to minimise a quadratic cost
function subject to a chance constraint. The chance constraint is defined as a
discounted sum of violation probabilities on an infinite horizon. By penalising
violation probabilities close to the initial time and ignoring violation
probabilities in the far future, this form of constraint enables the
feasibility of the online optimisation to be guaranteed without an assumption
of boundedness of the disturbance. A computationally convenient MPC
optimisation problem is formulated using Chebyshev's inequality, and we
introduce an online constraint-tightening technique to ensure recursive
feasibility based on knowledge of a suboptimal solution. The closed-loop
system is guaranteed to satisfy the chance constraint and a quadratic
stability condition.
Comment: 6 pages, Conference Proceeding
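The constraint-tightening idea above can be illustrated with a short sketch. The one-sided Chebyshev (Cantelli) bound used here is a standard way to turn a chance constraint P(x > b) <= epsilon into a deterministic bound on the mean; the function name and the specific one-sided form are illustrative assumptions, not necessarily the exact formulation of the paper.

```python
import math

def chebyshev_tightening(mean, var, epsilon):
    """One-sided Chebyshev (Cantelli) inequality:
    P(X - mean >= t) <= var / (var + t**2).
    Hence P(X > b) <= epsilon holds whenever
    b >= mean + sqrt(var * (1 - epsilon) / epsilon),
    which gives a deterministic, tightened constraint bound."""
    return mean + math.sqrt(var * (1.0 - epsilon) / epsilon)

# Example: predicted state with mean 0 and variance 0.5;
# limit the violation probability to 10%
b_min = chebyshev_tightening(0.0, 0.5, 0.1)  # smallest admissible bound
```

As expected, shrinking epsilon (demanding a smaller violation probability) pushes the tightened bound further from the mean, which is the distribution-free price paid for not assuming bounded disturbances.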
Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance constraint
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems, under the sufficient condition that each term is analytic
with respect to the states and separable with respect to the inputs, states,
and parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states, and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties, in contrast to
nonlinear model predictive control approaches.
Comment: Submitted to Journal of Process Control
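The sample approximation of a chance constraint described above can be sketched in a few lines. This is a plain Monte Carlo stand-in for the paper's polynomial-chaos sampling: the constraint function `g` and its arguments are illustrative, and the paper's a-priori tightening analysis is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_satisfaction(g, u, theta_samples):
    """Estimate P(g(u, theta) <= 0) as the fraction of parameter
    samples for which the constraint is satisfied (the quantity that
    the sample-approximated chance constraint bounds from below)."""
    vals = np.array([g(u, th) for th in theta_samples])
    return np.mean(vals <= 0.0)

# Illustrative constraint g(u, theta) = theta * u - 1 <= 0 with an
# uncertain, time-invariant parameter theta ~ N(1, 0.1)
g = lambda u, th: th * u - 1.0
theta = rng.normal(1.0, 0.1, size=2000)
p = empirical_satisfaction(g, 0.8, theta)  # empirical satisfaction probability
```

In a sample-approximated optimization one would require `p >= 1 - epsilon` (with a tightening margin chosen from the sample size, as the statistical analysis in the paper does) rather than the exact probability, which is unavailable in closed form for nonlinear dynamics.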
Cautious NMPC with Gaussian Process Dynamics for Autonomous Miniature Race Cars
This paper presents an adaptive high performance control method for
autonomous miniature race cars. Racing dynamics are notoriously hard to model
from first principles, which is addressed by means of a cautious nonlinear
model predictive control (NMPC) approach that learns to improve its dynamics
model from data and safely increases racing performance. The approach makes use
of a Gaussian Process (GP) and takes residual model uncertainty into account
through a chance constrained formulation. We present a sparse GP approximation
with dynamically adjusting inducing inputs, enabling a real-time implementable
controller. The formulation is demonstrated in simulations, which show
significant improvement with respect to both lap time and constraint
satisfaction compared to an NMPC without model learning.
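The learned residual-dynamics model at the core of this approach can be sketched with exact Gaussian process regression. This dense-GP sketch (squared-exponential kernel, fixed hyperparameters) is only a stand-in: the paper uses a sparse approximation with dynamically adjusted inducing inputs to make the controller real-time capable, and the toy residual below is an assumption for illustration.

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    """Exact GP regression with a squared-exponential kernel:
    returns the posterior mean and variance of the model residual
    at test inputs Xs, given training inputs X and residuals y."""
    def k(A, B):
        d = A[:, None, :] - B[None, :, :]
        return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / ell**2)
    K = k(X, X) + sn**2 * np.eye(len(X))      # noisy training covariance
    Ks = k(Xs, X)                             # test/train cross-covariance
    mu = Ks @ np.linalg.solve(K, y)           # posterior mean
    var = sf**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, var

# Toy 1-D residual d(x) = sin(x) observed with noise
X = np.linspace(0.0, 5.0, 30)[:, None]
y = np.sin(X[:, 0]) + 0.05 * np.random.default_rng(1).normal(size=30)
mu, var = gp_posterior(X, y, np.array([[2.5]]))
```

The posterior variance is what a cautious chance-constrained NMPC feeds into its constraints: where the GP is uncertain, the constraints tighten and the controller backs off.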
On the Comparison of Stochastic Model Predictive Control Strategies Applied to a Hydrogen-based Microgrid
In this paper, a performance comparison among three well-known stochastic model
predictive control approaches, namely, multi-scenario, tree-based, and chance-constrained
model predictive control is presented. To this end, three predictive controllers have
been designed and implemented in a real renewable-hydrogen-based microgrid. The
experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel
cell as the main equipment. The experimental results show significant
differences in the behaviour of the plant components, mainly in terms of
energy use, for each implemented technique.
Effectiveness, performance, advantages, and disadvantages of these techniques
are extensively discussed and analyzed to give some valid criteria when selecting an
appropriate stochastic predictive controller.
Ministerio de Economía y Competitividad DPI2013-46912-C2-1-R
Ministerio de Economía y Competitividad DPI2013-482443-C2-1-
On controllability of neuronal networks with constraints on the average of control gains
Control gains play an important role in the control of a natural or a technical system since they reflect how much resource is required to optimize a certain control objective. This paper is concerned with the controllability of neuronal networks with constraints on the average value of the control gains injected in driver nodes, which are in accordance with engineering and biological backgrounds. In order to deal with the constraints on control gains, the controllability problem is transformed into a constrained optimization problem (COP). The introduction of the constraints on the control gains unavoidably leads to substantial difficulty in finding feasible solutions as well as refining them. As such, a modified dynamic hybrid framework (MDyHF) is developed to solve this COP, based on an adaptive differential evolution and the concept of Pareto dominance. By comparing with statistical methods and several recently reported constrained optimization evolutionary algorithms (COEAs), we show that our proposed MDyHF is competitive and promising in studying the controllability of neuronal networks. Based on the MDyHF, we proceed to show the controlling regions under different levels of constraints. It is revealed that we should allocate the control gains economically when strong constraints are considered. In addition, it is found that as the constraints become more restrictive, the driver nodes are more likely to be selected from the nodes with a large degree. The results and methods presented in this paper will provide useful insights into developing new techniques to control a realistic complex network efficiently.
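The COP formulation above can be illustrated with a bare-bones differential evolution sketch. This rand/1/bin scheme with a static penalty on the average-gain constraint is only a minimal stand-in for the paper's MDyHF, which instead combines adaptive DE with Pareto-dominance constraint handling; the toy objective and gain bounds are assumptions.

```python
import numpy as np

def de_penalty(f, avg_max, dim, pop=30, iters=200, F=0.7, CR=0.9, seed=0):
    """Minimal differential evolution (rand/1/bin) minimizing f(gains)
    subject to mean(gains) <= avg_max, handled by a static penalty."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0.0, 2.0, size=(pop, dim))      # candidate gain vectors

    def penalized(x):
        viol = max(0.0, np.mean(x) - avg_max)       # average-gain violation
        return f(x) + 1e3 * viol

    cost = np.array([penalized(x) for x in P])
    for _ in range(iters):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 2.0)   # mutation
            cross = rng.random(dim) < CR                  # binomial crossover
            trial = np.where(cross, mutant, P[i])
            tc = penalized(trial)
            if tc < cost[i]:                              # greedy selection
                P[i], cost[i] = trial, tc
    return P[np.argmin(cost)]

# Toy objective ||g - 1||^2 (control effort) with average gain capped at 0.5:
# the constraint is active, so the gains settle near the budget boundary
best = de_penalty(lambda g: np.sum((g - 1.0) ** 2), avg_max=0.5, dim=4)
```

Even in this toy setting the qualitative finding of the paper shows up: when the average-gain budget binds, the optimizer spreads the gains economically rather than driving any of them to their unconstrained optimum.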