Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters
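As a rough sketch of the validation step described in this abstract: Hoeffding's inequality gives a high-confidence upper bound on the true probability that the learned controller's input leaves the robustness bound, computed from i.i.d. validation samples. The function below is illustrative only; the sample counts and confidence level are hypothetical, not taken from the paper.

```python
import math

def hoeffding_validation(num_samples: int, num_violations: int,
                         confidence: float = 0.99) -> float:
    """Upper-bound the true violation probability of a learned controller.

    Each validation sample is a Bernoulli trial (1 = the learned input
    falls outside the robust MPC's admissible input bound). Hoeffding's
    inequality states P(p_true > p_hat + eps) <= exp(-2*N*eps^2), so with
    probability at least `confidence`:
        p_true <= p_hat + sqrt(ln(1/(1-confidence)) / (2*N)).
    """
    p_hat = num_violations / num_samples
    eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * num_samples))
    return p_hat + eps
```

If the returned bound is below the violation level tolerated by the robust MPC design, the closed-loop guarantee holds with the stated confidence.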
A Statistical Learning Theory Approach for Uncertain Linear and Bilinear Matrix Inequalities
In this paper, we consider the problem of minimizing a linear functional
subject to uncertain linear and bilinear matrix inequalities, which depend in a
possibly nonlinear way on a vector of uncertain parameters. Motivated by recent
results in statistical learning theory, we show that probabilistic guaranteed
solutions can be obtained by means of randomized algorithms. In particular, we
show that the Vapnik-Chervonenkis dimension (VC-dimension) of the two problems
is finite, and we compute upper bounds on it. In turn, these bounds allow us to
derive explicitly the sample complexity of these problems. Using these bounds,
in the second part of the paper, we derive a sequential scheme, based on a
sequence of optimization and validation steps. The algorithm is along the same
lines as recent schemes proposed for similar problems, but improves on them in
both complexity and generality. The effectiveness of this approach is shown
using a linear model of a robot manipulator subject to uncertain parameters.
Comment: 19 pages, 2 figures, Accepted for Publication in Automatica
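To make the sample-complexity idea concrete, a generic VC-type bound can be evaluated numerically: given a finite VC-dimension d, it returns a number of random uncertainty samples sufficient for (eps, delta)-probabilistic guarantees. The constants below are from the classical Blumer et al. (1989) bound, not the paper's own (typically sharper) estimates, so treat this as a hypothetical stand-in.

```python
import math

def vc_sample_bound(eps: float, delta: float, vc_dim: int) -> int:
    """Samples sufficient for accuracy eps and confidence 1 - delta when
    the problem class has VC-dimension vc_dim (Blumer et al. constants):
        N >= (4/eps)*log2(2/delta) + (8*vc_dim/eps)*log2(13/eps)
    """
    n = (4.0 / eps) * math.log2(2.0 / delta) \
        + (8.0 * vc_dim / eps) * math.log2(13.0 / eps)
    return math.ceil(n)
```

The bound grows only linearly in the VC-dimension and logarithmically in 1/delta, which is what makes the randomized approach tractable once finite VC-dimension is established.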
Investigation of air transportation technology at Princeton University, 1991-1992
The Air Transportation Research Program at Princeton University proceeded along six avenues during the past year: (1) intelligent flight control; (2) computer-aided control system design; (3) neural networks for flight control; (4) stochastic robustness of flight control systems; (5) microburst hazards to aircraft; and (6) fundamental dynamics of atmospheric flight. This research has resulted in a number of publications, including archival papers and conference papers. An annotated bibliography of publications that appeared between June 1991 and June 1992 appears at the end of this report. The research that these papers describe was supported in whole or in part by the Joint University Program, including work that was completed prior to the reporting period.
Experience Transfer for Robust Direct Data-Driven Control
Learning-based control uses data to design efficient controllers for specific
systems. When multiple systems are involved, experience transfer usually
focuses on data availability and controller performance yet neglects robustness
to variations between systems. In contrast, this letter explores experience
transfer from a robustness perspective. We leverage the transfer to design
controllers that are robust not only to the uncertainty regarding an individual
agent's model but also to the choice of agent in a fleet. Experience transfer
enables the design of safe and robust controllers that work out of the box for
all systems in a heterogeneous fleet. Our approach combines scenario
optimization and recent formulations for direct data-driven control without the
need to estimate a model of the system or determine uncertainty bounds for its
parameters. We demonstrate the benefits of our data-driven robustification
method through a numerical case study and obtain learned controllers that
generalize well from a small number of open-loop trajectories in a quadcopter
simulation.
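The scenario-optimization idea in this abstract can be illustrated on a toy problem: sample system realizations from the fleet, then pick one controller that satisfies the design constraint for every sampled scenario. The scalar closed loop and grid search below are a hypothetical stand-in for the letter's actual data-driven formulation.

```python
def scenario_gain(scenarios, k_grid):
    """Toy scenario program: choose the smallest-magnitude feedback gain k
    such that the scalar closed loop x+ = (a - b*k)*x is stable,
    i.e. |a - b*k| < 1, for EVERY sampled system (a, b) in the fleet.

    Returns None if no gain on the grid works for all scenarios.
    """
    feasible = [k for k in k_grid
                if all(abs(a - b * k) < 1.0 for a, b in scenarios)]
    return min(feasible, key=abs) if feasible else None
```

Because the single returned gain is feasible for all sampled systems, scenario-optimization theory then bounds the probability that it fails on a previously unseen member of the fleet, which is the "out of the box" robustness the letter targets.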
On control of discrete-time state-dependent jump linear systems with probabilistic constraints: A receding horizon approach
In this article, we consider receding horizon control of discrete-time
state-dependent jump linear systems, a particular kind of stochastic switching
system, subject to possibly unbounded random disturbances and probabilistic
state constraints. Due to the nature of the dynamical system and the
constraints, we consider a one-step receding horizon. Using the inverse
cumulative distribution function, we convert the probabilistic state
constraints into deterministic constraints and obtain a tractable
deterministic receding horizon control problem. We consider a receding horizon
control law with a linear state-feedback term and an admissible offset term.
We ensure mean-square boundedness of the state variable by solving linear
matrix inequalities off-line, and solve the receding horizon control problem
on-line for the control offset terms. We illustrate the overall approach by
applying it to a macroeconomic system.
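The inverse-CDF conversion described above can be sketched for the Gaussian case: a chance constraint P(x + w <= b) >= 1 - alpha with w ~ N(0, sigma^2) is equivalent to the deterministic constraint x <= b - sigma * Phi^{-1}(1 - alpha). The scalar setting and parameter values below are illustrative, not the paper's model.

```python
from statistics import NormalDist

def deterministic_bound(b: float, sigma: float, alpha: float) -> float:
    """Tighten the bound b so that x + w <= b holds with probability
    at least 1 - alpha, where w ~ N(0, sigma^2):
        x <= b - sigma * Phi^{-1}(1 - alpha)
    Phi^{-1} is the inverse CDF (quantile function) of the standard normal.
    """
    return b - sigma * NormalDist().inv_cdf(1.0 - alpha)
```

Smaller alpha (stricter constraint satisfaction) yields a tighter deterministic bound, recovering the original bound b as alpha approaches 0.5 for this symmetric, zero-mean noise.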