Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letter
Integral MRAC with Minimal Controller Synthesis and bounded adaptive gains: The continuous-time case
Model reference adaptive controllers designed via the Minimal Control Synthesis (MCS) approach are a viable solution for controlling plants affected by parameter uncertainty, unmodelled dynamics, and disturbances. Despite its effectiveness in imposing the required reference dynamics, the approach can occasionally suffer from a drift of the adaptive gains induced by external disturbances, which can eventually lead to closed-loop instability or degrade tracking performance. This problem has recently been addressed for this class of adaptive algorithms in the discrete-time case and for square-integrable perturbations by using a parameter projection strategy [1]. In this paper we systematically tackle this issue for continuous-time MCS adaptive systems with integral action by enhancing the adaptive mechanism not only with a parameter projection method, but also by embedding a σ-modification strategy. The former is used to preserve convergence to zero of the tracking error when the disturbance is bounded and L2, while the latter guarantees global uniform ultimate boundedness under continuous L∞ disturbances. In both cases, the proposed control schemes ensure boundedness of all the closed-loop signals. The strategies are numerically validated on systems subject to different kinds of disturbances. In addition, an electrical power circuit is used to show the applicability of the algorithms to engineering problems requiring precise tracking of a reference profile over a long time range despite disturbances, unmodelled dynamics, and parameter uncertainty.
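The σ-modification idea — adding a leakage term to the adaptive law so that the gains stay bounded under persistent bounded disturbances — can be illustrated on a scalar MRAC problem. This is a minimal sketch under assumed plant and gain values; the paper's MCS scheme additionally includes integral action and parameter projection.

```python
import math

def simulate(T=20.0, dt=1e-3, gamma=2.0, sigma=0.2):
    """Scalar MRAC with a sigma-modification (leakage) adaptive law.
    The plant, reference model, and gains are illustrative assumptions."""
    a, b = 1.0, 1.0            # unknown, open-loop unstable plant: x' = a*x + b*u + d
    am, bm = -2.0, 2.0         # stable reference model: xm' = am*xm + bm*r
    x = xm = kx = kr = 0.0     # plant state, model state, adaptive gains
    for i in range(int(T / dt)):
        t = i * dt
        r = 1.0                            # step reference
        d = 0.2 * math.sin(t)              # bounded (L-infinity) disturbance
        e = x - xm                         # tracking error
        u = kx * x + kr * r                # adaptive control law
        # Leakage terms -sigma*kx, -sigma*kr prevent gain drift under d(t)
        kx += dt * (-gamma * e * x - sigma * kx)
        kr += dt * (-gamma * e * r - sigma * kr)
        x += dt * (a * x + b * u + d)      # forward-Euler plant step
        xm += dt * (am * xm + bm * r)      # reference model step
    return x, xm, kx, kr
```

Without the leakage terms, a persistent non-square-integrable disturbance can drive the integrated gain updates to drift without bound; with them, all closed-loop signals remain ultimately bounded at the cost of a small bias in the converged gains.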
Recent Sikorsky R and D progress
The recent activities and progress in four specific areas of Sikorsky's research and development program are summarized. Since the beginning of the S-76 design in 1974, Sikorsky has been aggressively developing the technology for using composite materials in helicopter design. Four specific topics are covered: advanced cockpit/controller efforts, fly-by-wire controls on RSRA/X-Wing, vibration control via higher harmonic control, and main rotor aerodynamic improvements.
Probability density estimation with tunable kernels using orthogonal forward regression
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact yet accurate density estimates.
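The model form underlying the tunable-kernel estimate — a sparse Gaussian mixture in which each kernel carries its own center vector and diagonal covariance, and the mixing weights are nonnegative and sum to one — can be sketched as follows. Names are illustrative; the orthogonal forward regression and MNQP steps that actually select the kernels and weights are omitted.

```python
import math

def gauss(x, mu, var):
    """Diagonal-covariance Gaussian kernel value at point x."""
    q = sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))
    norm = math.prod(2.0 * math.pi * vi for vi in var) ** 0.5
    return math.exp(-0.5 * q) / norm

def mixture_density(x, weights, centers, variances):
    """Tunable-kernel density model: p(x) = sum_k w_k * N(x; mu_k, V_k),
    where each kernel has its own center mu_k and diagonal covariance V_k,
    and the weights w_k are nonnegative and sum to one. Unlike the
    fixed-kernel (Parzen-style) model, centers need not be data points
    and variances need not be shared across kernels."""
    return sum(w * gauss(x, mu, var)
               for w, mu, var in zip(weights, centers, variances))
```

Because the weights form a convex combination and each kernel integrates to one, the mixture is automatically a valid density, which is exactly the property the MNQP weight update preserves.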