678 research outputs found

    Controlling Chaos Faster

    Predictive Feedback Control is an easy-to-implement method to stabilize unknown unstable periodic orbits in chaotic dynamical systems. It is severely limited, however, because its asymptotic convergence speed decreases with stronger instabilities, which in turn are typical of larger target periods, making it harder to effectively stabilize periodic orbits of large period. Here, we study stalled chaos control, where the application of control is stalled to make use of the chaotic, uncontrolled dynamics, and introduce an adaptation paradigm to overcome this limitation and speed up convergence. This modified control scheme not only stabilizes more periodic orbits than the original Predictive Feedback Control but also speeds up convergence for typical chaotic maps, as illustrated in both theory and application. The proposed adaptation scheme provides a way to tune parameters online, yielding a broadly applicable, fast chaos control that converges reliably, even for periodic orbits of large period.
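    The basic Predictive Feedback Control loop is indeed short enough to sketch. The following toy (not the paper's stalled or adaptive scheme; the map, gain K = -0.5, and parameter r = 3.9 are illustrative choices) stabilizes the unstable fixed point, a period-1 orbit, of the chaotic logistic map:

    ```python
    # Predictive Feedback Control (PFC) sketch on the logistic map
    # f(x) = r*x*(1-x).  The control term K*(f(x) - x) vanishes exactly on
    # the periodic orbit, so the scheme is non-invasive.  Illustrative toy
    # only: K = -0.5 is hand-picked so the closed-loop multiplier
    # (1+K)*f'(x*) - K is contractive for r = 3.9.

    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    def pfc_step(x, K=-0.5, r=3.9):
        # Feed back the predicted one-step change f(x) - x.
        fx = logistic(x, r)
        return fx + K * (fx - x)

    def run_pfc(x0=0.3, steps=100):
        x = x0
        for _ in range(steps):
            x = pfc_step(x)
        return x

    r = 3.9
    x_star = 1.0 - 1.0 / r   # the map's unstable fixed point
    # Without control |f'(x_star)| = |2 - r| = 1.9 > 1, so iterates of the
    # bare map never settle; with PFC the orbit converges to x_star.
    final = run_pfc()
    ```

    A convergence check such as `abs(run_pfc() - x_star) < 1e-9` illustrates the asymptotic-rate issue the abstract describes: the closer the closed-loop multiplier is to 1 in magnitude, the slower this loop converges.
    
    
    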

    Implementation Challenges for Multivariable Control: What You Did Not Learn in School

    Multivariable control allows controller designs that can provide decoupled command tracking and robust performance in the presence of modeling uncertainties. Although the last two decades have seen extensive development of multivariable control theory and example applications to complex systems in software/hardware simulations, there are no production flying systems, aircraft or spacecraft, that use multivariable control. This is because of the tremendous challenges associated with implementing such multivariable control designs. Unfortunately, school curricula do not provide sufficient time to expose students to these implementation challenges. The objective of this paper is to share the lessons learned by a practitioner of multivariable control in applying modern control theory to the Integrated Flight Propulsion Control (IFPC) design for an advanced Short Take-Off Vertical Landing (STOVL) aircraft simulation.

    Quantum extended crystal PDE's

    Our recent results on {\em extended crystal PDE's} are generalized to PDE's in the category $\mathfrak{Q}_S$ of quantum supermanifolds. Then obstructions to the existence of global quantum smooth solutions for such equations are obtained by using algebraic topological techniques. Applications to the quantum super Yang-Mills equations are considered in detail. Furthermore, our geometric theory of stability of PDE's and their solutions is also generalized to quantum extended crystal PDE's. In this way we are able to identify quantum equations whose global solutions are stable at finite times. These results are also extended to quantum singular (super)PDE's, introducing {\em quantum extended crystal singular (super) PDE's}. Comment: 43 pages, 1 figure

    Algebraic geometric methods for the stabilizability and reliability of multivariable and of multimode systems

    The extent to which feedback can alter the dynamic characteristics (e.g., instability, oscillations) of a control system, possibly operating in one or more modes (e.g., failure versus nonfailure of one or more components), is examined.
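    A minimal sketch of the multimode stabilizability question, with hypothetical numbers not taken from the report: a scalar discrete-time plant must be stabilized by one fixed feedback gain in both a nominal mode and a partial-actuator-failure mode.

    ```python
    # Toy multimode stabilizability check (hypothetical plant data):
    # x[k+1] = a*x[k] + b*u[k] with state feedback u = -K*x.  In the
    # failure mode the actuator effectiveness b is halved; a single gain K
    # must place the closed-loop pole a - b*K inside the unit disc in
    # EVERY mode for the feedback to be reliably stabilizing.

    def closed_loop_pole(a, b, K):
        return a - b * K

    def stabilizes_all_modes(a, bs, K):
        # Stable in every mode iff each closed-loop pole has magnitude < 1.
        return all(abs(closed_loop_pole(a, b, K)) < 1.0 for b in bs)

    def simulate(a, b, K, x0=1.0, steps=50):
        x = x0
        for _ in range(steps):
            x = (a - b * K) * x
        return x

    a = 1.5                # open-loop unstable: |a| > 1
    modes = [1.0, 0.5]     # nominal and failed actuator effectiveness
    K = 2.0                # lies in the intersection of per-mode gain ranges
    ok = stabilizes_all_modes(a, modes, K)
    ```

    Per mode, stability requires |1.5 - K| < 1 (nominal) and |1.5 - 0.5*K| < 1 (failure), so only gains in the intersection (1, 2.5) stabilize both; a gain like K = 0.6 stabilizes the nominal mode but not the failed one.
    
    
    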

    The Whirling Blade and the Steaming Cauldron

    This dissertation applies recent theoretical developments in control to two practical examples. The first example is control of the primary circuit of a pressurized water nuclear reactor. This is an interesting example because the plant is complex and its dynamics vary greatly over the operating range of interest. The second example is a thrust-vectored ducted fan engine, a nonlinear flight control experiment at Caltech. The main part of this dissertation is the application of linear parameter-dependent control techniques to the examples. The synthesis technique is based on the solution of linear matrix inequalities (LMIs) and produces a controller which achieves specified performance against the worst-case time variation of measurable parameters entering the plant in a linear fractional manner. Thus the plant can have widely varying dynamics over the operating range, a quality possessed by both examples. The controllers designed with these methods perform extremely well and are compared to H∞, gain-scheduled, and nonlinear controllers. Additionally, an in-depth examination of the model of the ducted fan is performed, including system identification. From this work, we proceed to apply various techniques to examine what they can tell us in the context of a practical example. The primary technique is LMI-based model validation. The contribution this dissertation makes is to show that parameter-dependent control techniques can be applied with great effectiveness to practical applications. Moreover, the trade-off between modelling and controller performance is examined in some detail. Finally, we demonstrate the applicability of recent model validation techniques in practice, and discuss stabilizability issues.
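    Full LMI-based synthesis needs an SDP solver, but the underlying analysis idea, a common quadratic Lyapunov function certified at the vertices of a polytopic parameter-dependent plant, can be sketched with hypothetical 2x2 vertex matrices (not the dissertation's reactor or ducted-fan models):

    ```python
    # Quadratic-stability certificate for a polytopic LPV system:
    #   xdot = A(theta) x,  A(theta) = theta*A1 + (1-theta)*A2, theta in [0,1].
    # One symmetric P > 0 satisfying Ai^T P + P Ai < 0 at BOTH vertices
    # makes V = x^T P x a Lyapunov function for every admissible parameter
    # trajectory, however fast theta varies.  Vertex matrices are
    # hypothetical; here we only VERIFY a candidate P found offline by
    # solving the Lyapunov equation A1^T P + P A1 = -I by hand.

    def mat_mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def transpose(X):
        return [[X[j][i] for j in range(2)] for i in range(2)]

    def lyap_form(A, P):
        # Returns the symmetric matrix S = A^T P + P A.
        AtP = mat_mul(transpose(A), P)
        PA = mat_mul(P, A)
        return [[AtP[i][j] + PA[i][j] for j in range(2)] for i in range(2)]

    def is_pos_def(S):
        # Sylvester's criterion for a symmetric 2x2 matrix.
        return S[0][0] > 0 and S[0][0] * S[1][1] - S[0][1] * S[1][0] > 0

    def is_neg_def(S):
        return is_pos_def([[-s for s in row] for row in S])

    A1 = [[0.0, 1.0], [-2.0, -3.0]]   # stable vertex (hypothetical)
    A2 = [[0.0, 1.0], [-4.0, -3.0]]   # stable vertex (hypothetical)
    P = [[1.25, 0.25], [0.25, 0.25]]  # candidate common Lyapunov matrix

    quadratically_stable = (is_pos_def(P)
                            and is_neg_def(lyap_form(A1, P))
                            and is_neg_def(lyap_form(A2, P)))
    ```

    Checking the inequality only at the vertices suffices because A(theta) is affine in theta, so the Lyapunov form is a convex combination of the two vertex forms.
    
    
    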

    Learning Stable Koopman Models for Identification and Control of Dynamical Systems

    Learning models of dynamical systems from data is a widely studied problem in control theory and machine learning. One recent approach for modelling nonlinear systems considers the class of Koopman models, which embed the nonlinear dynamics in a higher-dimensional linear subspace. Learning a Koopman embedding would allow for the analysis and control of nonlinear systems using tools from linear systems theory. Many recent methods have been proposed for data-driven learning of such Koopman embeddings, but most of these methods do not consider the stability of the Koopman model. Stability is an important and desirable property for models of dynamical systems. Unstable models tend to be non-robust to input perturbations and can produce unbounded outputs, which are both undesirable when the model is used for prediction and control. In addition, recent work has shown that stability guarantees may act as a regularizer for model fitting. As such, a natural direction would be to construct Koopman models with inherent stability guarantees. Two new classes of Koopman models are proposed that bridge the gap between Koopman-based methods and learning stable nonlinear models. The first model class is guaranteed to be stable, while the second is guaranteed to be stabilizable with an explicit stabilizing controller that renders the model stable in closed-loop. Furthermore, these models are unconstrained in their parameter sets, thereby enabling efficient optimization via gradient-based methods. Theoretical connections between the stability of Koopman models and forms of nonlinear stability such as contraction are established. To demonstrate the effect of the stability guarantees, the stable Koopman model is applied to a system identification problem, while the stabilizable model is applied to an imitation learning problem. Experimental results show that the proposed models achieve better performance than prior methods without stability guarantees.
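    The lifting idea behind Koopman models has a classic closed-form illustration (a textbook toy, not the paper's learned models): for the nonlinear system x1+ = a*x1, x2+ = b*x2 + c*x1^2, the observables (x1, x2, x1^2) evolve exactly linearly, and the lifted matrix is stable whenever |a| < 1 and |b| < 1.

    ```python
    # Exact Koopman embedding of a toy nonlinear system (illustrative only;
    # the paper LEARNS such embeddings from data with stability guarantees):
    #   x1[k+1] = a*x1[k]
    #   x2[k+1] = b*x2[k] + c*x1[k]**2
    # In the lifted coordinates z = (x1, x2, x1^2) the dynamics are linear,
    #   z[k+1] = Kz z[k],  Kz = [[a, 0, 0], [0, b, c], [0, 0, a*a]],
    # with eigenvalues {a, b, a^2} strictly inside the unit circle here, so
    # the lifted (and hence the nonlinear) trajectories stay bounded.

    A, B, C = 0.9, 0.5, 1.0

    def nonlinear_step(x1, x2):
        return A * x1, B * x2 + C * x1 * x1

    def koopman_step(z):
        Kz = [[A, 0.0, 0.0], [0.0, B, C], [0.0, 0.0, A * A]]
        return [sum(Kz[i][j] * z[j] for j in range(3)) for i in range(3)]

    def simulate(steps=30, x0=(1.0, 0.5)):
        x1, x2 = x0
        z = [x1, x2, x1 * x1]           # lift the initial state
        for _ in range(steps):
            x1, x2 = nonlinear_step(x1, x2)
            z = koopman_step(z)
        return (x1, x2), z

    (x_final, z_final) = simulate()
    # The first two lifted coordinates reproduce the nonlinear state exactly.
    ```

    Most systems admit no finite-dimensional exact lifting like this one, which is why the paper's data-driven (and stability-constrained) embeddings are needed in practice.
    
    
    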