
    Discrete-time integral MRAC with minimal controller synthesis and parameter projection

    Model reference adaptive controllers with Minimal Control Synthesis are effective control algorithms that guarantee asymptotic convergence of the tracking error to zero not only for disturbance-free uncertain linear systems, but also for highly nonlinear plants with unknown parameters, unmodelled dynamics, and perturbations. However, a drift in the adaptive gains may occasionally arise, which can eventually lead to closed-loop instability. In this paper, we address this key issue for discrete-time systems under L2 disturbances using a parameter projection algorithm. A proof of boundedness of all the closed-loop signals is provided, while the tracking error is shown to converge asymptotically to zero. We also show the applicability of the adaptive algorithm to digitally controlled continuous-time plants. The proposed algorithm is numerically validated on a discrete-time LTI system subject to parameter uncertainty, parameter variations, and L2 disturbances. Finally, as a possible engineering application of this novel adaptive strategy, the control of a highly nonlinear electromechanical actuator is considered. (C) 2015 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
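The parameter-projection idea this abstract relies on can be sketched in a few lines. The gradient-style gain update, the adaptation rate, and the gain bound theta_max below are illustrative assumptions, not the paper's exact algorithm; the point is only how projection prevents the gain drift described above:

```python
import numpy as np

def projected_gain_update(theta, x, e, gamma=0.1, theta_max=10.0):
    """One discrete-time adaptive-gain step with parameter projection.

    theta     : current adaptive gain vector
    x         : regressor (measured plant signals) at this step
    e         : tracking error at this step
    gamma     : adaptation rate (illustrative value)
    theta_max : assumed known bound on the gain norm; the projection
                keeps ||theta|| <= theta_max, preventing gain drift
    """
    theta_new = theta + gamma * e * x        # gradient-style adaptation
    norm = np.linalg.norm(theta_new)
    if norm > theta_max:                     # project back onto the ball
        theta_new = theta_new * (theta_max / norm)
    return theta_new
```

Under an L2 disturbance the raw update can ratchet theta upward without bound; the projection step clips every excursion outside the assumed bound while leaving interior updates untouched.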

    Integral MRAC with Minimal Controller Synthesis and bounded adaptive gains: The continuous-time case

    Model reference adaptive controllers designed via the Minimal Control Synthesis (MCS) approach are a viable solution for controlling plants affected by parameter uncertainty, unmodelled dynamics, and disturbances. Despite their effectiveness in imposing the required reference dynamics, a drift of the adaptive gains, which can eventually lead to closed-loop instability or degrade tracking performance, may occasionally be induced by external disturbances. This problem has recently been addressed for this class of adaptive algorithms in the discrete-time case and for square-integrable perturbations by using a parameter projection strategy [1]. In this paper we systematically tackle this issue for continuous-time MCS adaptive systems with integral action by enhancing the adaptive mechanism not only with a parameter projection method, but also by embedding a σ-modification strategy. The former is used to preserve convergence to zero of the tracking error when the disturbance is bounded and L2, while the latter guarantees global uniform ultimate boundedness under continuous L∞ disturbances. In both cases, the proposed control schemes ensure boundedness of all the closed-loop signals. The strategies are numerically validated by considering systems subject to different kinds of disturbances. In addition, an electrical power circuit is used to show the applicability of the algorithms to engineering problems requiring precise tracking of a reference profile over a long time range despite disturbances, unmodelled dynamics, and parameter uncertainty.
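The σ-modification mentioned above adds a leakage term to the adaptive law so the gain cannot drift under a persistent bounded disturbance. A minimal scalar sketch, with an illustrative plant, reference model, and gains (not the paper's MCS scheme), Euler-integrated:

```python
import numpy as np

# Scalar sketch of a sigma-modification adaptive law.  Plant x' = a*x + u + d
# with unknown a; control u = -theta*x + r; reference model xm' = -am*xm + r.
# The leak term -sigma*gamma*theta keeps theta bounded under a persistent
# bounded (L-infinity) disturbance d, at the price of a small residual
# tracking error (uniform ultimate boundedness rather than convergence).
def simulate_sigma_mod(a=1.0, am=2.0, gamma=5.0, sigma=0.1, dt=1e-3, T=20.0):
    x, xm, theta, r = 0.0, 0.0, 0.0, 1.0
    for k in range(int(T / dt)):
        d = 0.2 * np.sin(0.5 * k * dt)                 # bounded disturbance
        e = x - xm
        u = -theta * x + r
        x += dt * (a * x + u + d)
        xm += dt * (-am * xm + r)
        theta += dt * (gamma * e * x - sigma * gamma * theta)  # leaky update
    return x, xm, theta
```

With sigma = 0 and the same disturbance the gain integrates the disturbance-driven error indefinitely; the leak trades that drift for a bounded residual error, matching the ultimate-boundedness guarantee claimed in the abstract.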

    Adaptive Systems: History, Techniques, Problems, and Perspectives

    We survey some of the rich history of control over the past century with a focus on the major milestones in adaptive systems. We review classic methods and examples in adaptive linear systems for both control and observation/identification. The focus is on linear plants to facilitate understanding, but we also provide the tools necessary for many classes of nonlinear systems. We discuss practical issues encountered in making these systems stable and robust with respect to additive and multiplicative uncertainties. We discuss perspectives on adaptive systems and their role in various fields. Finally, we present some of the ongoing research and open problems in the field of adaptive control.

    Adaptive control of plants with input saturation: an approach for performance improvement

    In this work, a new method for adaptive control of plants with input saturation is presented. The new anti-windup scheme can be shown to result in bounded closed-loop states under certain conditions on the plant and the initial closed-loop states. As an improvement over existing methods in adaptive control, a new degree of freedom is introduced in the control scheme. It allows the closed-loop response to be improved when input saturation is actually encountered, without changing the closed-loop performance for unconstrained inputs.
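One common anti-windup idea in adaptive control, which this abstract builds on, is to feed the saturation deficit back into the reference model so the adaptive gain does not wind up while the actuator is at its limit. The scalar sketch below illustrates that mechanism only; the plant, the hedging gain k_aw (standing in for the kind of extra degree of freedom the abstract mentions), and all numerical values are assumptions, not the paper's scheme:

```python
def saturate(u, u_max=1.0):
    return max(-u_max, min(u_max, u))

# One Euler step of a hedged adaptive loop: the reference model receives the
# deficit (u_sat - u), so the tracking error e used for adaptation reflects
# only what the saturated actuator could actually deliver.
def step(x, xm, theta, r, a=-0.5, am=2.0, gamma=2.0, k_aw=1.0, dt=1e-3):
    u = -theta * x + r                      # unconstrained adaptive control
    u_sat = saturate(u)                     # what the actuator delivers
    e = x - xm
    x_next = x + dt * (a * x + u_sat)
    xm_next = xm + dt * (-am * xm + r + k_aw * (u_sat - u))   # hedged model
    theta_next = theta + dt * gamma * e * x
    return x_next, xm_next, theta_next
```

Without the k_aw term, the model keeps demanding the unconstrained response during saturation, the error stays large, and the gain integrates it; with it, adaptation effectively pauses while the input is clipped.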

    Adaptive Control of a Generic Hypersonic Vehicle

    This paper presents an adaptive-augmented, gain-scheduled baseline LQR-PI controller applied to the Road Runner six-degree-of-freedom generic hypersonic vehicle model. Uncertainty in control effectiveness, longitudinal center-of-gravity location, and aerodynamic coefficients is introduced in the model, as well as sensor bias and noise, and input time delays. The performance of the baseline controller is compared to the same design augmented with one of two different model-reference adaptive controllers: a classical open-loop reference model design, and a modified closed-loop reference model design. Both adaptive controllers show improved command tracking and stability over the baseline controller when subject to these uncertainties. The closed-loop reference model controller offers the best performance, tolerating a reduced control effectiveness of 50%, a rearward center-of-gravity shift of up to -1.6 feet (11% of vehicle length), aerodynamic coefficient uncertainty scaled to 4× the nominal value, and a sensor bias of up to +3.2 degrees on the sideslip angle measurement. The closed-loop reference model adaptive controller maintains at least 70% of the delay margin provided by the robust baseline design when subject to varying levels of uncertainty, tolerating input time delays of 15-41 ms during 3-degree angle-of-attack doublet and 80-degree roll step commands. Approved for Public Release; Distribution Unlimited. Case Number 88ABW-2013-3392.
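The distinction between the open-loop and closed-loop reference model designs compared above fits in a few lines: the closed-loop variant feeds the tracking error back into the reference model through an observer-like gain. A scalar sketch, with an illustrative unstable plant and gains that are assumptions rather than the paper's vehicle model:

```python
# Scalar open- vs closed-loop reference model MRAC.  With ell = 0 this is the
# classical open-loop reference model; ell > 0 lets the model "chase" the
# plant, which tames transient oscillations at high adaptation gain gamma.
def run_crm(ell=0.0, a=1.0, am=4.0, gamma=50.0, dt=1e-4, T=5.0):
    x, xm, theta = 1.0, 0.0, 0.0
    r = 1.0
    for _ in range(int(T / dt)):
        e = x - xm
        u = -theta * x + am * r                  # matching gain theta* = a + am
        x += dt * (a * x + u)
        xm += dt * (-am * xm + am * r + ell * e) # ell = 0: open-loop model
        theta += dt * gamma * e * x
    return x, xm, theta
```

Both variants drive the tracking error to zero here; the difference shows up in the transient, where the error feedback smooths the high-gain oscillations that the open-loop model exhibits.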

    Adapt-to-learn policy transfer in reinforcement learning and deep model reference adaptive control

    Adaptation and learning from exploration have been key to biological learning: humans and animals do not learn every task in isolation; rather, they quickly adapt learned behaviors between similar tasks and learn new skills when presented with new situations. Inspired by this, adaptation has been an important direction of research in control, in the form of adaptive controllers. However, adaptive controllers such as the model reference adaptive controller are mainly model-based and do not rely on exploration; instead they make informed decisions by exploiting the model's structure. Such controllers are therefore characterized by high sample efficiency and stability guarantees, making them suitable for safety-critical systems. On the other hand, we have learning-based optimal control algorithms like reinforcement learning. Reinforcement learning is a trial-and-error method, where an agent explores the environment by taking random actions and maximizing the likelihood of those particular actions that result in a higher return. However, these exploration techniques are expected to fail many times before discovering the optimal policy; they are therefore highly sample-expensive, lack stability guarantees, and hence are not suitable for safety-critical systems. This thesis presents control algorithms for robotics in which the best of both worlds, ``adaptation'' and ``learning from exploration'', are brought together in new algorithms that can perform better than their conventional counterparts. In this effort, we first present an Adapt-to-Learn policy transfer algorithm, where we use control-theoretic ideas of adaptation to transfer policy between two related but different tasks using the policy gradient method of reinforcement learning. Efficient and robust policy transfer remains a key challenge in reinforcement learning.
Policy transfer through warm initialization, imitation, or interaction over a large set of agents with randomized instances has been commonly applied to solve a variety of Reinforcement Learning (RL) tasks. However, this is far from how behavior transfer happens in the biological world. Here, we seek to answer the question: will learning to combine an adaptation reward with the environmental reward lead to a more efficient transfer of policies between domains? We introduce a principled mechanism that can ``Adapt-to-Learn'', that is, adapt the source policy to learn to solve a target task with significant transition differences and uncertainties. Through theory and experiments, we show that our method leads to a significantly reduced sample complexity of transferring policies between tasks. In the second part of this thesis, information-enabled learning-based adaptive controllers, the ``Gaussian Process adaptive controller using Model Reference Generative Network'' (GP-MRGeN) and the ``Deep Model Reference Adaptive Controller'' (DMRAC), are presented. Model reference adaptive control (MRAC) is a widely studied adaptive control methodology that aims to ensure that a nonlinear plant with significant model uncertainty behaves like a chosen reference model. MRAC methods adapt to changes by representing the system uncertainties as weighted combinations of known nonlinear functions and using a weight update law that moves the network weights in the direction of minimizing the instantaneous tracking error. However, most MRAC adaptive controllers use a shallow network and only instantaneous data for adaptation, restricting their representation capability and limiting their performance under fast-changing uncertainties and faults in the system. In this thesis, we propose a Gaussian process based adaptive controller called GP-MRGeN.
We present a new approach to the online supervised training of GP models using a new architecture termed the Model Reference Generative Network (MRGeN). Our architecture is loosely inspired by the recent success of generative neural network models, and our contributions ensure that the inclusion of such a model in closed-loop control does not affect the stability properties. By using a generative network, the GP-MRGeN controller is capable of achieving higher adaptation rates without losing the robustness properties of the controller, and is hence suitable for mitigating faults in fast-evolving systems. Further, in this thesis, we present a new neuroadaptive architecture: deep neural network based model reference adaptive control. This architecture utilizes deep neural network representations for modeling significant nonlinearities while marrying them with the boundedness guarantees that characterize MRAC-based controllers. We demonstrate through simulations and analysis that DMRAC can subsume previously studied learning-based MRAC methods, such as concurrent learning and GP-MRAC. This makes DMRAC a highly powerful architecture for high-performance control of nonlinear systems with long-term learning properties. Theoretical proofs of the controller's generalization over unseen data points and the boundedness of the tracking error are also presented. Experiments with a quadrotor vehicle demonstrate the controller's performance in achieving reference model tracking in the presence of significant matched uncertainties. A software and communication architecture is designed to ensure online real-time inference of the deep network on a high-bandwidth, computation-limited platform to achieve these results. These results demonstrate the efficacy of deep networks for high-bandwidth closed-loop attitude control of unstable and nonlinear robots operating in adverse situations.
We expect that this work will benefit other closed-loop deep-learning control architectures for robotics.
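The shallow-network MRAC baseline that DMRAC generalizes — uncertainty represented as a weighted combination of known features, with weights driven by the instantaneous tracking error — can be sketched as follows. The scalar plant, the feature vector phi, the "true" weights, and all gains are illustrative assumptions, not the thesis's quadrotor setup:

```python
import numpy as np

# Shallow-network MRAC sketch: matched uncertainty nu(x) = W_true . phi(x)
# is cancelled by the estimate W . phi(x); W follows the gradient-style law
# W' = gamma * e * phi(x) derived from a quadratic Lyapunov function.
# DMRAC replaces the fixed phi with learned deep-network features.
def run_mrac(T=20.0, dt=1e-3, a=1.0, am=2.0, gamma=10.0):
    W_true = np.array([0.5, -0.3])            # unknown "ideal" weights
    phi = lambda x: np.array([np.sin(x), np.cos(x)])
    W = np.zeros(2)
    x, xm, r = 0.0, 0.0, 1.0
    for _ in range(int(T / dt)):
        e = x - xm
        nu = W_true @ phi(x)                  # matched uncertainty
        u = -(a + am) * x + am * r - W @ phi(x)
        x += dt * (a * x + u + nu)
        xm += dt * (-am * xm + am * r)
        W += dt * gamma * e * phi(x)          # instantaneous-error update
    return x, xm, W
```

The limitation the thesis targets is visible in this sketch: W moves only along the current phi(x), so a representation error or a fast-changing uncertainty outside the span of the fixed features cannot be tracked.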

    Research on optimal control, stabilization and computational algorithms for aerospace applications

    The research carried out in the areas of optimal control and estimation theory and their applications under this grant is reviewed. A listing of the 257 publications that document the research results is presented.

    Development of adaptive control methodologies and algorithms for nonlinear dynamic systems based on u-control framework

    Inspired by U-model based control system design (also called U-control system design), this study is divided into three parts. The first is a U-model based control system for unstable non-minimum-phase systems. Pulling theorems are proposed that apply zero-pulling filters and pole-pulling filters to remove the unstable non-minimum-phase characteristics of the plant model; these filters are derived from a customised desired minimum-phase plant model. The remaining controller design can be any classic control scheme or a U-model based control scheme, and the difference between the two for unstable non-minimum-phase plants is shown in the case studies. Secondly, the U-model framework is proposed to integrate direct model reference adaptive control with normalised MIT rules for nonlinear dynamic systems. U-model based direct model reference adaptive control is an enhanced direct model reference adaptive control that expands the application range from linear to nonlinear systems: the estimated parameter of the nonlinear dynamic system is placed as the estimated gain of a customised linear virtual plant model with normalised MIT rules, where the virtual plant model takes the same form as the reference model. Moreover, the U-model framework is designed for the nonlinear dynamic system via root inversion. Thirdly, with a structure similar to the U-model based direct model reference adaptive control with normalised MIT rules, the U-model based direct model reference adaptive control with Lyapunov algorithms also proposes a linear virtual plant model, estimating and adapting the relevant parameters of the nonlinear plant model as the estimated gain by Lyapunov algorithms.
Root inversion, such as the Newton-Raphson algorithm, provides a simple and concise method to obtain the inverse of the nonlinear system without the estimated gain. The proposed U-model based direct control system design approach is applied to develop the controller for a nonlinear system so as to implement linear adaptive control. Computational experiments are presented to validate the effectiveness and efficiency of the proposed U-model based direct model reference adaptive control approach and to show that it stabilises the system with satisfactory performance, as when applied to a linear plant model.
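The Newton-Raphson root inversion at the core of the U-model approach solves, at each sample, for the control input u that makes the plant's input map hit the desired output. A minimal sketch, where the nonlinear map f and its derivative are illustrative placeholders for the U-model polynomial of the actual plant:

```python
# U-model style root inversion: find u such that f(u) = y_d by Newton-Raphson.
# f is the (state-dependent) nonlinear input map of the plant at the current
# sample; df is its derivative with respect to u.
def newton_invert(f, df, y_d, u0=0.0, tol=1e-10, max_iter=50):
    u = u0
    for _ in range(max_iter):
        r = f(u) - y_d
        if abs(r) < tol:
            break
        u -= r / df(u)          # standard Newton step
    return u
```

For example, inverting the map f(u) = u**3 + u for a desired output of 10 returns the control u = 2. In the U-control framework this inversion is what lets the outer-loop controller be designed by linear methods: the inverted plant behaves as (approximately) unity.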

    Relaxing Fundamental Assumptions in Iterative Learning Control

    Iterative learning control (ILC) is perhaps best described as an open-loop feedforward control technique where the feedforward signal is learned through repetition of a single task. As the name suggests, given a dynamic system operating on a finite time horizon with the same desired trajectory, ILC aims to iteratively construct the inverse image (or its approximation) of the desired trajectory to improve transient tracking. In the literature, ILC is often interpreted as feedback control in the iteration domain, since learning controllers use information from past trials to drive the tracking error towards zero. However, despite the significant body of literature and powerful features, ILC has yet to reach widespread adoption by the control community, due to several assumptions that restrict its generality when compared to feedback control. In this dissertation, we relax some of these assumptions, mainly the fundamental invariance assumption, move from the idea of learning through repetition to two-dimensional systems, specifically repetitive processes, which appear in the modeling of engineering applications such as additive manufacturing, and sketch out future research directions for increased practicality. We develop an L1 adaptive feedback control based ILC architecture for increased robustness, fast convergence, and high performance under time-varying uncertainties and disturbances. Simulation studies of the behavior of this combined L1-ILC scheme under iteration-varying uncertainties lead us to the robust stability analysis of iteration-varying systems, where we show that these systems are guaranteed to be stable when the ILC update laws are designed to be robust, which can be done using existing methods from the literature.
As a next step beyond the signal-space approach adopted in the analysis of iteration-varying systems, we shift the focus of our work to repetitive processes, and show that the exponential stability of a nonlinear repetitive system is equivalent to that of its linearization, and consequently to the uniform stability of the corresponding state-space matrix.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133232/1/altin_1.pd
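The learning-through-repetition idea described in this abstract reduces, in its simplest P-type form, to replaying the task and correcting the stored input with the previous trial's tracking error. A sketch on an illustrative first-order discrete plant (the plant, learning gain L, and trajectory are assumptions, not the dissertation's setup):

```python
import numpy as np

# P-type ILC on a stable discrete SISO plant x+ = a*x + b*u, y = x.
# Each trial replays the same finite horizon; between trials the stored
# input is corrected with the time-shifted error (u[k] first affects y[k+1]).
# |1 - L*b| < 1 is the classical convergence condition for this update.
def ilc(trials=30, N=50, a=0.3, b=0.5, L=1.0):
    yd = np.sin(np.linspace(0.0, np.pi, N))   # fixed desired trajectory
    u = np.zeros(N)                           # feedforward signal to learn
    err = np.inf
    for _ in range(trials):
        x = 0.0
        y = np.zeros(N)
        for k in range(N):                    # one trial over the horizon
            y[k] = x
            x = a * x + b * u[k]
        e = yd - y
        err = float(np.max(np.abs(e)))
        u[:-1] += L * e[1:]                   # learning update between trials
    return err
```

This sketch also makes the invariance assumption the dissertation relaxes explicit: the plant, horizon, and desired trajectory must be identical on every trial for the contraction argument behind the update to hold.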