348 research outputs found
Relaxing Fundamental Assumptions in Iterative Learning Control
Iterative learning control (ILC) is perhaps best described as an open-loop feedforward control technique in which the feedforward signal is learned through repetition of a single task. As the name suggests, given a dynamic system operating on a finite time horizon with the same desired trajectory, ILC aims to iteratively construct the inverse image (or an approximation of it) of the desired trajectory to improve transient tracking. In the literature, ILC is often interpreted as feedback control in the iteration domain, since learning controllers use information from past trials to drive the tracking error toward zero. However, despite a significant body of literature and powerful features, ILC has yet to reach widespread adoption in the control community because of several assumptions that restrict its generality compared to feedback control. In this dissertation, we relax some of these assumptions, chiefly the fundamental invariance assumption, and move from the idea of learning through repetition to two-dimensional systems, specifically repetitive processes, which appear in the modeling of engineering applications such as additive manufacturing; we also sketch future research directions for increased practicality. We develop an ILC architecture based on L1 adaptive feedback control for increased robustness, fast convergence, and high performance under time-varying uncertainties and disturbances. Simulation studies of this combined L1-ILC scheme under iteration-varying uncertainties lead us to the robust stability analysis of iteration-varying systems, where we show that these systems are guaranteed to be stable when the ILC update laws are designed to be robust, which can be done using existing methods from the literature.
Building on the signal space approach adopted in the analysis of iteration-varying systems, we shift the focus of our work to repetitive processes and show that exponential stability of a nonlinear repetitive system is equivalent to that of its linearization, and consequently to uniform stability of the corresponding state-space matrix.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133232/1/altin_1.pd
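The core mechanism described in this abstract, refining a feedforward signal over repeated trials of the same finite-horizon task, can be sketched with a minimal P-type ILC loop on a toy first-order plant. All plant values and the learning gain below are illustrative assumptions, not taken from the dissertation:

```python
import numpy as np

# Toy first-order plant y[t+1] = a*y[t] + b*u[t]; values are illustrative.
a, b = 0.3, 0.5
T = 50                                        # finite time horizon
ref = np.sin(np.linspace(0.0, 2 * np.pi, T))  # same desired trajectory every trial

def run_trial(u):
    """Run one trial from the same initial condition and return the output."""
    y = np.zeros(T)
    for t in range(T - 1):
        y[t + 1] = a * y[t] + b * u[t]
    return y

u = np.zeros(T)        # feedforward signal, refined across trials
gamma = 1.0            # learning gain; chosen so the lifted error map is a contraction
errors = []
for k in range(30):
    y = run_trial(u)
    e = ref - y
    errors.append(np.max(np.abs(e)))
    # P-type update, shifted one step because u[t] first affects y[t+1]:
    u[:-1] += gamma * e[1:]

print(f"first-trial error {errors[0]:.3f}, last-trial error {errors[-1]:.2e}")
```

With these values the error contracts monotonically from trial to trial, which is the "feedback in the iteration domain" interpretation mentioned above: past-trial error feeds back into the next trial's input.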
Robot visual servoing with iterative learning control.
This paper presents an iterative learning scheme for vision-guided robot trajectory tracking. First, a stability criterion for designing the iterative learning controller is proposed; it can be used for a system with initial resetting error. Using this criterion, the design problem can be converted into finding a positive definite discrete matrix kernel, and a more general form of learning control can be obtained. Then, a three-dimensional (3-D) trajectory tracking system with a single static camera to realize robot movement imitation is presented based on this criterion.
From Model-Based to Data-Driven Discrete-Time Iterative Learning Control
This dissertation presents a series of new results in iterative learning control (ILC) that progress from model-based to data-driven ILC algorithms. ILC is a type of trial-and-error algorithm that learns, through repetitions in practice, to follow a pre-defined finite-time maneuver with high tracking accuracy.
Mathematically, ILC constructs a contraction mapping between the tracking errors of successive iterations, and aims to converge to a tracking accuracy approaching the reproducibility level of the hardware. It produces feedforward commands based on measurements from previous iterations to eliminate tracking errors arising from the bandwidth limitation of the feedback controller, transient responses, model inaccuracies, unknown repeating disturbances, etc.
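The contraction-mapping view can be made concrete in the lifted (super-vector) description: stacking one iteration's samples turns the plant into a lower-triangular Toeplitz matrix G of Markov parameters, and a learning matrix L gives the iteration-domain error map e_{k+1} = (I - G L) e_k. A small numerical check, with illustrative values not taken from the dissertation, shows the classic gap between asymptotic and monotonic convergence:

```python
import numpy as np

T = 40
h = 0.5 * 0.8 ** np.arange(T)   # Markov parameters of an illustrative stable plant

# Lifted plant: lower-triangular Toeplitz matrix built from the impulse response.
G = np.zeros((T, T))
for j in range(T):
    G += np.diag(np.full(T - j, h[j]), -j)

L = 1.2 * np.eye(T)             # simple P-type learning gain (illustrative)
E = np.eye(T) - G @ L           # iteration-domain error map: e_{k+1} = E @ e_k

rho = max(abs(np.linalg.eigvals(E)))   # < 1 => errors converge asymptotically
norm_inf = np.linalg.norm(E, np.inf)   # < 1 => errors decay monotonically (inf-norm)
print(f"spectral radius {rho:.2f}, inf-norm {norm_inf:.2f}")
```

Here the spectral radius is 0.4, so the error eventually converges, but the infinity norm is near 2.8, so large transient error growth is possible before decay; this is exactly the kind of behavior that motivates robustifying the learning law.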
Generally, ILC uses an a priori model to form the contraction mapping that guarantees monotonic decay of the tracking error. However, unmodeled high-frequency dynamics may destabilize the control system. Existing infinite impulse response filtering techniques that stop the learning at such frequencies have initial-condition issues that can cause an otherwise stable ILC law to become unstable. A circulant form of zero-phase filtering for finite-time trajectories is proposed here to avoid such issues. This work addresses the possible lack of stability robustness when ILC uses an imperfect a priori model.
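The idea behind circulant zero-phase filtering (sketched here in a minimal form; the kernel and sizes are illustrative, not the dissertation's design) is that filtering a finite-time trajectory with a circulant matrix built from a symmetric kernel is exactly zero-phase: the circulant's DFT eigenvalues are real, so high frequencies are attenuated without the startup transients of an IIR filter:

```python
import numpy as np

N = 64
# Symmetric low-pass kernel in circulant (wrap-around) index order:
# q[0] is the centre tap, q[1] and q[-1] its immediate neighbours.
q = np.zeros(N)
q[0], q[1], q[-1] = 0.5, 0.25, 0.25

# Eigenvalues of a circulant matrix are the DFT of its first column.
eig = np.fft.fft(q)
# Symmetric kernel => real eigenvalues => zero phase shift at every frequency.
print(np.max(np.abs(eig.imag)))

# Zero-phase filtering of a finite-time signal via circular convolution.
t = np.arange(N)
signal = np.sin(2 * np.pi * t / N) + 0.3 * np.sin(2 * np.pi * 20 * t / N)
filtered = np.real(np.fft.ifft(np.fft.fft(signal) * eig))
```

The low-frequency component passes nearly unchanged while the high-frequency component is attenuated, with no phase lag at any frequency and no initial-condition transient, since the circulant formulation treats the finite trajectory as one full period.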
Besides the computation of feedforward commands, measurements from previous iterations can also be used to update the dynamic model. In other words, as the learning progresses, the dynamic model is iteratively refined from data. This leads to adaptive ILC methods.
An indirect adaptive linear ILC method to speed up the desired maneuver is presented here. The updates of the system model are realized by embedding an observer in ILC to estimate the system Markov parameters. This method can be used to increase productivity or to achieve high tracking accuracy when the desired trajectory is too fast for feedback control to be effective.
When it comes to nonlinear ILC, data is used to update a progression of models along a homotopy, i.e., the ILC method presented in this thesis uses data to repeatedly create bilinear models in a homotopy approaching the desired trajectory. The improvement here makes use of Carleman bilinearized models to capture more nonlinear dynamics, with the potential for faster convergence when compared to existing methods based on linearized models.
The last work presented here uses model-free reinforcement learning (RL) to eliminate the need for an a priori model. It is analogous to direct adaptive control, using data to directly produce the gains in the ILC law without use of a model. An off-policy RL method is first developed by extending a model-free model predictive control method and is then applied in the trial domain for ILC. Adjustments of the ILC learning law and the RL recursion equation for state-value function updates allow the collection of enough data while improving the tracking accuracy without major safety concerns. This algorithm can be seen as a first step toward bridging ILC and RL for nonlinear systems.
Controlled switching in Kalman filtering and iterative learning controls
“Switching is not an uncommon phenomenon in practical systems and processes; for example, power switches open and close, transmissions shift from low gear to high gear, and airplanes cross different layers of air. Switching can be a disaster for a system, since frequent switching between two asymptotically stable subsystems may result in unstable dynamics. On the other hand, switching can benefit a system, since controlled switching is sometimes imposed by designers to achieve desired performance. This encourages the study of system dynamics and performance when undesired switching occurs or controlled switching is imposed. In this research, controlled switching is applied to an estimation process and to a multivariable Iterative Learning Control (ILC) system, and system stability as well as performance under switching are investigated. The first article develops a controlled switching strategy for the estimation of a temporal shift in a Laser Tracker (LT). The shift cannot be measured at all times, so a model-based predictor is adopted for estimation when the measurement is not available, and a Kalman Filter (KF) is used to update the estimate when the measurement is available. With the proposed method, the estimation uncertainty is always bounded between two predefined boundaries. The second article develops a controlled switching method for multivariable ILC systems where only some of the outputs are measured at a time. Zero tracking error cannot be achieved for such systems using standard ILC due to incomplete knowledge of the outputs. With the developed controlled switching, all the outputs are measured in a sequential order, and, with each currently measured output, the standard ILC is executed. Conditions under which convergence to zero tracking error is accomplished with the proposed method are investigated. The proposed method is finally applied to solving a multi-agent coordination problem”--Abstract, page iv
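The switching between a model-based predictor and a Kalman update can be illustrated with a scalar sketch. This is not the thesis's Laser Tracker model; the random-walk model, noise variances, and measurement schedule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar random-walk model of a drifting quantity: x[t+1] = x[t] + w, y = x + v.
Q, R = 0.01, 0.04          # process / measurement noise variances (illustrative)
x_true, x_hat, P = 0.0, 0.0, 1.0
history = []

for t in range(200):
    x_true += rng.normal(0.0, np.sqrt(Q))
    available = (t % 5 == 0)        # measurement arrives only every 5th step

    # Prediction step always runs (model-based predictor between measurements).
    P = P + Q

    if available:
        # Kalman update when a measurement is available.
        y = x_true + rng.normal(0.0, np.sqrt(R))
        K = P / (P + R)
        x_hat = x_hat + K * (y - x_hat)
        P = (1.0 - K) * P
    history.append(P)

# Between measurements the variance grows; each update pulls it back down,
# so the estimation uncertainty settles into a bounded band.
print(min(history), max(history))
```

The error variance P oscillates in a fixed band rather than growing without bound, mirroring the bounded-uncertainty property claimed for the switching estimator.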
Optimization-based iterative learning for precise quadrocopter trajectory tracking
Current control systems regulate the behavior of dynamic systems by reacting to noise and unexpected disturbances as they occur. To improve the performance of such control systems, experience from iterative executions can be used to anticipate recurring disturbances and proactively compensate for them. This paper presents an algorithm that exploits data from previous repetitions in order to learn to precisely follow a predefined trajectory. We adapt the feed-forward input signal to the system with the goal of achieving high tracking performance even in the presence of model errors and other recurring disturbances. The approach is based on a dynamics model that captures the essential features of the system and that explicitly takes system input and state constraints into account. We combine traditional optimal filtering methods with state-of-the-art optimization techniques to obtain an effective and computationally efficient learning strategy that updates the feed-forward input signal according to a customizable learning objective. It is possible to define a termination condition that stops an execution early if the deviation from the nominal trajectory exceeds a given bound. This allows for safe learning that gradually extends the time horizon of the trajectory. We developed a framework for generating arbitrary flight trajectories and for applying the algorithm to highly maneuverable autonomous quadrotor vehicles in the ETH Flying Machine Arena testbed. Experimental results are discussed for selected trajectories and different learning algorithm parameters.
Hybrid intelligent machine systems: design, modeling and control
To further improve the performance of machine systems, mechatronics offers some opportunities. Traditionally, mechatronics deals with how to integrate mechanics and electronics without a systematic approach. This thesis generalizes the concept of mechatronics into a new concept called the hybrid intelligent machine system: a system in which two or more elements combine to play at least one of the roles of sensor, actuator, or control mechanism, and contribute to the system behaviour. The common feature of the hybrid intelligent machine system is thus the presence of two or more entities responsible for the system behaviour, each having its own strengths complementary to the others. The hybrid intelligent machine system is further viewed from the system’s structure, behaviour, function, and principle, which has led to the distinction of (1) the hybrid actuation system, (2) the hybrid motion system (mechanism), and (3) the hybrid control system. This thesis describes a comprehensive study on three hybrid intelligent machine systems. In the case of the hybrid actuation system, the study has developed a control method for the “true” hybrid actuation configuration, in which the constant velocity motor is not “mimicked” by the servomotor as is done in the literature. In the case of the hybrid motion system, the study has resulted in a novel mechanism structure based on the compliant mechanism, which allows the micro- and macro-motions to be integrated within a common framework; it should be noted that the existing designs in the literature all take a serial structure for micro- and macro-motions. In the case of the hybrid control system, a novel family of control laws is developed, primarily based on iterative learning of the previous driving torque (as a feedforward part) combined with various feedback control laws.
This new family of control laws is rooted in the computed-torque control (CTC) law, with an off-line learned torque replacing the analytically formulated torque in the feedforward part of the CTC law. This thesis also presents the verification of these novel developments by both simulation and experiments. Simulation studies are presented for the hybrid actuation system and the hybrid motion system, while experimental studies are carried out for the hybrid control system.
On Iterative Learning Control for Solving New Control Problems
Ph.D., Doctor of Philosophy
Repetitive learning control for remote control systems
vii, 78 leaves : ill. ; 29 cm. Includes abstract and appendix. Includes bibliographical references (leaves 69-76).
In this thesis, a Repetitive Learning Control (RLC) approach is proposed for a class of remote control nonlinear systems satisfying the global Lipschitz condition. The proposed approach addresses the remote tracking control problem when the environment is periodic over the infinite time domain. Since there exists a time delay, tracking a desired trajectory through a remote controller is not an easy task. A predictor is designed on the controller side to predict the future state of the nonlinear system based on the delayed measurements from the sensor. The convergence of the estimation error of the predictor is ensured. The gain design of the predictor applies linear matrix inequality (LMI) techniques. The repetitive learning control law is designed based on the feedback error from the predicted state. The proof of stability is based on a constructed Lyapunov function. By incorporating the predictor and the RLC controller, the system state tracks the desired trajectory regardless of the time delays. A numerical simulation example illustrates the effectiveness of the proposed approach.
Synthesis and Analysis of Design Methods in Linear Repetitive, Iterative Learning and Model Predictive Control
Repetitive Control (RC) seeks to converge to zero tracking error of a feedback control system performing a periodic command, or to cancel the influence of a periodic disturbance as time progresses, by observing the error in the previous period. Iterative Learning Control (ILC) is similar: it aims to converge to zero tracking error for a system repeatedly performing the same task, adjusting the command to the feedback controller each repetition based on the error in the previous repetition. Compared to conventional feedback control design methods, RC and ILC improve the performance over repetitions, and both aim at zero tracking error in the real world rather than in a mathematical model. Linear Model Predictive Control (LMPC) normally does not aim for zero tracking error following a desired trajectory; instead it minimizes a quadratic cost function over the prediction horizon, applies the first control action, and repeats the process each time step. The usual quadratic cost is a trade-off between tracking accuracy and control effort and hence does not ask for zero error. It is also not specialized to periodic commands or disturbances as RC is, but it does require knowing the future desired command up to the prediction horizon.
The objective of this dissertation is to present various design schemes for improving the tracking performance of a control system based on ILC, RC, and LMPC. The dissertation contains four major chapters. The first chapter studies the optimization of the design parameters, in particular as related to measurement noise, the need for a cutoff filter when dealing with actuator limitations, and robustness to model error. The results aim to guide the user in tuning the design parameters available when creating a repetitive control system. The second chapter investigates how ILC laws can be converted for use in RC to improve performance; robustification by adding a control penalty to the cost function is compared with using a frequency cutoff filter. The third chapter develops a method to create desired trajectories with a zero-tracking interval without involving an unstable inverse solution. An easily implementable feedback version is created that optimizes the same cost at every time step from the current measured position. An ILC algorithm is also created to iteratively learn to give local zero error in the real world while using an imperfect model. This also gives a way to apply ILC to endpoint problems without specifying a somewhat arbitrary trajectory to follow in order to reach the desired endpoint. The last chapter outlines a set of uses for a stable inverse in control applications, including LMPC, LMPC applied to Repetitive Control (RC-LMPC), and a generalized form of one-step-ahead control. An important characteristic is that this approach converges to zero tracking error in a small number of time steps, i.e., finite-time convergence instead of asymptotic convergence as time tends to infinity.
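The RC mechanism described above, correcting the command one period ahead from the error observed in the previous period, can be sketched on a toy first-order plant with a periodic disturbance. The plant, gain, and period below are illustrative assumptions, not from the dissertation:

```python
import numpy as np

# Illustrative plant y[t+1] = a*y[t] + b*(u[t] + d[t]) with an unknown
# disturbance d of known period N; the goal is to regulate y to zero.
a, b = 0.5, 1.0
N = 20
steps = 30 * N
d = 0.4 * np.sin(2 * np.pi * np.arange(N) / N)

gamma = 0.5                    # repetitive learning gain (illustrative)
u = np.zeros(steps)            # command, corrected one period ahead
y = 0.0
err_by_period, acc = [], 0.0
for t in range(steps):
    e = 0.0 - y                # current tracking error
    # e reflects the input applied one step earlier, so correct that input
    # slot in the next period; the one-step phase advance keeps the loop stable.
    if t >= 1 and t - 1 + N < steps:
        u[t - 1 + N] = u[t - 1] + gamma * e
    acc = max(acc, abs(e))
    if (t + 1) % N == 0:
        err_by_period.append(acc)
        acc = 0.0
    y = a * y + b * (u[t] + d[t % N])

print(f"error: first period {err_by_period[0]:.3f}, last period {err_by_period[-1]:.2e}")
```

The per-period peak error decays over successive periods as the learned command converges to the signal that cancels the periodic disturbance, the "zero error in the real world" behavior RC is designed for.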