Robust gradient-based discrete-time iterative learning control algorithms
This paper considers the use of matrix models and the robustness of a gradient-based Iterative Learning Control (ILC) algorithm using both fixed learning gains and gains derived from parameter optimization. The philosophy of the paper is to ensure monotonic convergence with respect to the mean square value of the error time series. The paper provides a complete and rigorous analysis for the systematic use of matrix models in ILC. Matrix models make the analysis clearer and provide necessary and sufficient conditions for robust monotonic convergence. They also permit the construction of sufficient frequency-domain conditions for robust monotonic convergence on finite time intervals for both causal and non-causal controller dynamics. The results are compared with recent results for robust inverse-model-based ILC algorithms, and it is seen that the algorithm has the potential to improve robustness to high-frequency modelling errors, provided that resonances within the plant bandwidth have been suppressed by feedback or series compensation.
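The fixed-gain gradient update and its monotonic-convergence condition can be sketched on a lifted matrix model. Everything below (the first-order plant, the trial length, the choice of learning gain) is an illustrative assumption, not the paper's example; it only shows the structure of the algorithm the abstract describes.

```python
import numpy as np

def lifted_matrix(a, b, c, N):
    """Lifted (matrix) model G of a hypothetical SISO plant
    x(t+1) = a x(t) + b u(t), y(t) = c x(t), so that y = G u over an
    N-sample trial.  G[i, j] = c * a**(i-j) * b for i >= j.
    """
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = c * a ** (i - j) * b
    return G

def gradient_ilc(G, r, beta, n_trials):
    """Gradient ILC: u_{k+1} = u_k + beta * G.T @ e_k.

    Monotonic decay of ||e_k|| (mean-square error) holds when
    0 < beta < 2 / ||G||^2, since then ||I - beta * G G^T|| < 1.
    """
    u = np.zeros(len(r))
    errs = []
    for _ in range(n_trials):
        e = r - G @ u
        errs.append(np.linalg.norm(e))
        u = u + beta * (G.T @ e)
    return u, errs

G = lifted_matrix(0.9, 1.0, 1.0, 50)
r = np.sin(np.linspace(0, 2 * np.pi, 50))
beta = 1.0 / np.linalg.norm(G, 2) ** 2   # safely inside (0, 2/||G||^2)
u, errs = gradient_ilc(G, r, beta, 200)
```

With this choice of gain the error norm decreases strictly on every trial, which is the monotonic-convergence property the paper analyses.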
A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters
In this paper a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along-the-trial performance, resulting in design algorithms that can be computed using Linear Matrix Inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick-and-place operation commonly found in a number of applications to which iterative learning control is applicable.
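The "uniform rank greater than unity" case above can be made concrete: when the first Markov parameter CB vanishes, the lifted trial matrix has a zero diagonal, so a standard current-error ILC law has no direct influence on each sample's error. The state-space matrices below are an illustrative assumption, not the gantry-robot model.

```python
import numpy as np

# Hypothetical two-state discrete plant with CB = 0 (uniform rank 2).
A = np.array([[0.5, 1.0],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

def markov_parameters(A, B, C, n):
    """Markov parameters h_i = C A^i B for i = 0..n-1 (h_0 = CB)."""
    h, Ak = [], np.eye(A.shape[0])
    for _ in range(n):
        h.append(float(C @ Ak @ B))
        Ak = A @ Ak
    return h

N = 5
h = markov_parameters(A, B, C, N)
# h[0] = CB = 0 but h[1] = CAB != 0: uniform rank 2.

# Lifted trial matrix: strictly lower triangular because CB = 0,
# hence singular -- standard relative-degree-one ILC designs fail.
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
```

The singularity of G is exactly what forces the shifted/2D analysis the abstract develops.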
Controlled switching in Kalman filtering and iterative learning controls
“Switching is not an uncommon phenomenon in practical systems and processes: for example, power switches opening and closing, transmissions shifting from low gear to high gear, and airplanes crossing different layers of air. Switching can be a disaster for a system, since frequent switching between two asymptotically stable subsystems may result in unstable dynamics. On the other hand, switching can benefit a system, since controlled switching is sometimes imposed by designers to achieve desired performance. This motivates the study of system dynamics and performance when undesired switching occurs or controlled switching is imposed. In this research, controlled switching is applied to an estimation process and to a multivariable Iterative Learning Control (ILC) system, and system stability as well as performance under switching are investigated. The first article develops a controlled switching strategy for the estimation of a temporal shift in a Laser Tracker (LT). The shift cannot be measured at all times; therefore, a model-based predictor is adopted for estimation when the measurement is unavailable, and a Kalman Filter (KF) is used to update the estimate when the measurement is available. With the proposed method, the estimation uncertainty always remains bounded between two predefined boundaries. The second article develops a controlled switching method for multivariable ILC systems in which only some of the outputs are measured at any one time. Zero tracking error cannot be achieved for such systems using standard ILC, due to incomplete knowledge of the outputs. With the developed controlled switching, all the outputs are measured in sequential order, and the standard ILC is executed with each currently measured output. Conditions under which convergence to zero tracking error is accomplished with the proposed method are investigated. The proposed method is finally applied to a multi-agent coordination problem.”--Abstract, page iv
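The predictor/KF switching idea in the first article can be sketched on a hypothetical scalar drift model: the prediction step always runs (uncertainty grows), and the Kalman update is switched in only when a measurement arrives. The noise levels and the every-fifth-step measurement schedule are assumptions for illustration, not the Laser Tracker's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar drift: x(t+1) = x(t) + w(t); intermittent z(t) = x(t) + v(t).
q, r = 0.01, 0.04          # process / measurement noise variances (assumed)
x_true, x_hat, P = 0.0, 0.0, 1.0
variances = []

for t in range(200):
    x_true += rng.normal(0.0, np.sqrt(q))   # true drift
    P += q                                  # prediction: uncertainty grows
    # Controlled switching: Kalman update only when a measurement
    # is available (here: every 5th step).
    if t % 5 == 0:
        z = x_true + rng.normal(0.0, np.sqrt(r))
        K = P / (P + r)
        x_hat += K * (z - x_hat)
        P = (1.0 - K) * P
    variances.append(P)
```

Between updates the variance P ramps up by q per step and each update pulls it back down, so P stays bounded between two fixed levels, mirroring the bounded-uncertainty property claimed in the abstract.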
Iterative Learning Control design for uncertain and time-windowed systems
Iterative Learning Control (ILC) is a control strategy capable of dramatically increasing the performance of systems that perform batch repetitive tasks. This performance improvement is achieved by iteratively updating the command signal using measured error data from previous trials, i.e., by learning from past experience. This thesis deals with ILC for time-windowed and uncertain systems. By "time-windowed systems" we mean systems in which the actuation and measurement time intervals differ; by "uncertain systems" we refer to systems whose behavior is represented by incomplete or inaccurate models. To study the ILC design issues for time-windowed systems, we consider the task of residual vibration suppression in point-to-point motion problems. In this application, time windows are used to modify the original system to comply with the task. Because the properties of the time-windowed system cause non-converging behavior of the original ILC-controlled system, we introduce a novel ILC design framework in which convergence can be achieved. Additionally, this framework reveals new design freedom in ILC for point-to-point motion problems that is absent in "standard" ILC. Theoretical results concerning the problem formulation and control design for these systems are supported by experimental results on SISO and MIMO flexible structures. The analysis and design results of ILC for time-windowed systems are subsequently extended to the whole class of linear systems whose input and output are filtered with basis functions (which include time windows). The analysis and design theory of ILC for this class of systems reveals how different ILC objectives can be reached by designing separate parts of the ILC controller. Our research on ILC for uncertain systems is divided into two parts. In the first part, we formulate an approach to analyze the robustness properties of existing ILC controllers using well-developed µ-theory.
To exemplify our findings, we analyze the robustness properties of linear quadratic (LQ) norm-optimal ILC controllers. Moreover, we show that the approach is applicable to the class of linear trial-invariant ILC-controlled systems with basis functions. In the second part, we present a finite-time-interval robust ILC control strategy that is robust against model uncertainty as given by an additive uncertainty model. For that, we exploit H∞ control theory, modified such that the controller is not restricted to be causal and operates on a finite time interval. Furthermore, we optimize the robust controller for performance while remaining robustly monotonically convergent. By means of experiments on a SISO flexible system, we show that this control strategy can indeed outperform LQ norm-optimal ILC and causal robust ILC control strategies.
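For reference, the LQ norm-optimal ILC update analysed in the first part has a simple lifted-domain form, sketched below on a toy plant. The plant, trial length, and weight λ are illustrative assumptions; this is a plain norm-optimal update, not the thesis' robust H∞ design.

```python
import numpy as np

def norm_optimal_ilc_step(G, u, e, lam):
    """One LQ norm-optimal ILC trial update.

    Minimises ||e_{k+1}||^2 + lam * ||u_{k+1} - u_k||^2 subject to the
    lifted model e_{k+1} = e_k - G (u_{k+1} - u_k), which gives
        u_{k+1} = u_k + (G^T G + lam I)^{-1} G^T e_k.
    The error map I - G (G^T G + lam I)^{-1} G^T has eigenvalues
    lam / (lam + sigma_i^2) in (0, 1), so convergence is monotone.
    """
    N = G.shape[1]
    return u + np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ e)

# Toy lifted plant: impulse response 0.5**k, trial length 40 (assumed).
N = 40
g = 0.5 ** np.arange(N)
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
r = np.ones(N)

u = np.zeros(N)
for _ in range(30):
    e = r - G @ u
    u = norm_optimal_ilc_step(G, u, e, lam=0.1)
```

After a few tens of trials the tracking error is negligible; the robustness question the thesis studies is how this behaviour degrades when G is only an approximation of the true plant.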
From Model-Based to Data-Driven Discrete-Time Iterative Learning Control
This dissertation presents a series of new results in iterative learning control (ILC), progressing from model-based ILC algorithms to data-driven ILC algorithms. ILC is a trial-and-error algorithm that learns through repetition in practice to follow a pre-defined finite-time maneuver with high tracking accuracy.
Mathematically, ILC constructs a contraction mapping between the tracking errors of successive iterations and aims to converge to a tracking accuracy approaching the reproducibility level of the hardware. It produces feedforward commands based on measurements from previous iterations to eliminate tracking errors caused by the bandwidth limitations of feedback controllers, transient responses, model inaccuracies, unknown repeating disturbances, etc.
Generally, ILC uses an a priori model to form the contraction mapping that guarantees monotonic decay of the tracking error. However, un-modeled high-frequency dynamics may destabilize the control system. Existing infinite impulse response filtering techniques that stop the learning at such frequencies have initial-condition issues that can cause an otherwise stable ILC law to become unstable. A circulant form of zero-phase filtering for finite-time trajectories is proposed here to avoid such issues. This work addresses the possible lack of stability robustness when ILC uses an imperfect a priori model.
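One plausible reading of such a circulant zero-phase filter is a DFT-diagonalised filter applied to the finite-time signal: multiplying each frequency bin by a real, non-negative gain introduces no phase shift, and the circulant structure has no initial conditions to excite. The brick-wall gain and the cutoff below are illustrative assumptions, not the dissertation's exact design.

```python
import numpy as np

def circulant_zero_phase(signal, cutoff):
    """Zero-phase low-pass filtering of a finite-time signal via a
    circulant filter, diagonalised by the DFT.  A real gain per bin
    means zero phase shift; treating the trial as circular avoids the
    start-up transients of IIR zero-phase (forward-backward) schemes.
    """
    N = len(signal)
    S = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(N)               # cycles/sample in [0, 0.5]
    gain = (freqs <= cutoff).astype(float)   # ideal gain, assumed shape
    return np.fft.irfft(S * gain, n=N)

# Low-frequency command plus high-frequency content to be cut off.
t = np.arange(256)
u = np.sin(2 * np.pi * 3 * t / 256) + 0.3 * np.sin(2 * np.pi * 60 * t / 256)
u_f = circulant_zero_phase(u, cutoff=10 / 256)
```

The filtered signal retains the low-frequency component untouched (no phase lag) while the high-frequency component, where learning would be unsafe, is removed.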
Besides the computation of feedforward commands, measurements from previous iterations can also be used to update the dynamic model. In other words, as the learning progresses, the model is iteratively refined from data. This leads to adaptive ILC methods.
An indirect adaptive linear ILC method to speed up the desired maneuver is presented here. The updates of the system model are realized by embedding an observer in the ILC to estimate the system Markov parameters. This method can be used to increase productivity or to maintain high tracking accuracy when the desired trajectory is too fast for feedback control to be effective.
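The idea of identifying Markov parameters from trial data can be sketched with a batch least-squares estimate, a simplified stand-in for the observer-based recursion described above. The "true" impulse response, data length, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plant: FIR truncation of a first-order lag.
h_true = 0.8 ** np.arange(8)          # Markov parameters h_0..h_7 (assumed)
u = rng.normal(size=300)              # persistently exciting input
y = np.convolve(u, h_true)[:300]      # y(t) = sum_i h_i u(t - i)
y += 0.01 * rng.normal(size=300)      # measurement noise

# Least squares on the convolution model y = Phi h, where row t of Phi
# holds u(t), u(t-1), ..., u(t-m+1) (zeros before the trial start).
m = 8
Phi = np.array([[u[t - i] if t - i >= 0 else 0.0 for i in range(m)]
                for t in range(300)])
h_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

As trials accumulate, such estimates refine the model that the ILC contraction mapping is built on, which is the essence of the indirect adaptive scheme.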
When it comes to nonlinear ILC, data is used to update a progression of models along a homotopy; i.e., the ILC method presented in this thesis uses data to repeatedly create bilinear models in a homotopy approaching the desired trajectory. The improvement here is the use of Carleman bilinearized models, which capture more of the nonlinear dynamics and offer the potential for faster convergence than existing methods based on linearized models.
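A minimal worked example of Carleman bilinearization, on an assumed scalar system rather than the thesis' plant, shows where the bilinear (state-times-input) terms come from:

```latex
% Illustrative scalar system (assumed): \dot{x} = a x + b x^2 + u.
% Augment the state with z_2 = x^2 and truncate terms of order x^3:
\begin{aligned}
  z_1 &= x, \qquad z_2 = x^2,\\
  \dot{z}_1 &= a z_1 + b z_2 + u,\\
  \dot{z}_2 &= 2 x \dot{x}
             = 2 a z_2 + 2 b x^3 + 2 z_1 u
             \approx 2 a z_2 + 2 z_1 u.
\end{aligned}
```

The truncated augmented system is linear in z plus a bilinear coupling z_1 u, which is exactly the model class the homotopy-based ILC exploits.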
The last work presented here uses model-free reinforcement learning (RL) to eliminate the need for an a priori model. It is analogous to direct adaptive control, using data to directly produce the gains in the ILC law without use of a model. An off-policy RL method is first developed by extending a model-free model predictive control method and is then applied in the trial domain for ILC. Adjustments of the ILC learning law and of the RL recursion for state-value function updates allow the collection of enough data while improving the tracking accuracy without significant safety concerns. This algorithm can be seen as a first step toward bridging ILC and RL for nonlinear systems.
Krotov: A Python implementation of Krotov's method for quantum optimal control
We present a new open-source Python package, krotov, implementing the quantum optimal control method of that name. It allows the user to determine time-dependent external fields for a wide range of quantum control problems, including state-to-state transfer, quantum gate implementation, and optimization towards an arbitrary perfect entangler. In contrast to other gradient-based optimization methods such as gradient ascent, Krotov's method guarantees monotonic convergence for approximately time-continuous control fields. The user-friendly interface allows for combination with other Python packages, and thus for high-level customization.