
    Sparse Iterative Learning Control with Application to a Wafer Stage: Achieving Performance, Resource Efficiency, and Task Flexibility

    Trial-varying disturbances are a key concern in Iterative Learning Control (ILC) and may lead to inefficient and expensive implementations and severe performance deterioration. The aim of this paper is to develop a general framework for optimization-based ILC that allows for enforcing additional structure, including sparsity. The proposed method enforces sparsity in a generalized setting through convex relaxations using ℓ1 norms. The proposed ILC framework is applied to the optimization of sampling sequences for resource-efficient implementation, trial-varying disturbance attenuation, and basis function selection. The framework has large potential in control applications such as mechatronics, as is confirmed through an application on a wafer stage. Comment: 12 pages, 14 figures
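    The sparsity mechanism this abstract describes, a convex ℓ1 relaxation over the lifted trial, can be sketched generically. The snippet below is a plain ISTA (proximal gradient) solver on a hypothetical first-order plant, not the paper's algorithm; the plant, horizon, and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_ilc_input(G, y_d, lam=0.1, iters=500):
    """ISTA for min_u 0.5*||y_d - G u||^2 + lam*||u||_1,
    where G is the lifted (lower-triangular Toeplitz) system matrix."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2  # 1 / Lipschitz constant of the gradient
    u = np.zeros(G.shape[1])
    for _ in range(iters):
        grad = G.T @ (G @ u - y_d)          # gradient of the smooth part
        u = soft_threshold(u - step * grad, step * lam)
    return u

# Hypothetical lifted model: impulse response of a first-order plant over one trial
N = 30
h = 0.5 ** np.arange(N)
G = np.tril(np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
                      for i in range(N)]))
y_d = np.ones(N)                            # step reference over the trial
u = sparse_ilc_input(G, y_d, lam=0.5)
print(np.count_nonzero(u), "nonzero inputs out of", N)
```

    Larger `lam` trades tracking accuracy for sparser (hence cheaper to implement) feedforward signals, which is the resource-efficiency angle of the paper.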

    Distributionally Robust Chance Constrained Data-enabled Predictive Control

    We study the problem of finite-time constrained optimal control of unknown stochastic linear time-invariant systems, which is the key ingredient of a predictive control algorithm -- albeit one typically having access to a model. We propose a novel distributionally robust data-enabled predictive control (DeePC) algorithm which uses noise-corrupted input/output data to predict future trajectories and compute optimal control inputs while satisfying output chance constraints. The algorithm is based on (i) a non-parametric representation of the subspace spanning the system behaviour, where past trajectories are sorted in Page or Hankel matrices; and (ii) a distributionally robust optimization formulation which gives rise to strong probabilistic performance guarantees. We show that for certain objective functions, DeePC exhibits strong out-of-sample performance, and at the same time respects constraints with high probability. The algorithm provides an end-to-end approach to control design for unknown stochastic linear time-invariant systems. We illustrate the closed-loop performance of DeePC in an aerial robotics case study.
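    The data structures behind DeePC are concrete and small: measured trajectories are stacked into Hankel matrices, and the standard requirement (from Willems' fundamental lemma) is that the input be persistently exciting, i.e. its Hankel matrix of a given depth has full row rank. A minimal scalar-signal sketch; the function names and signal lengths are illustrative, not from the paper:

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a signal w: columns are the length-L windows."""
    T = len(w)
    return np.column_stack([w[i:i + L] for i in range(T - L + 1)])

def persistently_exciting(u, order):
    """u is persistently exciting of the given order iff its depth-`order`
    Hankel matrix has full row rank."""
    return np.linalg.matrix_rank(hankel(u, order)) == order

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
print(persistently_exciting(u, 10))        # random input: excites the system
print(persistently_exciting(np.ones(20), 2))  # constant input: rank 1, not PE
```

    DeePC then uses such a Hankel matrix directly as a non-parametric predictor in place of an identified model.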

    Relaxing Fundamental Assumptions in Iterative Learning Control

    Iterative learning control (ILC) is perhaps best described as an open loop feedforward control technique where the feedforward signal is learned through repetition of a single task. As the name suggests, given a dynamic system operating on a finite time horizon with the same desired trajectory, ILC aims to iteratively construct the inverse image (or its approximation) of the desired trajectory to improve transient tracking. In the literature, ILC is often interpreted as feedback control in the iteration domain, due to the fact that learning controllers use information from past trials to drive the tracking error towards zero. However, despite the significant body of literature and powerful features, ILC is yet to reach widespread adoption by the control community, due to several assumptions that restrict its generality when compared to feedback control. In this dissertation, we relax some of these assumptions, mainly the fundamental invariance assumption, and move from the idea of learning through repetition to two dimensional systems, specifically repetitive processes, that appear in the modeling of engineering applications such as additive manufacturing, and sketch out future research directions for increased practicality. We develop an L1 adaptive feedback control based ILC architecture for increased robustness, fast convergence, and high performance under time varying uncertainties and disturbances. Simulation studies of the behavior of this combined L1-ILC scheme under iteration varying uncertainties lead us to the robust stability analysis of iteration varying systems, where we show that these systems are guaranteed to be stable when the ILC update laws are designed to be robust, which can be done using existing methods from the literature.
    As a next step to the signal space approach adopted in the analysis of iteration varying systems, we shift the focus of our work to repetitive processes, and show that the exponential stability of a nonlinear repetitive system is equivalent to that of its linearization, and consequently to the uniform stability of the corresponding state space matrix. PhD dissertation, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/133232/1/altin_1.pd
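    The "learning through repetition" idea reduces, in the lifted (trial-domain) setting, to an update u_{k+1} = u_k + L e_k whose error dynamics across trials are e_{k+1} = (I - G L) e_k, so convergence is a question about that iteration map. A small sketch with a model-inversion learning filter on an assumed first-order plant; all numbers are illustrative:

```python
import numpy as np

# Lifted description of one trial: y = G u, with G the lower-triangular
# Toeplitz matrix of the plant's impulse response (hypothetical plant).
N, a = 20, 0.8
h = a ** np.arange(N)
G = np.tril(np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
                      for i in range(N)]))
y_d = np.sin(np.linspace(0.0, np.pi, N))  # same desired trajectory every trial

# Model-inversion ILC: u_{k+1} = u_k + gamma * G^{-1} e_k gives the
# trial-domain error map e_{k+1} = (1 - gamma) e_k, geometric for 0 < gamma < 2.
gamma = 0.5
Lf = gamma * np.linalg.inv(G)
u = np.zeros(N)
errors = []
for _ in range(40):
    e = y_d - G @ u                # tracking error of the current trial
    errors.append(np.linalg.norm(e))
    u = u + Lf @ e                 # learn from the completed trial
print(errors[0], errors[-1])
```

    The invariance assumption the dissertation relaxes is visible here: the derivation only works because G and y_d are the same on every trial.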

    Data-Driven Predictive Control for Multi-Agent Decision Making With Chance Constraints

    In the recent literature, significant effort has been dedicated to multi-agent decision-making problems, where the model predictive control (MPC) methodology has demonstrated its effectiveness in applications such as mobile robots, unmanned vehicles, and drones. Nevertheless, in many scenarios involving the MPC methodology, accurate and effective system identification is a commonly encountered challenge, and the overall system performance can be significantly weakened when the traditional MPC algorithm is adopted under such circumstances. To address this shortcoming, this paper investigates an alternate data-driven approach to the multi-agent decision-making problem. Using closed-loop input/output measurements that comply with an appropriate persistency of excitation condition, a non-parametric predictive model is constructed. This approach alleviates the heavy computational burden encountered in alternative methodologies that require open-loop input/output data collection and parametric system identification. Then, with a conservative approximation of the probabilistic chance constraints, a deterministic optimization problem is formulated and solved efficiently. The data-driven approach is also shown to preserve good robustness properties. Finally, a multi-drone system is used to demonstrate the practical appeal and effectiveness of the proposed development. Comment: 10 pages, 6 figures
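    The "conservative approximation of probabilistic chance constraints" step generally means replacing P(y <= y_max) >= 1 - eps by a tightened deterministic bound on the nominal prediction. Under an assumed Gaussian output disturbance (which is not necessarily this paper's uncertainty model), the tightening is a quantile backoff:

```python
from statistics import NormalDist

def tightened_bound(y_max, sigma, eps):
    """Deterministic surrogate for P(y <= y_max) >= 1 - eps when the output
    disturbance is assumed Gaussian with standard deviation sigma:
    shift the bound inward by the (1 - eps) quantile of the noise."""
    z = NormalDist().inv_cdf(1.0 - eps)   # standard-normal quantile
    return y_max - z * sigma

# enforce y_nominal <= tightened_bound(...) instead of the chance constraint
print(tightened_bound(1.0, 0.1, 0.05))    # roughly 0.8355
```

    The resulting constraint on the nominal trajectory is linear, so the stochastic MPC problem stays as tractable as its deterministic counterpart.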

    Data-driven modeling and complexity reduction for nonlinear systems with stability guarantees



    Model-Free μ Synthesis via Adversarial Reinforcement Learning

    Motivated by the recent empirical success of policy-based reinforcement learning (RL), there has been a research trend studying the performance of policy-based RL methods on standard control benchmark problems. In this paper, we examine the effectiveness of policy-based RL methods on an important robust control problem, namely μ synthesis. We build a connection between robust adversarial RL and μ synthesis, and develop a model-free version of the well-known DK-iteration for solving state-feedback μ synthesis with static D-scaling. In the proposed algorithm, the K step mimics the classical central path algorithm by incorporating a recently-developed double-loop adversarial RL method as a subroutine, and the D step is based on model-free finite difference approximation. An extensive numerical study is also presented to demonstrate the utility of our proposed model-free algorithm. Our study sheds new light on the connections between adversarial RL and robust control. Comment: Accepted to ACC 202
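    The model-free D step mentioned above relies on zeroth-order optimization: estimating a gradient purely from cost evaluations, with no model of the system. A generic central-difference sketch; the objective below is a made-up quadratic stand-in for a closed-loop cost that would, in the paper's setting, come from rollouts:

```python
import numpy as np

def fd_gradient(J, x, h=1e-5):
    """Central finite-difference estimate of grad J at x, the kind of
    zeroth-order information a model-free update can use when J is only
    available through simulation or experiment."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (J(x + e) - J(x - e)) / (2.0 * h)  # O(h^2) accurate
    return g

# Hypothetical closed-loop cost as a function of the D-scaling parameters
J = lambda d: (d[0] - 1.0) ** 2 + 2.0 * (d[1] + 0.5) ** 2
g = fd_gradient(J, np.array([0.0, 0.0]))
print(g)  # approximately [-2.0, 2.0]
```

    Each gradient estimate costs 2n cost evaluations for n parameters, which is why such D steps are practical only when the scaling is low-dimensional (e.g. static D-scaling).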

    Adaptive Output Feedback Model Predictive Control

    Model predictive control (MPC) for uncertain systems in the presence of hard constraints on state and input is a non-trivial problem, and the challenge is increased manyfold in the absence of state measurements. In this paper, we propose an adaptive output feedback MPC technique, based on a novel combination of an adaptive observer and robust MPC, for single-input single-output discrete-time linear time-invariant systems. At each time instant, the adaptive observer provides estimates of the states and the system parameters that are then leveraged in the MPC optimization routine while robustly accounting for the estimation errors. The solution to the optimization problem results in a homothetic tube where the state estimate trajectory lies. The true state evolves inside a larger outer tube obtained by augmenting a set, invariant to the state estimation error, around the homothetic tube sections. The proof of recursive feasibility for the proposed `homothetic and invariant' two-tube approach is provided, along with simulation results on an academic system. Comment: 6 pages
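    The outer-tube construction described above, augmenting each homothetic tube cross-section with a set that is invariant for the estimation error, is a Minkowski sum of sets. For axis-aligned boxes the sum is just componentwise addition of centers and half-widths; the numbers below are hypothetical and only illustrate the geometry:

```python
import numpy as np

def minkowski_sum_box(center_a, half_a, center_b, half_b):
    """Minkowski sum of two axis-aligned boxes: centers add, half-widths add."""
    return center_a + center_b, half_a + half_b

# Box sketch of the two-tube idea: the true state lies in the set sum of the
# tube cross-section around the state estimate and an invariant box bounding
# the observer's estimation error.
tube_c, tube_h = np.array([1.0, 0.0]), np.array([0.2, 0.2])   # estimate tube section
err_c, err_h = np.array([0.0, 0.0]), np.array([0.05, 0.1])    # estimation-error set
outer_c, outer_h = minkowski_sum_box(tube_c, tube_h, err_c, err_h)
print(outer_c, outer_h)  # [1. 0.] [0.25 0.3]
```

    Constraints on the true state are then imposed on this enlarged outer box, which is what makes the guarantees robust to never measuring the state directly.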