
    Modelling the influence of non-minimum phase zeros on gradient based linear iterative learning control

    The subject of this paper is modeling the influence of non-minimum phase plant dynamics on the performance achievable from gradient-based norm optimal iterative learning control algorithms. It is established that performance in the presence of right-half plane plant zeros typically has two phases: an initial fast monotonic reduction of the L2 error norm, followed by very slow asymptotic convergence. Although the norm of the tracking error does eventually converge to zero, the practical implication over a finite number of trials is apparent convergence to a non-zero error. The source of this slow convergence is identified and a model of this behavior as a (set of) linear constraint(s) is developed. This model is shown to provide a good prediction of the error norm magnitude at which slow convergence begins. Formulae for this norm are obtained for single-input single-output systems with several right-half plane zeros using Lagrangian techniques, and experimental results are given that confirm the practical validity of the analysis.
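    The two-phase behavior described above can be reproduced in a toy sketch (the plant, step size, and reference below are illustrative assumptions, not taken from the paper): gradient ILC applies the update u_{k+1} = u_k + beta * G^T e_k to the lifted (matrix) plant model, and a zero outside the unit circle makes some modes converge extremely slowly.

    ```python
    import numpy as np

    # Hypothetical SISO plant G(z) = (z - 1.5)/(z (z - 0.5)): stable, with a
    # non-minimum phase zero at z = 1.5 (outside the unit circle).
    N = 60                                           # samples per trial
    h = np.empty(N)                                  # impulse response
    h[0] = 1.0
    for t in range(1, N):
        h[t] = -0.5 ** (t - 1)                       # closed form for this plant
    # Lifted-system matrix: lower-triangular Toeplitz of Markov parameters.
    G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
                  for i in range(N)])

    r = np.sin(np.linspace(0.0, 2.0 * np.pi, N))     # reference trajectory
    beta = 1.0 / np.linalg.norm(G, 2) ** 2           # step giving monotone decay
    u = np.zeros(N)
    norms = []
    for k in range(500):
        e = r - G @ u                                # trial-k tracking error
        norms.append(np.linalg.norm(e))
        u = u + beta * G.T @ e                       # gradient ILC update

    # Fast initial reduction of ||e||, then a long plateau: the slow modes are
    # tied to the right-half plane zero, so over finitely many trials the error
    # appears to converge to a non-zero value.
    print(norms[0], norms[10], norms[-1])
    ```

    Plotting `norms` on a log scale makes the knee between the fast phase and the slow asymptotic phase clearly visible.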

    FALKON: An Optimal Large Scale Kernel Method

    Kernel methods provide a principled way to perform nonlinear, nonparametric learning. They rely on solid functional analytic foundations and enjoy optimal statistical properties. However, at least in their basic form, they have limited applicability in large scale scenarios because of stringent computational requirements in terms of time and especially memory. In this paper, we take a substantial step in scaling up kernel methods, proposing FALKON, a novel algorithm that can efficiently process millions of points. FALKON is derived by combining several algorithmic principles, namely stochastic subsampling, iterative solvers and preconditioning. Our theoretical analysis shows that optimal statistical accuracy is achieved requiring essentially O(n) memory and O(n√n) time. An extensive experimental analysis on large scale datasets shows that, even on a single machine, FALKON outperforms previous state-of-the-art solutions, which exploit parallel/distributed architectures.
    Comment: NIPS 201
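    A much-simplified sketch of the subsampling idea (not the paper's FALKON implementation, which adds a dedicated preconditioner to reach the stated complexity; all data, kernel parameters, and sizes here are illustrative): Nyström-subsampled kernel ridge regression restricts the solution to m ≪ n centers and solves the resulting m×m system with plain conjugate gradient.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_kernel(A, B, sigma=1.0):
        # Pairwise squared distances via broadcasting, then Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    n, m, lam = 2000, 100, 1e-3              # n points, m Nystroem centers
    X = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

    centers = X[rng.choice(n, m, replace=False)]   # stochastic subsampling
    Knm = gaussian_kernel(X, centers)
    Kmm = gaussian_kernel(centers, centers)

    # Normal equations of the subsampled problem:
    #   (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
    A = Knm.T @ Knm + lam * n * Kmm
    b = Knm.T @ y

    # Plain conjugate gradient on the m x m system (an iterative solver;
    # FALKON preconditions this system to cut the iteration count).
    alpha = np.zeros(m)
    r = b - A @ alpha
    p = r.copy()
    for _ in range(200):
        Ap = A @ p
        a = (r @ r) / (p @ Ap)
        alpha += a * p
        r_new = r - a * Ap
        if np.linalg.norm(r_new) < 1e-8 * np.linalg.norm(b):
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new

    pred = Knm @ alpha
    mse = np.mean((pred - y) ** 2)
    print("training MSE:", mse)
    ```

    Note the memory profile: only the n×m matrix `Knm` is ever formed, never the full n×n kernel matrix, which is the essence of the O(n) memory claim.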

    Multivariable norm optimal iterative learning control with auxiliary optimization

    The paper describes a substantial extension of Norm Optimal Iterative Learning Control (NOILC) that permits tracking of a class of finite dimensional reference signals whilst simultaneously converging to the solution of a constrained quadratic optimization problem. The theory is presented in a general functional analytic framework using operators between chosen real Hilbert spaces. This is applied to solve problems in continuous time where tracking is only required at selected intermediate points of the time interval but, simultaneously, the solution is required to minimize a specified quadratic objective function of the input signals and chosen auxiliary (state) variables. Applications to the discrete time case, including the case of multi-rate sampling, are also summarized. The algorithms are motivated by practical need and provide a methodology for reducing undesirable effects such as payload spillage, vibration tendencies and actuator wear whilst maintaining the desired tracking accuracy necessary for task completion. Solutions in terms of NOILC methodologies involving both feedforward and feedback components offer the possibility of greater robustness than purely feedforward actions. Robustness of the feedforward implementation is discussed and the work is illustrated by experimental results from a robotic manipulator.
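    For orientation, the basic NOILC iteration that the paper extends can be sketched in lifted (matrix) form. This is a minimal sketch assuming unit weighting matrices and a hypothetical first-order plant; the paper's contribution adds intermediate-point tracking and an auxiliary quadratic objective on top of this structure.

    ```python
    import numpy as np

    N = 40
    h = 0.8 ** np.arange(N)                  # impulse response of a toy stable plant
    # Lifted plant: lower-triangular Toeplitz matrix of Markov parameters.
    G = np.array([[h[i - j] if i >= j else 0.0 for j in range(N)]
                  for i in range(N)])
    r = np.linspace(0.0, 1.0, N)             # reference trajectory

    u = np.zeros(N)
    I = np.eye(N)
    for k in range(30):
        e = r - G @ u
        # u_{k+1} minimizes ||r - G u||^2 + ||u - u_k||^2, whose closed-form
        # solution is the feedforward-plus-correction update:
        u = u + G.T @ np.linalg.solve(I + G @ G.T, e)

    final_err = np.linalg.norm(r - G @ u)
    print("error norm after 30 trials:", final_err)
    ```

    Because each trial solves an optimization rather than taking a raw gradient step, the error norm contracts by a factor governed by (I + G G^T)^{-1} at every iteration.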

    Stochastic optimization methods for the simultaneous control of parameter-dependent systems

    We address the application of stochastic optimization methods for the simultaneous control of parameter-dependent systems. In particular, we focus on the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro, and on the recently developed Continuous Stochastic Gradient (CSG) algorithm. We consider the problem of computing simultaneous controls through the minimization of a cost functional defined as the superposition of individual costs for each realization of the system. We compare the performance of these stochastic approaches, in terms of their computational complexity, with that of the more classical Gradient Descent (GD) and Conjugate Gradient (CG) algorithms, and we discuss the advantages and disadvantages of each methodology. In agreement with well-established results in the machine learning context, we show how the SGD and CSG algorithms can significantly reduce the computational burden when treating control problems depending on a large number of parameters. This is corroborated by numerical experiments.
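    The computational trade-off can be illustrated with a toy sketch under stated assumptions (a linear-quadratic surrogate with randomly perturbed system matrices, not the paper's dynamical-systems setting): a single control u is sought that is simultaneously good for K realizations A_i, by applying Robbins-Monro SGD to the averaged cost J(u) = (1/K) Σ_i ||A_i u - b||² / 2.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    K, d = 500, 20                               # K parameter realizations
    A = np.stack([np.eye(d) + 0.05 * rng.standard_normal((d, d))
                  for _ in range(K)])
    b = rng.standard_normal(d)                   # common target

    def full_cost(u):
        # The superposed cost touches all K realizations: O(K d^2) per
        # evaluation, which is what GD/CG pay at every iteration.
        return 0.5 * np.mean(np.sum((A @ u - b) ** 2, axis=1))

    u = np.zeros(d)
    for k in range(2000):
        i = rng.integers(K)                      # sample ONE realization:
        grad = A[i].T @ (A[i] @ u - b)           # only O(d^2) per SGD step
        u -= (0.5 / (1.0 + 0.01 * k)) * grad     # Robbins-Monro decaying step

    print("averaged cost after SGD:", full_cost(u))
    ```

    Each SGD step costs a factor K less than a full-gradient step, which is exactly where the savings come from when the number of parameter realizations is large.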