
    Connections Between Adaptive Control and Optimization in Machine Learning

    This paper demonstrates many immediate connections between adaptive control and optimization methods commonly employed in machine learning. Starting from common output-error formulations, similarities in update-law modifications are examined. Concepts in stability, performance, and learning that are common to both fields are then discussed. Building on the similarities in update laws and the shared concepts, new intersections and opportunities for improved algorithm analysis are presented. In particular, a specific problem related to higher-order learning is solved through insights obtained from these intersections. Comment: 18 pages
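
    To make the parallel concrete: the sketch below (an illustration, not code from the paper; the model, gain, and data are assumed) shows that stochastic gradient descent on an instantaneous squared output error coincides with a classical gradient-type adaptive law for a linearly parameterized model y = theta^T phi.

```python
import numpy as np

# Illustrative sketch: for y = theta^T phi with output error e = y_hat - y,
# SGD on the instantaneous loss 0.5*e^2 and the gradient adaptive law
# theta_dot = -gamma * phi * e give the same discrete update.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0, 0.5])  # unknown true parameters (assumed)
theta = np.zeros(3)                      # parameter estimate
gamma = 0.05                             # learning rate / adaptation gain

for t in range(2000):
    phi = rng.standard_normal(3)         # regressor (features)
    y = theta_true @ phi                 # measured output
    e = theta @ phi - y                  # output error
    theta -= gamma * phi * e             # SGD step == adaptive-law step

print(np.round(theta, 3))                # converges to theta_true
```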

    Contracting Nonlinear Observers: Convex Optimization and Learning from Data

    A new approach to the design of nonlinear observers (state estimators) is proposed. The main idea is to (i) construct a convex set of dynamical systems that are contracting observers for a particular system, and (ii) optimize over this set for the one that minimizes a bound on the state-estimation error on a simulated noisy data set. We construct convex sets of continuous-time and discrete-time observers, as well as contracting sampled-data observers for continuous-time systems. Convex bounds for learning are constructed using Lagrangian relaxation. The utility of the proposed methods is verified using numerical simulation. Comment: conference submission
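
    A minimal sketch of the two-step idea, using a plain discrete-time Luenberger observer rather than the paper's convex parameterization; the system matrices, observer gain, and noise level below are assumptions. The observer is contracting because the error dynamics A - LC have spectral radius below one, so the estimation error on simulated noisy output data decays.

```python
import numpy as np

# Discrete-time Luenberger observer  x_hat+ = A x_hat + L (y - C x_hat).
# The error obeys e+ = (A - L C) e - L*noise: contracting when rho(A - LC) < 1.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5], [0.1]])                         # gain with rho(A - LC) < 1
assert max(abs(np.linalg.eigvals(A - L @ C))) < 1.0  # contraction check

rng = np.random.default_rng(1)
x = np.array([5.0, -3.0])                # true (unknown) state
x_hat = np.zeros(2)                      # observer state
for k in range(50):
    y = C @ x + 0.01 * rng.standard_normal(1)  # noisy measurement
    x_hat = A @ x_hat + L @ (y - C @ x_hat)    # observer update
    x = A @ x                                  # true dynamics
print(np.round(x - x_hat, 3))            # estimation error near zero
```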

    Parameter Estimation of Sigmoid Superpositions: Dynamical System Approach

    A superposition of sigmoid functions over a finite time interval is shown to be equivalent to a linear combination of the solutions of a linearly parameterized system of logistic differential equations. Because the system is linear in its parameters, an effective procedure for parameter adjustment can be designed. Stability properties of this procedure are analyzed. Strategies shown in earlier studies to facilitate learning, such as randomizing the learning sequence and adding specially designed disturbances during the learning phase, turn out to be requirements for guaranteeing convergence of the proposed learning scheme. Comment: 30 pages, 7 figures
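
    The equivalence rests on the fact that the logistic sigmoid sigma(t - c) = 1/(1 + exp(-(t - c))) solves the logistic differential equation x' = x(1 - x). The sketch below (weights and shifts are hypothetical) checks this numerically: forward-Euler solutions of the logistic equations, combined linearly, reproduce the sigmoid superposition up to discretization error.

```python
import numpy as np

def sigma(t, c):
    return 1.0 / (1.0 + np.exp(-(t - c)))

w = np.array([1.5, -0.7])                # hypothetical weights
c = np.array([1.0, 4.0])                 # hypothetical time shifts
dt = 1e-3
x = sigma(0.0, c)                        # logistic-ODE states, x_i(0) = sigma(-c_i)

max_err = 0.0
for t in np.arange(0.0, 8.0, dt):
    max_err = max(max_err, abs(w @ x - w @ sigma(t, c)))
    x = x + dt * x * (1.0 - x)           # forward Euler on  x' = x(1 - x)

print(max_err)                           # O(dt): only the Euler discretization error
```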

    Discrete and Continuous-time Soft-Thresholding with Dynamic Inputs

    Many well-established techniques recover sparse signals from compressed measurements with known performance guarantees in the static case. However, only a few methods have been proposed to tackle the recovery of time-varying signals, and even fewer benefit from a theoretical analysis. In this paper, we study the capacity of the Iterative Soft-Thresholding Algorithm (ISTA) and its continuous-time analogue, the Locally Competitive Algorithm (LCA), to perform this tracking in real time. ISTA is a well-known digital solver for static sparse recovery whose iteration is a first-order discretization of the LCA differential equation. Our analysis shows that the outputs of both algorithms can track a time-varying signal while compressed measurements are streaming, even when no convergence criterion is imposed at each time step. The L2-distance between the target signal and the outputs of both the discrete- and continuous-time solvers is shown to decay to a bound that is essentially optimal. Our analysis is supported by simulations on both synthetic and real data. Comment: 18 pages, 7 figures, journal
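
    A rough sketch of the streaming setting described above (dimensions, drift rate, and regularization are assumptions, not the paper's experiments): a single ISTA iteration per incoming measurement, with no inner convergence loop, tracks a slowly drifting sparse signal.

```python
import numpy as np

def soft(v, tau):                        # soft-thresholding operator
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(2)
n, m = 100, 40
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
Lc = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
lam = 0.05                                     # l1 regularization weight

x_true = np.zeros(n)
x_true[[3, 17, 60]] = [1.0, -2.0, 1.5]         # sparse support
x = np.zeros(n)                                # ISTA state

for t in range(500):
    x_true[[3, 17, 60]] += 0.002 * rng.standard_normal(3)  # slow drift
    y = A @ x_true + 0.01 * rng.standard_normal(m)         # streaming measurement
    x = soft(x - (A.T @ (A @ x - y)) / Lc, lam / Lc)       # one ISTA step per sample

print(np.round(np.linalg.norm(x - x_true), 3))  # L2 tracking error stays small
```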

    Mathematical control of complex systems

    Copyright © 2013 Zidong Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.