
LQG Online Learning

Abstract

Optimal control theory and machine learning techniques are combined to formulate online learning from supervised examples with regularization of the updates as an optimal control problem, which is then solved in closed form. The connections with the classical Linear Quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a non-trivial variation because it involves random matrices, are investigated. The resulting optimal solutions are compared with the Kalman-filter estimate of the parameter vector to be learned. It is shown that, thanks to the regularization term, the proposed algorithm is less sensitive to outliers than the Kalman estimate, and therefore produces estimates that vary more smoothly over time. The basic formulation of the proposed online-learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Several extensions are investigated, including the infinite learning horizon and, via the so-called "kernel trick", the case of nonlinear models.
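To make the comparison in the abstract concrete, the following sketch contrasts a standard Kalman-filter (recursive least squares) estimate of a constant parameter vector in a linear model with a simple regularized online update that penalizes the step size at each iteration. This is an illustrative stand-in, not the paper's actual LQG-derived algorithm: the regularization weight mu, the injected outlier, and the closed-form damped correction are all assumptions made for the sake of the example.

    import numpy as np

    rng = np.random.default_rng(0)

    d, T = 3, 200
    w_true = rng.normal(size=d)      # parameter vector to be learned
    r = 0.1                          # observation-noise variance

    # Kalman-filter estimate of a constant parameter (equivalent to RLS here)
    w_kf = np.zeros(d)
    P = np.eye(d)                    # covariance of the Kalman estimate

    # Regularized online update: each step solves
    #   min_w (y - x^T w)^2 + mu * ||w - w_prev||^2,
    # whose closed form damps the correction applied to the estimate
    # (an illustrative proxy for regularized updates, chosen here as an
    # assumption; the paper derives its own closed-form LQG solution)
    mu = 5.0
    w_reg = np.zeros(d)

    for t in range(T):
        x = rng.normal(size=d)       # random regressor ("random matrices")
        y = x @ w_true + np.sqrt(r) * rng.normal()
        if t == 100:                 # inject a single outlier observation
            y += 20.0

        # --- Kalman / RLS update ---
        k = P @ x / (x @ P @ x + r)  # Kalman gain
        w_kf = w_kf + k * (y - x @ w_kf)
        P = P - np.outer(k, x) @ P

        # --- regularized update: damped correction, smaller reaction to outliers ---
        w_reg = w_reg + x * (y - x @ w_reg) / (mu + x @ x)

    print("Kalman estimate error:     ", np.linalg.norm(w_kf - w_true))
    print("Regularized estimate error:", np.linalg.norm(w_reg - w_true))

Running the sketch, the Kalman estimate reacts sharply to the outlier at t = 100 while the regularized update absorbs it gradually, which mirrors the smoothness claim in the abstract; larger values of the (assumed) weight mu trade tracking speed for smoother trajectories.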
