LQG Online Learning
Optimal control theory and machine learning techniques are combined to
formulate and solve in closed form an optimal control formulation of online
learning from supervised examples with regularization of the updates. The
connections with the classical Linear Quadratic Gaussian (LQG) optimal control
problem, of which the proposed learning paradigm is a non-trivial variation as
it involves random matrices, are investigated. The obtained optimal solutions
are compared with the Kalman-filter estimate of the parameter vector to be
learned. It is shown that the proposed algorithm is less sensitive to outliers
than the Kalman estimate (thanks to the presence of the regularization term),
thus providing estimates that vary more smoothly over time.
The basic formulation of the proposed online-learning framework refers to a
discrete-time setting with a finite learning horizon and a linear model.
Various extensions are investigated, including the infinite learning horizon
and, via the so-called "kernel trick", the case of nonlinear models.
Subjects: Optimization and Control (math.OC)
Cite as: arXiv:1606.04272 [math.OC]
(or arXiv:1606.04272v2 [math.OC] for this version)
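The comparison described in the abstract can be illustrated with a toy experiment. The sketch below is not the paper's closed-form LQG solution: it runs a standard Kalman filter estimating a constant parameter vector from noisy linear observations, next to a crudely damped variant in which an extra `reg` term in the gain denominator stands in for the regularization of the updates. All names, noise levels, and the outlier schedule are assumptions of this sketch.

```python
import numpy as np

# Toy setup: scalar observations y_t = x_t^T w + v_t, with w a constant
# parameter vector to be learned online. The "random matrices" of the
# paper arise here because the regressors x_t are random.
d, T = 3, 200
w_true = np.array([1.0, -2.0, 0.5])
R = 0.1  # observation-noise variance (illustrative value)

def gain(P, x, r, reg):
    """Kalman gain; reg > 0 damps the update, mimicking regularization."""
    return P @ x / (x @ P @ x + r + reg)

def run(reg):
    rng = np.random.default_rng(0)   # same data stream for both runs
    w_hat, P = np.zeros(d), np.eye(d)
    traj = []
    for t in range(T):
        x = rng.normal(size=d)
        y = x @ w_true + rng.normal(scale=np.sqrt(R))
        if t == 60:                  # one large outlier in the labels
            y += 15.0
        K = gain(P, x, R, reg)
        w_hat = w_hat + K * (y - x @ w_hat)
        P = P - np.outer(K, x) @ P   # covariance update
        traj.append(w_hat.copy())
    return np.array(traj)

kalman = run(reg=0.0)   # plain Kalman-filter estimate
damped = run(reg=5.0)   # smaller gains, hence smoother updates
```

In the paper the regularized gains come out of an LQG-type optimal control problem solved in closed form; here the damping is purely heuristic, but it reproduces the qualitative effect the abstract describes: a smaller gain reacts less sharply to the outlier at the cost of slower adaptation.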
Online learning as an LQG optimal control problem with random matrices
In this paper, we combine optimal control theory and machine learning techniques to propose and solve an optimal control formulation of online learning from supervised examples, which are used to learn an unknown vector parameter modeling the relationship between the input examples and their outputs. We show some connections of the problem investigated with the classical LQG optimal control problem, of which the proposed problem is a non-trivial variation, as it involves random matrices. We also compare the optimal solution to the proposed problem with the Kalman-filter estimate of the parameter vector to be learned, demonstrating its larger smoothness and robustness to outliers. Extensions of the proposed online-learning framework are mentioned at the end of the paper.