Online Learning for Time Series Prediction
In this paper we address the problem of predicting a time series using the
ARMA (autoregressive moving average) model, under minimal assumptions on the
noise terms. Using regret minimization techniques, we develop effective online
learning algorithms for the prediction problem, without assuming that the noise
terms are Gaussian, identically distributed or even independent. Furthermore,
we show that our algorithm's performance asymptotically approaches the
performance of the best ARMA model in hindsight.
Comment: 17 pages, 6 figures
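The regret-minimization scheme described above can be sketched as an online gradient update on the coefficients of an autoregressive predictor, a common improper-learning approach to ARMA prediction. The function name, the depth `p`, and the normalized step size below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def online_arma_predict(series, p=5, lr=0.5):
    """Improper online learning of an ARMA-style predictor: fit an AR(p)
    model with normalized online gradient steps on the squared loss.
    Illustrative sketch; no Gaussian or i.i.d. assumption on the noise."""
    w = np.zeros(p)                          # AR coefficients, learned online
    preds = []
    for t in range(p, len(series)):
        x = series[t - p:t][::-1]            # the p most recent observations
        y_hat = w @ x                        # one-step-ahead prediction
        preds.append(y_hat)
        err = y_hat - series[t]              # loss gradient after observing y_t
        w -= lr * err * x / (1.0 + x @ x)    # normalized step for stability
    return np.array(preds), w
```

On data with autoregressive structure, the learned predictor's average squared error drops well below that of the trivial zero forecast as the coefficients adapt online.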
On-Line Learning of Linear Dynamical Systems: Exponential Forgetting in Kalman Filters
The Kalman filter is a key tool for time-series forecasting and analysis. We
show that the dependence of the Kalman filter's predictions on the past decays
exponentially whenever the process noise is non-degenerate. Therefore, the
Kalman filter may be approximated by regression on a few recent observations.
Surprisingly, we also show that having some process noise is essential for the
exponential decay. With no process noise, it may happen that the forecast
depends on all of the past uniformly, which makes forecasting more difficult.
Based on this insight, we devise an on-line algorithm for improper learning of
a linear dynamical system (LDS) that considers only a few of the most recent
observations. We use our decay results to provide the first regret bounds
w.r.t. Kalman filters in learning an LDS; that is, we compare the results of
our algorithm to the best Kalman filter, in hindsight, for a given signal.
The algorithm is also practical: its per-update run-time is linear in the
regression depth.
Aggregation of predictors for nonstationary sub-linear processes and online adaptive forecasting of time varying autoregressive processes
In this work, we study the problem of aggregating a finite number of
predictors for nonstationary sub-linear processes. We provide oracle
inequalities relying essentially on three ingredients: (1) a uniform bound of
the norm of the time varying sub-linear coefficients, (2) a Lipschitz
assumption on the predictors and (3) moment conditions on the noise appearing
in the linear representation. Two kinds of aggregations are considered giving
rise to different moment conditions on the noise and more or less sharp oracle
inequalities. We apply this approach for deriving an adaptive predictor for
locally stationary time varying autoregressive (TVAR) processes. It is obtained
by aggregating a finite number of well chosen predictors, each of them enjoying
an optimal minimax convergence rate under specific smoothness conditions on the
TVAR coefficients. We show that the obtained aggregated predictor achieves a
minimax rate while adapting to the unknown smoothness. To prove this result, a
lower bound is established for the minimax rate of the prediction risk for the
TVAR process. Numerical experiments complete this study. An important feature
of this approach is that the aggregated predictor can be computed recursively
and is thus applicable in an online prediction context.
Comment: Published at http://dx.doi.org/10.1214/15-AOS1345 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
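One generic form of such an aggregation is an exponentially weighted average of expert forecasts, with weights updated recursively at each time step, which is what makes online computation possible. The paper's two aggregation schemes and their moment conditions differ in detail; the sketch below, including the learning rate `eta`, is illustrative:

```python
import numpy as np

def aggregate_predictions(expert_preds, y, eta=1.0):
    """Exponentially weighted aggregation of a finite set of predictors.
    expert_preds: array of shape (n_experts, T); y: targets of length T.
    Illustrative sketch, not the paper's exact weighting scheme."""
    n_experts, T = expert_preds.shape
    w = np.full(n_experts, 1.0 / n_experts)   # uniform prior weights
    agg = np.empty(T)
    for t in range(T):
        agg[t] = w @ expert_preds[:, t]       # weighted forecast for time t
        losses = (expert_preds[:, t] - y[t]) ** 2
        w *= np.exp(-eta * losses)            # downweight poor experts
        w /= w.sum()                          # renormalize recursively
    return agg
```

Because the weights depend only on past losses, the aggregated forecast for time t is computed before y[t] is revealed, matching the online prediction setting.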
Online Dynamics Learning for Predictive Control with an Application to Aerial Robots
In this work, we consider the task of improving the accuracy of dynamic
models for model predictive control (MPC) in an online setting. Even though
prediction models can be learned and applied to model-based controllers, these
models are often learned offline. In this offline setting, training data is
first collected and a prediction model is learned through an elaborate
training procedure. After the model is trained to a desired accuracy, it is
then deployed in a model predictive controller. However, since the model is
learned offline, it does not adapt to disturbances or model errors observed
during deployment. To improve the adaptiveness of the model and the controller,
we propose an online dynamics learning framework that continually improves the
accuracy of the dynamic model during deployment. We adopt knowledge-based
neural ordinary differential equations (KNODE) as the dynamic models, and use
techniques inspired by transfer learning to continually improve the model
accuracy. We demonstrate the efficacy of our framework with a quadrotor robot,
and verify the framework in both simulations and physical experiments. Results
show that the proposed approach is able to account for disturbances that are
possibly time-varying, while maintaining good trajectory tracking performance.
Comment: 8 pages, 4 figures
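The continual-update idea can be sketched as a nominal (knowledge-based) model plus a residual correction term fitted online from streaming data. Here a double integrator stands in for the quadrotor physics and a constant residual vector stands in for the KNODE network; all names, dynamics, and the update rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def nominal_dynamics(x, u, dt=0.05):
    """Nominal model: a double integrator (stand-in for the real physics)."""
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

class OnlineResidualModel:
    """Continually corrects the nominal model from observed transitions,
    in the spirit of online dynamics learning for MPC (illustrative)."""
    def __init__(self, dim=2, lr=0.1):
        self.theta = np.zeros(dim)          # residual parameters, updated online
        self.lr = lr

    def predict(self, x, u):
        # Nominal prediction plus learned residual correction.
        return nominal_dynamics(x, u) + self.theta

    def update(self, x, u, x_next):
        err = self.predict(x, u) - x_next   # one-step prediction error
        self.theta -= self.lr * err         # gradient step on squared error
```

Running the update during deployment lets the model absorb a persistent disturbance the nominal model misses, so the controller's predictions stay accurate without offline retraining.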