Universal piecewise linear least squares prediction

Abstract: We consider the problem of sequential prediction of real-valued sequences using piecewise linear models under the square-error loss function. In this context, we demonstrate a sequential algorithm for prediction whose accumulated squared error for every bounded sequence is asymptotically as small as that of the best fixed predictor for that sequence taken from the class of piecewise linear predictors. We also show that this predictor is optimal in certain settings in a particular min-max sense. This approach can also be applied to the class of piecewise constant predictors, for which a similar universal sequential algorithm can be derived with corresponding min-max optimality.

I. Summary

In this paper, we consider the problem of predicting a sequence $x^n = \{x[t]\}_{t=1}^{n}$ as well as the best predictor in a large, continuous class of piecewise linear predictors. The real-valued sequence $x^n$ is assumed to be bounded, i.e., $|x[t]| \le A$ for some $A < \infty$, for all $t$. Rather than assuming a statistical ensemble of sequences and attempting to achieve optimal performance according to some statistical criterion, our goal is to predict any sequence $x^n$ as well as the best predictor out of a large class of predictors.

We first consider the class of fixed scalar piecewise linear predictors as our competition class. For a scalar piecewise linear predictor, the past observation space $x[t-1] \in [-A, A]$ is parsed into $K$ disjoint regions $R_j$, where $\bigcup_{j=1}^{K} R_j = [-A, A]$. At each time $t$, the competing predictor forms its prediction as $\hat{x}_{w_j}[t] = w_j \, x[t-1]$, $w_j \in \mathbb{R}$, when $x[t-1] \in R_j$. We assume that the number of regions and the region boundaries are known. Here, we seek to minimize the following regret:

$$\sup_{x^n} \left\{ \sum_{t=1}^{n} \big(x[t] - \hat{x}_q[t]\big)^2 \;-\; \inf_{w_1,\ldots,w_K \in \mathbb{R}} \sum_{t=1}^{n} \big(x[t] - \hat{x}_{\mathbf{w}}[t]\big)^2 \right\},$$

where $\hat{x}_q[t]$ denotes the prediction of the sequential algorithm.