
    On particle filters applied to electricity load forecasting

    We are interested in the online prediction of electricity load, within the Bayesian framework of dynamic models. We offer a review of sequential Monte Carlo methods and provide the calculations needed for the derivation of so-called particle filters. We also discuss the practical issues arising from their use and some of the variants proposed in the literature to deal with them, giving detailed algorithms whenever possible for easy implementation. We propose an additional step to help make basic particle filters more robust with regard to outlying observations. Finally, we use such a particle filter to estimate a state-space model that includes exogenous variables in order to forecast the electricity load for the customers of the French electricity company Électricité de France, and we discuss the various results obtained.
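    As a hedged illustration of the sequential Monte Carlo recursion this abstract reviews, the sketch below implements a basic bootstrap particle filter (propagate, weight, resample) on a toy random-walk model. The model and all parameter values are illustrative assumptions, not the paper's load-forecasting state-space model or its robustness step.

```python
# Minimal bootstrap particle filter sketch on an assumed toy model:
# x_t = x_{t-1} + N(0, state_std^2), y_t = x_t + N(0, obs_std^2).
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, n_particles=1000, state_std=0.1, obs_std=0.5):
    """Return filtered state means for observations y."""
    particles = rng.normal(0.0, 1.0, n_particles)  # prior draw for x_0
    means = []
    for obs in y:
        # Propagate: sample each particle from the state transition density.
        particles = particles + rng.normal(0.0, state_std, n_particles)
        # Weight: Gaussian likelihood of the observation under each particle.
        log_w = -0.5 * ((obs - particles) / obs_std) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Resample (multinomial) to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# Usage: track a slowly drifting signal from noisy measurements.
true_x = np.cumsum(rng.normal(0.0, 0.1, 200))
y = true_x + rng.normal(0.0, 0.5, 200)
print(bootstrap_particle_filter(y)[-5:])
```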

    Linear State Models for Volatility Estimation and Prediction

    This report covers the important topic of stochastic volatility modelling, with an emphasis on linear state models. The approach taken focuses on comparing models based on their ability to fit the data and on their forecasting performance. To this end, several parsimonious stochastic volatility models are estimated using realised volatility, a volatility proxy computed from high-frequency stock price data. The results indicate that a hidden state-space model performs best among the realised-volatility-based models under consideration. For the state-space model, different sampling intervals are compared based on in-sample prediction performance. The comparisons are partly based on the multi-period prediction results derived in this report.
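    For orientation, here is a minimal sketch of the kind of linear state model such comparisons involve: an AR(1) latent log-volatility observed through a noisy proxy (e.g. log realised volatility), filtered with the standard Kalman recursion. The parameters phi, q, and r are placeholder values, not estimates from the report, and the multi-period prediction helper simply iterates the assumed transition.

```python
# Hedged Kalman filter sketch for an assumed AR(1) linear state model:
# x_t = phi * x_{t-1} + N(0, q), y_t = x_t + N(0, r).
import numpy as np

def kalman_filter_ar1(y, phi=0.95, q=0.02, r=0.1, m0=0.0, p0=1.0):
    """Return filtered means and one-step-ahead predicted means."""
    m, p = m0, p0
    filtered, predictions = [], []
    for obs in y:
        # Predict one step ahead from the AR(1) transition.
        m_pred, p_pred = phi * m, phi**2 * p + q
        predictions.append(m_pred)
        # Update with the new observation.
        k = p_pred / (p_pred + r)      # Kalman gain
        m = m_pred + k * (obs - m_pred)
        p = (1.0 - k) * p_pred
        filtered.append(m)
    return np.array(filtered), np.array(predictions)

def predict_h_steps(m_last, phi=0.95, h=5):
    """Multi-period prediction: iterate the transition h steps ahead."""
    return np.array([phi**k * m_last for k in range(1, h + 1)])

# Usage on simulated log realised volatility.
rng = np.random.default_rng(1)
y = np.log(np.abs(rng.normal(0.0, 1.0, 100)) + 0.1)
filt, pred = kalman_filter_ar1(y)
print(predict_h_steps(filt[-1]))
```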

    Predictive-State Decoders: Encoding the Future into Recurrent Networks

    Recurrent neural networks (RNNs) are a vital modeling technique that relies on internal states learned indirectly by optimization of a supervised, unsupervised, or reinforcement training loss. RNNs are used to model dynamic processes characterized by underlying latent states whose form is often unknown, precluding its analytic representation inside an RNN. In the Predictive-State Representation (PSR) literature, latent state processes are modeled by an internal state representation that directly models the distribution of future observations, and most recent work in this area has relied on explicitly representing and targeting sufficient statistics of this probability distribution. We seek to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations. Predictive-State Decoders are simple to implement and easily incorporated into existing training pipelines via additional loss regularization. We demonstrate the effectiveness of PSDs with experimental results in three different domains: probabilistic filtering, imitation learning, and reinforcement learning. In each, our method improves the statistical performance of state-of-the-art recurrent baselines and does so with fewer iterations and less data. (Comment: NIPS 2017)
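    The sketch below illustrates the core PSD idea as the abstract describes it: a decoder head maps the RNN's internal state to the next few observations, and its prediction error is added to the task loss as a regularizer. The GRU architecture, the dimensions, and the regularization weight 0.5 are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a recurrent network with a Predictive-State Decoder:
# an auxiliary head predicts the next `horizon` observations from the
# hidden state, and its MSE is added to the task loss.
import torch
import torch.nn as nn

class RNNWithPSD(nn.Module):
    def __init__(self, obs_dim=8, hidden_dim=64, horizon=3):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.task_head = nn.Linear(hidden_dim, obs_dim)              # main task output
        self.psd_decoder = nn.Linear(hidden_dim, horizon * obs_dim)  # future observations
        self.horizon = horizon

    def forward(self, x):
        h, _ = self.rnn(x)  # (batch, time, hidden)
        return self.task_head(h), self.psd_decoder(h)

def psd_loss(future_pred, obs, horizon):
    # For each time step t, the target is the next `horizon` observations stacked.
    batch, time, _ = obs.shape
    usable = time - horizon
    targets = torch.stack(
        [obs[:, t + 1 : t + 1 + horizon].reshape(batch, -1) for t in range(usable)],
        dim=1,
    )
    return nn.functional.mse_loss(future_pred[:, :usable], targets)

# Usage: one training step with the PSD term as loss regularization.
model = RNNWithPSD()
x = torch.randn(4, 20, 8)
task_out, future_pred = model(x)
task = nn.functional.mse_loss(task_out[:, :-1], x[:, 1:])  # toy next-step task
loss = task + 0.5 * psd_loss(future_pred, x, model.horizon)
loss.backward()
```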