Does money matter in inflation forecasting?
This paper provides the most comprehensive evidence to date on whether monetary aggregates are valuable for forecasting US inflation in the early to mid-2000s. We explore a wide range of definitions of money, including different aggregation methods and different collections of included monetary assets. In our forecasting experiment we use two non-linear techniques that are new to macroeconomics: recurrent neural networks and kernel recursive least squares regression. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite-memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models, which are then compared with forecasts from a naive random walk model. The best models were non-linear autoregressive models based on kernel methods. Our findings provide little support for the usefulness of monetary aggregates in forecasting inflation.
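The finite-memory kernel predictor mentioned above can be illustrated with a minimal sketch. This is the batch (kernel ridge) form of kernel regression rather than the recursive online variant, and the synthetic series, lag structure, and hyperparameters are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian kernel between the row vectors of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam=1e-2, gamma=0.5):
    # closed-form dual coefficients: (K + lam*I)^{-1} y
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy non-linear autoregression: predict pi_t from (pi_{t-1}, pi_{t-2})
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.0, 0.1, 200))      # synthetic "inflation"
X = np.column_stack([series[1:-1], series[:-2]])   # two lagged regressors
y = series[2:]
alpha = fit_kernel_ridge(X[:150], y[:150])
preds = predict(X[:150], alpha, X[150:])
rw_preds = X[150:, 0]                              # naive random-walk benchmark
rmse = np.sqrt(np.mean((preds - y[150:]) ** 2))
```

Comparing `rmse` against the random-walk benchmark's error mirrors the evaluation design the abstract describes.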
Forecasting monthly airline passenger numbers with small datasets using feature engineering and a modified principal component analysis
In this study, a machine learning approach based on time series models, feature engineering, feature extraction, and feature derivation is proposed to improve air passenger forecasting. Different types of datasets were created to extract new features from the core data. An experiment was undertaken with artificial neural networks to test the performance of neurons in the hidden layer, to optimise the dimensions of all layers, and to obtain an optimal choice of connection weights, so that the nonlinear optimisation problem could be solved directly. A method of tuning deep learning models using H2O (a feature-rich, open-source machine learning platform known for its R and Spark integration and its ease of use) is also proposed, where the trained network model is built from samples of selected features from the dataset in order to ensure diversity of the samples and to improve training. A successful application of deep learning requires setting numerous parameters to achieve good model accuracy; the number of hidden layers and the number of neurons in each layer are key parameters of such a network. Grid search and random hyper-parameter search approaches aid in setting these important parameters. Moreover, a new ensemble strategy is suggested that shows potential to optimise parameter settings and hence save computational resources throughout the model tuning process. The main objective, besides improving the performance metric, is to obtain a distribution on hold-out datasets that resembles the original distribution of the training data. Particular attention is given to creating a modified version of Principal Component Analysis (PCA) that uses a different correlation matrix, obtained from a correlation coefficient based on kinetic energy, to derive new features. The data were collected from several airline datasets to build a deep prediction model for forecasting airline passenger numbers.
Preliminary experiments show that fine-tuning provides an efficient approach for choosing the final number of hidden layers and the number of neurons in each layer when compared with the grid search method. Similarly, the results show that the modified version of PCA is more effective in data dimension reduction, class separability, and classification accuracy than traditional PCA.
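The "modified PCA" idea, replacing the usual correlation matrix with one built from a different correlation coefficient, can be sketched as follows. The paper's matrix comes from a kinetic-energy-based coefficient; here a plain Pearson matrix stands in, since the point is only the mechanics of swapping the matrix that gets eigen-decomposed. All data and dimensions are made up for illustration:

```python
import numpy as np

def pca_from_correlation(X, corr_fn, n_components=2):
    # Standardise columns, then eigen-decompose whatever correlation
    # matrix corr_fn produces (Pearson, or a custom coefficient).
    Z = (X - X.mean(0)) / X.std(0)
    C = corr_fn(Z)                        # symmetric (p x p) matrix
    vals, vecs = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]        # largest-variance components first
    W = vecs[:, order[:n_components]]
    return Z @ W, vals[order]

pearson = lambda Z: np.corrcoef(Z, rowvar=False)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)   # inject a correlated feature
scores, eigvals = pca_from_correlation(X, pearson)
print(scores.shape)   # (100, 2)
```

Substituting a different `corr_fn` is the only change needed to reproduce the modified variant once the alternative coefficient is defined.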
Highly-Accurate Electricity Load Estimation via Knowledge Aggregation
Mid-term and long-term electric energy demand prediction is essential for the planning and operation of the smart grid, especially in countries where the power system operates in a deregulated environment. Traditional forecasting models fail to incorporate external knowledge, while modern data-driven models ignore interpretability; moreover, the load series can be influenced by many complex factors, making it difficult to cope with highly unstable and nonlinear power load series. To address the forecasting problem, we propose a more accurate district-level load prediction model based on domain knowledge and the idea of decomposition and ensemble. Its main idea is three-fold: 1) according to the non-stationary characteristics of load time series with obvious cyclicality and periodicity, the series is decomposed into components with actual economic meaning, on which load analysis and forecasting are carried out; 2) Kernel Principal Component Analysis (KPCA) is applied to extract the principal components of the weather and calendar-rule feature sets to achieve dimensionality reduction; 3) giving full play to the advantages of various models based on domain knowledge, we propose a hybrid model (XASXG) based on the Autoregressive Integrated Moving Average model (ARIMA), support vector regression (SVR), and extreme gradient boosting (XGBoost). With these designs, the model accurately forecasts electricity demand despite its highly unstable characteristics. We compared our method with nine benchmark methods, including classical statistical models as well as state-of-the-art machine learning models, on real time series of monthly electricity demand in four Chinese cities. The empirical study shows that the proposed hybrid model is superior to all competitors in terms of accuracy and prediction bias.
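The KPCA step above can be sketched with a self-contained RBF kernel PCA. Feature names, dimensions, and the kernel bandwidth are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def kernel_pca(X, n_components=3, gamma=0.1):
    # RBF kernel matrix over the samples
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # centre the kernel matrix in feature space
    n = len(X)
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    # projected coordinates: eigenvectors scaled by sqrt(eigenvalue)
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

rng = np.random.default_rng(2)
features = rng.normal(size=(60, 8))   # stand-in for weather + calendar columns
Z = kernel_pca(features, n_components=3)
print(Z.shape)   # (60, 3)
```

The compressed coordinates `Z` would then feed the downstream ARIMA/SVR/XGBoost components in a pipeline like the one the abstract outlines.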
A neural network ensemble approach for GDP forecasting
We propose an ensemble learning methodology to forecast the future US GDP growth release. Our approach combines a Recurrent Neural Network (RNN) with a Dynamic Factor model accounting for time-variation in mean with a Generalized Autoregressive Score (DFM-GAS). The analysis is based on a set of predictors encompassing a wide range of variables measured at different frequencies. The forecast exercise is aimed at evaluating the predictive ability of each model component of the ensemble by considering variations in mean, potentially caused by recessions affecting the economy. Thus, we show how the combination of RNN and DFM-GAS improves forecasts of the US GDP growth rate in the aftermath of the 2008-09 global financial crisis. We find that a neural network ensemble markedly reduces the root mean squared error for the short-term forecast horizon.
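The combination step can be sketched as a weighted average of two component forecasts, standing in for the RNN and DFM-GAS outputs, with weights inversely proportional to each model's validation mean squared error. This weighting rule is a common convention and not necessarily the paper's exact scheme; all numbers are illustrative:

```python
import numpy as np

def ensemble_weights(errors_a, errors_b):
    # Weight each model by the inverse of its validation MSE,
    # normalised so the two weights sum to one.
    mse_a = np.mean(np.square(errors_a))
    mse_b = np.mean(np.square(errors_b))
    w_a = (1 / mse_a) / (1 / mse_a + 1 / mse_b)
    return w_a, 1 - w_a

def combine(f_a, f_b, w):
    return w[0] * f_a + w[1] * f_b

# Hypothetical validation errors for the two components
val_err_rnn = np.array([0.3, -0.2, 0.1])
val_err_dfm = np.array([0.6, -0.5, 0.4])
w = ensemble_weights(val_err_rnn, val_err_dfm)
forecast = combine(1.8, 2.2, w)   # two GDP-growth point forecasts (%)
print(round(w[0], 3))   # 0.846
```

The better-performing component (here the RNN stand-in) receives the larger weight, pulling the combined forecast toward it.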