84 research outputs found

    Improving wind power forecasts: combination through multivariate dimension reduction techniques

    Wind energy and wind power forecast errors have a direct impact on operational decision problems involved in the integration of this form of energy into the electricity system. As the relationship between wind and the generated power is highly nonlinear and time-varying, and given the increasing number of available forecasting techniques, it is possible to use alternative models to obtain more than one prediction for the same hour and forecast horizon. To increase forecast accuracy, the different predictions can be combined into a better one, or the best one can be dynamically selected in each time period. Hybrid alternatives based on combining a few selected forecasts can be considered when the number of models is large. One of the most popular ways to combine forecasts is to estimate the coefficients of each prediction model based on its past forecast errors. As an alternative, we propose using multivariate reduction techniques and Markov chain models to combine forecasts, so the combination is not directly based on the forecast errors. We show that the proposed combination strategies based on dimension reduction techniques provide competitive forecasting results in terms of the Mean Square Error. The second author, Pilar Poncela, acknowledges financial support from the Spanish Government, Ministry of Science, contract grant PID2019-108079GB-C22/AEI/10.13039/50110001103.
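
    A minimal sketch of the idea of combining forecasts through dimension reduction (here plain PCA on the panel of competing forecasts, not the paper's exact estimators; all data below are simulated):

```python
import numpy as np

def pca_combine(forecasts):
    """Combine competing forecasts via their first principal component.

    forecasts: (T, M) array -- T periods, M alternative model forecasts.
    Returns a length-T combined forecast on the original scale.
    Illustrative only: the paper's multivariate reduction techniques differ.
    """
    X = forecasts - forecasts.mean(axis=0)           # centre each model's track record
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    w = vt[0]                                        # loadings of the first PC
    # rescale loadings to sum to one so the combination stays on the data scale
    w = w / w.sum() if abs(w.sum()) > 1e-12 else np.full(len(w), 1.0 / len(w))
    return forecasts @ w

# simulated example: three models tracking the same target with different noise
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(size=200))
preds = np.column_stack([truth + rng.normal(0, s, 200) for s in (0.5, 1.0, 2.0)])
combined = pca_combine(preds)
mse = ((combined - truth) ** 2).mean()
```

    Note that, unlike error-based weighting, the weights here come only from the cross-sectional structure of the forecasts themselves, which is the spirit of the approach described above.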

    Forecasting monthly US consumer price indexes through a disaggregated I(2) analysis

    In this paper we carry out a disaggregated study of the monthly US Consumer Price Index (CPI). We consider a breakdown of the US CPI into four subindexes, corresponding to four groups of markets: energy, food, rest of commodities and rest of services. This is a relevant way to increase information when forecasting the US CPI, because supplies and demands in those markets have very different characteristics. Consumer prices in the last three components show I(2) behavior, while the energy subindex shows a lower order of integration, but with segmentation in the growth rate. Even restricting the analysis to the series that show the same order of integration, the trending behavior of prices in these markets can be very different. An I(2) cointegration analysis on the last three components shows that there are several sources of nonstationarity in the US CPI components. A common trend analysis based on dynamic factor models confirms these results. The different trending behavior in the market prices suggests that theories of price determination could differ across markets. In this context, disaggregation could help to improve forecasting accuracy. To show that this conjecture is valid for the non-energy US CPI, we have performed a forecasting exercise for each component, computed afterwards the aggregated value of the non-energy US CPI and compared it with the forecasts obtained directly from a model for the aggregate. The improvement in one-year-ahead forecasts with the disaggregated approach is more than 20%, where the root mean squared error is employed as the measure of forecasting performance.
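
    The disaggregated-versus-direct comparison can be sketched as follows. The component series, weights and the AR(1)-in-differences forecaster below are all illustrative stand-ins (the paper fits I(2) models to actual US CPI subindexes):

```python
import numpy as np

def ar1_diff_forecast(series, h=12):
    """h-step forecast from an AR(1) fitted to first differences.

    A simple stand-in for the paper's component models, not the models themselves.
    """
    d = np.diff(series)
    X = np.column_stack([np.ones(len(d) - 1), d[:-1]])
    c, phi = np.linalg.lstsq(X, d[1:], rcond=None)[0]
    path, last = [], d[-1]
    for _ in range(h):
        last = c + phi * last                # iterate the fitted difference equation
        path.append(last)
    return series[-1] + np.cumsum(path)      # integrate back to levels

rng = np.random.default_rng(1)
T, h = 240, 12
# three simulated subindexes with different drifts (assumed data, not US CPI)
comps = [np.cumsum(rng.normal(mu, 0.3, T + h)) for mu in (0.1, 0.25, 0.05)]
weights = np.array([0.2, 0.5, 0.3])          # hypothetical expenditure weights

agg = sum(w * c for w, c in zip(weights, comps))
direct = ar1_diff_forecast(agg[:T], h)                                   # model the aggregate
bottom_up = sum(w * ar1_diff_forecast(c[:T], h) for w, c in zip(weights, comps))

rmse = lambda f, y: np.sqrt(np.mean((f - y) ** 2))
err_direct = rmse(direct, agg[T:])
err_disagg = rmse(bottom_up, agg[T:])
```

    Because each component gets its own fitted dynamics, the bottom-up forecast generally differs from the direct one; the paper's point is that this difference can translate into sizable accuracy gains.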

    Eigenstructure of nonstationary factor models

    In this paper we present a generalized dynamic factor model for a vector of time series, which seems to provide a general framework to incorporate all the common information included in a collection of variables. The common dynamic structure is explained through a set of common factors, which may be stationary or nonstationary, as in the case of common trends. Also, a specific structure may exist for each variable. Identification of the nonstationary I(d) factors is made through the common eigenstructure of the generalized covariance matrices, properly normalized. The number of common trends, or in general I(d) factors, is the number of nonzero eigenvalues of the above matrices. It is also proved that these nonzero eigenvalues are strictly greater than zero almost surely. Randomness appears in the eigenvalues as well as the eigenvectors, but not in the subspace spanned by the eigenvectors.

    Forecasting with nonstationary dynamic factor models

    In this paper we analyze the structure and the forecasting performance of the dynamic factor model. It is shown that the forecasts obtained by the factor model imply shrinkage pooling terms, similar to the ones obtained from hierarchical Bayesian models that have been applied successfully in the econometric literature. Thus, the results obtained in this paper provide an additional justification for these and other types of pooling procedures. The expected decrease in MSE from using a factor model versus univariate ARIMA models, shrinkage univariate models or vector ARMA models is studied for the one-factor model. It is proved that substantial gains can be obtained in some cases with respect to univariate forecasting. Monte Carlo simulations are presented to illustrate this result. A factor model is built to forecast the GNP of European countries, and it is shown that the factor model provides better forecasts than both univariate and shrinkage univariate forecasts.
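
    The flavor of the shrinkage pooling terms can be conveyed with a toy rule that pulls each series' own forecast toward the cross-sectional mean (the factor model implies data-determined shrinkage weights; the fixed weight below is purely illustrative):

```python
import numpy as np

def shrinkage_pool(univariate_forecasts, lam=0.5):
    """Pull each series' univariate forecast toward the cross-sectional mean.

    lam = 0 keeps the univariate forecasts; lam = 1 uses only the pooled mean.
    Illustrative fixed weight -- the factor model determines lam from the data.
    """
    f = np.asarray(univariate_forecasts, dtype=float)
    return (1 - lam) * f + lam * f.mean()

pooled = shrinkage_pool([1.0, 2.0, 6.0], lam=0.5)   # mean is 3.0
# -> array([2.0, 2.5, 4.5]): each forecast moves halfway toward the mean
```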

    Pooling information and forecasting with dynamic factor analysis

    In this paper, we present a generalized dynamic factor model for a vector of time series, which seems to provide a general framework to incorporate all the common information included in a collection of variables. The common dynamic structure is explained through a set of common factors, which may be stationary, or nonstationary as in the case of common trends. Also, a specific structure may exist for each variable. Identification of the nonstationary factors is made through the common eigenstructure of the lagged covariance matrices. Estimation of the model is carried out in state space form with the EM algorithm, where the Kalman filter is used to estimate the factors, or unobservable variables. It is shown that this approach implies, as particular cases, many pooled forecasting procedures suggested in the literature. In particular, it offers an explanation for the empirical fact that the forecasting performance of a time series vector is improved when the overall mean is incorporated into the forecast equation for each component.
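
    The Kalman filtering step for a single AR(1) factor can be sketched as below. This is only the filter for known parameters; the paper wraps it in an EM loop that also estimates the loadings, the autoregressive coefficient and the variances (all values here are simulated and assumed):

```python
import numpy as np

def kalman_factor(X, lam, phi, sig_e=1.0, sig_eta=1.0):
    """Filtered estimates of one common factor f_t.

    Model: x_t = lam * f_t + e_t,   f_t = phi * f_{t-1} + eta_t.
    Minimal sketch with known parameters; EM estimation is omitted.
    """
    T, N = X.shape
    f, P = 0.0, 1.0                                   # prior mean/variance of the factor
    out = np.empty(T)
    R = sig_e * np.eye(N)                             # measurement noise covariance
    for t in range(T):
        f_pred, P_pred = phi * f, phi**2 * P + sig_eta      # predict
        S = P_pred * np.outer(lam, lam) + R                 # innovation covariance
        K = P_pred * np.linalg.solve(S, lam)                # Kalman gain
        f = f_pred + K @ (X[t] - lam * f_pred)              # update mean
        P = P_pred * (1.0 - K @ lam)                        # update variance
        out[t] = f
    return out

# simulate an AR(1) factor observed through three noisy series
rng = np.random.default_rng(3)
T = 300
factor = np.empty(T); factor[0] = rng.normal()
for t in range(1, T):
    factor[t] = 0.8 * factor[t - 1] + rng.normal()
lam = np.array([1.0, 0.7, 1.3])                       # hypothetical loadings
X = factor[:, None] * lam + rng.normal(size=(T, 3))
fhat = kalman_factor(X, lam, 0.8)
corr = np.corrcoef(fhat, factor)[0, 1]                # filtered vs. true factor
```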

    A two factor model to combine US inflation forecasts

    The combination of individual forecasts is often a useful tool to improve forecast accuracy. The most commonly used technique for forecast combination is the mean, and it has frequently proven hard to beat. This paper considers factor analysis to combine US inflation forecasts, showing that just one factor is not enough to beat the mean and that a second one is necessary. The first factor is usually a weighted mean of the variables and can be interpreted as a consensus forecast, while the second factor generally captures the differences among the variables and, since our observations are forecasts, may be related to the dispersion in forecasting expectations and, in a sense, to their uncertainty. Within this approach, the paper also revisits Friedman's hypothesis relating the level of inflation to uncertainty in expectations at the beginning of the 21st century.

    Sparse partial least squares in time series for macroeconomic forecasting

    Factor models have been applied extensively for forecasting when high dimensional datasets are available. In this case, the number of variables can be very large. For instance, usual dynamic factor models in central banks handle over 100 variables. However, there is a growing body of literature indicating that more variables do not necessarily lead to estimated factors with lower uncertainty or better forecasting results. This paper investigates the usefulness of partial least squares techniques, which take into account the variable to be forecasted when reducing the dimension of the problem from a large number of variables to a smaller number of factors. We propose different approaches to dynamic sparse partial least squares as a means of improving forecast efficiency by simultaneously taking into account the variable to be forecasted while forming an informative subset of predictors, instead of using all the available ones to extract the factors. We use the well-known Stock and Watson database to check the forecasting performance of our approach. The proposed dynamic sparse models perform well in improving efficiency compared to widely used factor methods in macroeconomic forecasting. Pilar Poncela and Julio RodrĂ­guez acknowledge financial support from the Spanish Ministry of Education, contract grant ECO2009-1028.
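
    A one-component sparse PLS sketch: the PLS weight vector (covariances of the predictors with the target) is thresholded so weakly related predictors drop out before the factor is formed. This is a deliberately simplified static version; the paper's dynamic variants add lags and further components:

```python
import numpy as np

def spls1(X, y, thresh=0.3):
    """One-component sparse PLS via hard-thresholded PLS weights (simplified)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc                        # PLS direction: covariances with the target
    w = w / np.abs(w).max()              # scale so the largest weight is 1
    w[np.abs(w) < thresh] = 0.0          # discard weakly related predictors
    if np.all(w == 0):
        return w, np.full_like(y, y.mean())
    t = Xc @ w                           # sparse score (the supervised "factor")
    beta = (t @ yc) / (t @ t)            # regress the target on the score
    return w, y.mean() + beta * t

# simulated data where only two of ten predictors matter
rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n)
w, fit = spls1(X, y)
```

    Unlike principal components, the direction here is built from covariances with the target, so irrelevant predictors get (near-)zero weight instead of leaking into the factor.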

    Measuring uncertainty and assessing its predictive power in the euro area

    Expectations and uncertainty play a key role in economic behavior. This paper deals with both expectations and uncertainty derived from the European Central Bank Survey of Professional Forecasters. Given the strong turbulence observed in euro area macroeconomic indicators since 2007, the aim of the paper is to check whether there is any room for improving the accuracy of the consensus forecast for GDP growth and inflation when accounting for uncertainty. We propose a new measure of uncertainty, alternative to the ad hoc equal weights commonly used, based on principal components. We test the role of uncertainty in forecasting macroeconomic performance in the euro area between 2005 and 2015. We also check the role of surprises in the considered forecasting sample. Financial support from the Spanish Ministry of Economy and Competitiveness (Ministerio de EconomĂ­a y Competitividad), project numbers ECO2015-70331-C2-1-R, ECO2015-66593-P and ECO2014-56676C2-2-P, and from Universidad de AlcalĂĄ is acknowledged.
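
    One hypothetical way to build a PC-based disagreement measure, in the spirit of replacing ad hoc equal weights: take the first principal component of panelists' absolute deviations from the consensus. The construction and the simulated panel below are assumptions for illustration, not the paper's exact measure:

```python
import numpy as np

def pc_uncertainty(panel):
    """Uncertainty proxy: first PC of absolute deviations from the consensus.

    panel: (T, M) forecasts from M panelists over T survey rounds.
    Hypothetical construction; the paper derives its PC-based measure
    from the ECB Survey of Professional Forecasters.
    """
    dev = np.abs(panel - panel.mean(axis=1, keepdims=True))   # disagreement per panelist
    D = dev - dev.mean(axis=0)
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    w = vt[0]
    if w.sum() < 0:
        w = -w                            # fix the sign so higher = more disagreement
    return dev @ w

# simulated panel: a calm regime followed by a turbulent one
rng = np.random.default_rng(5)
T, M = 80, 12
spread = np.concatenate([np.full(40, 0.3), np.full(40, 1.5)])
panel = rng.normal(0.0, spread[:, None], size=(T, M))
u = pc_uncertainty(panel)                 # should rise in the turbulent half
```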

    Selecting and combining experts from survey forecasts

    Combining multiple forecasts provides gains in prediction accuracy. Therefore, with the aim of finding an optimal weighting scheme, several combination techniques have been proposed in the forecasting literature. In this paper we propose the use of sparse partial least squares (SPLS) as a method to combine selected individual forecasts from economic surveys. SPLS chooses the forecasters with more predictive power about the target variable, discarding the panelists with redundant information. We employ the Survey of Professional Forecasters dataset to explore the performance of different methods for combining forecasts: average forecasts, trimmed mean, regression-based methods and regularized regression methods. The results show that selecting and combining forecasts yields improvements in forecasting accuracy compared to the hard-to-beat average of forecasters.
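
    One of the benchmark schemes mentioned above, the cross-sectional trimmed mean, is simple to state in code (the numbers are a made-up panel round):

```python
import numpy as np

def trimmed_mean(forecasts, trim=0.2):
    """Drop the top and bottom `trim` share of panelists, then average.

    One of the combination benchmarks compared in the paper; robust to
    outlying panelists that distort the plain average.
    """
    f = np.sort(np.asarray(forecasts, dtype=float))
    k = int(np.floor(trim * len(f)))
    return f[k:len(f) - k].mean()

# an outlier of 9.0 is dropped along with the lowest forecast
combo = trimmed_mean([1.8, 2.0, 2.1, 2.2, 9.0], trim=0.2)   # -> 2.1
```

    The plain average of this panel is 3.42, pulled up by the single outlier; trimming restores a value close to the bulk of the panel.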

    A fragmented-periodogram approach for clustering big data time series

    We propose and study a new frequency-domain procedure for characterizing and comparing large sets of long time series. Instead of using all the information available from the data, which would be computationally very expensive, we propose some regularization rules in order to select and summarize the most relevant information for clustering purposes. Essentially, we suggest using a fragmented periodogram computed around the driving cyclical components of interest and comparing the various estimates. This procedure is computationally simple but able to condense relevant information of the time series. A simulation exercise shows that the smoothed fragmented periodogram works in general better than the non-smoothed one, and no worse than the complete periodogram for medium to large sample sizes. We illustrate this procedure in a study of the evolution of several stock market indices. We further show the effect of recent financial crises on these indices' behaviour. Jorge Caiado and Nuno Crato have been supported through Project CEMAPREUID/ MULTI/00491/2019 financed by FCT/MCTES through national funds. This paper was finished when Pilar Poncela returned to Universidad AutĂłnoma de Madrid. Partial financial support from the Spanish Ministry of Economy and Competitiveness, Project ECO2015-70331-C2-1R, and from Comunidad de Madrid, Project MadEco-CM S2015/HUM-3444, is acknowledged. Pilar Poncela, fellow of the UC3M-BS Institute of Financial Big Data (IFIBID), also thanks the Institute. Finally, we also thank Michela Nardo for providing the data.
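
    A sketch of the fragmented-periodogram idea: compute the smoothed periodogram only over a frequency band around the cycle of interest and compare series by the distance between these fragments. The band, the smoothing window and the simulated series are assumptions; the paper develops its own regularization rules:

```python
import numpy as np

def fragmented_periodogram(x, band=(0.05, 0.15)):
    """Smoothed periodogram ordinates restricted to a frequency band.

    band is in cycles per observation (an assumed range here); only these
    ordinates are kept for clustering, which is the 'fragmentation'.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    freqs = np.fft.rfftfreq(n)
    I = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n        # raw periodogram
    keep = (freqs >= band[0]) & (freqs <= band[1])
    return np.convolve(I[keep], np.ones(5) / 5, mode="same")   # simple smoothing

# two series sharing a cycle at frequency 0.10, one with a different cycle
rng = np.random.default_rng(6)
n = 512
t = np.arange(n)
a = np.sin(2 * np.pi * 0.10 * t) + 0.3 * rng.normal(size=n)
b = np.sin(2 * np.pi * 0.10 * t + 1.0) + 0.3 * rng.normal(size=n)
c = np.sin(2 * np.pi * 0.25 * t) + 0.3 * rng.normal(size=n)
fa, fb, fc = (fragmented_periodogram(s) for s in (a, b, c))
d_ab = np.linalg.norm(fa - fb)     # same cycle: fragments close
d_ac = np.linalg.norm(fa - fc)     # different cycle: fragments far apart
```

    Any standard clustering algorithm can then be run on these pairwise distances; the point is that the fragments are short vectors even when the series themselves are very long.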