10 research outputs found

    New Estimation Rules for Unknown Parameters on Holt-Winters Multiplicative Method

    The Holt-Winters method is a well-known forecasting method used in time-series analysis to forecast future data when a trend and a seasonal pattern are detected. There are two variations: the additive and the multiplicative method. A prior study by Vercher et al. [1] showed that choosing the initial conditions is very important in exponential smoothing models, including the Holt-Winters method; accurate estimates of the initial conditions can yield better forecasting results. In this research, we propose new estimation rules for the initial conditions of the Holt-Winters multiplicative method. The estimation rules were derived from the original initial conditions combined with the weighted moving average method. The experimental results show that the new approach can outperform the original Holt-Winters multiplicative method.
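    As a concrete reference point for the method discussed above, here is a minimal sketch of the standard multiplicative Holt-Winters recursions with one conventional textbook choice of initial conditions, estimated from the first two seasons. This is exactly the step the paper proposes to improve; the code below is not the paper's new estimation rule, and all names and parameter values are illustrative assumptions.

```python
def holt_winters_multiplicative(y, m, alpha, beta, gamma, h):
    """Forecast h steps ahead; m is the season length.

    Initial conditions (one conventional choice): level and trend from
    the means of the first two seasons, seasonal indices from season 1.
    """
    season1, season2 = y[:m], y[m:2 * m]
    level = sum(season1) / m                          # initial level
    trend = (sum(season2) - sum(season1)) / (m * m)   # initial trend
    seasonals = [v / level for v in season1]          # initial seasonal indices

    for t in range(m, len(y)):
        s = seasonals[t % m]
        last_level = level
        level = alpha * y[t] / s + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonals[t % m] = gamma * y[t] / level + (1 - gamma) * s

    n = len(y)
    return [(level + k * trend) * seasonals[(n + k - 1) % m]
            for k in range(1, h + 1)]
```

    On a cleanly multiplicative-seasonal series with a linear trend, this sketch recovers the continuation closely; how fast it does so depends on how good the initial level, trend, and seasonal indices are, which is the paper's point.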

    Forecasting with Exponential Smoothing: What's the Right Smoothing Constant?

    This paper examines exponential smoothing constants that minimize summary error measures over a large number of forecasts. The forecasts were made on numerous time series generated through spreadsheet simulation. The series varied in length and in underlying nature: no trend, linear trend, and nonlinear trend. Forecasts were made using simple exponential smoothing as well as exponential smoothing with trend correction, with different kinds of initial forecasts. We found that when the initial forecasts were good and the nature of the underlying data did not change, the best smoothing constants were typically very small. Conversely, large smoothing constants indicated a change in the nature of the underlying data or the use of an inappropriate forecasting model. These results reduce the confusion about the role and the right size of these constants and offer clear recommendations on how they should be discussed in classroom settings.
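    The finding above can be reproduced in a few lines. The sketch below (illustrative names; a stable no-trend series with a good initial forecast, mirroring one of the paper's simulation settings) computes the Mean Absolute Deviation of simple exponential smoothing for a small and a large smoothing constant.

```python
import random

def ses_forecasts(y, alpha, f0):
    """One-step-ahead simple exponential smoothing:
    f[t+1] = alpha * y[t] + (1 - alpha) * f[t], starting from f[0] = f0."""
    forecasts = [f0]
    for obs in y[:-1]:
        forecasts.append(alpha * obs + (1 - alpha) * forecasts[-1])
    return forecasts  # forecasts[t] is the forecast made for period t

def mad(y, forecasts):
    """Mean Absolute Deviation of one-step-ahead forecasts."""
    return sum(abs(o - f) for o, f in zip(y, forecasts)) / len(y)

# Stable data, good initial forecast: a small constant should win,
# as the paper reports; a large constant just chases the noise.
random.seed(42)
series = [100 + random.gauss(0, 5) for _ in range(200)]
mad_small = mad(series, ses_forecasts(series, 0.1, 100.0))
mad_large = mad(series, ses_forecasts(series, 0.9, 100.0))
```

    On data like this, `mad_small` comes out below `mad_large`; if the level of the series shifted midway, the ordering could reverse, which is the paper's diagnostic use of large constants.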

    Determining the Optimal Values of Exponential Smoothing Constants--Does Solver Really Work?

    A key issue in exponential smoothing is the choice of the values of the smoothing constants. One approach that is becoming increasingly popular in introductory management science and operations management textbooks is the use of Solver, an Excel-based non-linear optimizer, to identify values of the smoothing constants that minimize a measure of forecast error such as Mean Absolute Deviation (MAD) or Mean Squared Error (MSE). We point out some difficulties with this approach and suggest an easy fix. We examine the impact of initial forecasts on the smoothing constants and the idea of optimizing the initial forecast along with the smoothing constants. We make recommendations on the use of Solver in the teaching of forecasting and suggest that there is a better method than Solver for identifying appropriate smoothing constants.
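    The paper's "easy fix" and "better method" are in the paper itself. As a neutral illustration of the underlying idea, treating the initial forecast as a decision variable alongside the smoothing constant, here is a transparent grid search that needs no non-linear optimizer. The function names and grid sizes are illustrative assumptions, not the authors' procedure.

```python
def sse(y, alpha, f0):
    """Sum of squared one-step-ahead errors for simple exponential smoothing."""
    f, total = f0, 0.0
    for obs in y:
        total += (obs - f) ** 2
        f = alpha * obs + (1 - alpha) * f
    return total

def grid_optimize(y, steps=100):
    """Jointly search alpha in (0, 1] and the initial forecast f0,
    returning the (SSE, alpha, f0) triple with the smallest SSE."""
    lo, hi = min(y), max(y)
    best = None
    for i in range(1, steps + 1):
        alpha = i / steps
        for j in range(steps + 1):
            f0 = lo + (hi - lo) * j / steps
            err = sse(y, alpha, f0)
            if best is None or err < best[0]:
                best = (err, alpha, f0)
    return best
```

    Unlike a local non-linear optimizer, a grid search cannot get stuck at a poor starting point, at the cost of coarser precision; that trade-off is one reason the choice of tool matters in the classroom setting the paper discusses.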

    Forecasting Irregular Seasonal Power Consumption. An Application to a Hot-Dip Galvanizing Process

    The method described in this document makes it possible to efficiently use the techniques usually applied to load prediction in situations in which the series clearly presents seasonality but does not maintain a regular pattern. Distribution companies use time series to predict electricity consumption, with forecasting techniques based on statistical models or artificial intelligence. Reliable forecasts are required for efficient grid management in terms of both supply and capacity. One common underlying feature of most demand-related time series is a strong seasonality component. However, in some cases the electricity demanded by a process presents an irregular seasonal component, which prevents any conventional forecast. In this article, we evaluated forecasting methods based on the use of multiple seasonal models: ARIMA, Holt-Winters models with discrete interval moving seasonality, and neural networks. The models are explained and applied to a real situation, for a node that feeds a galvanizing factory. The zinc hot-dip galvanizing process is widely used in the automotive sector for the protection of steel against corrosion. It requires enormous energy consumption, and this has a direct impact on companies' income statements. In addition, it significantly affects energy distribution companies, as they must provide for instant consumption in their supply lines to ensure sufficient energy is distributed both for the process and for all other consumers. The results show a substantial increase in the accuracy of predictions, which contributes to better management of the electrical distribution.
    Trull, O.; García-Díaz, J.C.; Peiró Signes, A. (2021). Forecasting Irregular Seasonal Power Consumption. An Application to a Hot-Dip Galvanizing Process. Applied Sciences. 11(1):1-24. https://doi.org/10.3390/app11010075

    Stochastic Tests on Live Cattle Steer Basis Composite Forecasts

    Since the seminal papers of Bates and Granger in 1969, a vast amount of information has been published on combining singular forecasts. Empirical evidence has repeatedly demonstrated that combining the forecasts will produce the best model. Moreover, while it is possible that the best singular model could outperform a composite model, using multiple models provides the advantage of risk diversification, and it has also been shown to produce a lower forecasting error. The question of whether to combine has been replaced with the question of how much emphasis should be placed on each forecast. Researchers have aspired to derive optimal weights that would produce the lowest forecasting errors. Equal weights, and weights based on the mean square error, on the covariance, and on the best previous model, among others, have been suggested. Other academicians have suggested the use of mechanically derived weights obtained through computer programs; these weights have shown robust results. Once the composite and singular forecasts have been estimated, a systematic approach to evaluating the singular forecasts is needed. Forecasting errors, such as the root mean square error and the mean absolute percentage error, are the most common criteria for elimination, in agriculture as in other sectors. Although a valid means of selection, different forecasting error measures can produce different ordinal rankings of the forecasts, thus producing inconclusive results. These findings have prompted the search for other suitable candidates for forecast evaluation. At the forefront of this pursuit are stochastic dominance and stochastic efficiency, which have traditionally been used to rank wealth or returns from a group of alternatives, principally in the finance and money sector as a way to evaluate investment strategies. Holt and Brandt in 1985 proposed using stochastic dominance to select between different hedging strategies.
    Their results suggest that stochastic dominance can feasibly be used in selecting the most accurate forecast. This thesis had three objectives: 1) to determine whether live cattle basis forecasting error could be reduced in comparison to singular models when using composite forecasts; 2) to determine whether stochastic dominance and stochastic efficiency could be used to systematically select the most accurate forecasts; 3) to determine whether currently reported forecasting error measures might lead to inaccurate conclusions about which forecast was correct. The objectives were evaluated using two primary markets, Utah and Western Kansas, and two secondary markets, Texas and Nebraska. The data for live cattle slaughter steer basis was taken and subsequently computed from the Livestock Marketing Information Center, Chicago Mercantile Exchange, and United States Department of Agriculture from 2004 to 2012. Seven singular models were initially used, adapted from the current academic literature. After the models were evaluated using forecasting error, stochastic dominance, and stochastic efficiency, seven composite models were created, each with a different weighting scheme. The "optimal" composite weight, in particular, was estimated using GAMS, with an objective function that selects the forecast combination minimizing the variance-covariance between the singular forecasting models. The composite models were likewise systematically evaluated using forecasting error, stochastic dominance, and stochastic efficiency. The results indicate that forecasting error can be reduced in all four markets, on average, by using an optimal weighting scheme. Optimal weighting schemes can also outperform the benchmark equal weights. Moreover, a combination of fast-reaction time series forecasts and market condition (supply and demand) forecasts provides the better model.
    Stochastic dominance and stochastic efficiency provided confirmatory results and selected the efficient set of the forecasts over a range of risk. They likewise indicated that forecasting error may provide only a point estimate rather than a range of error. Suggestions for their application and implementation in extension outlook forecasts and in industry are provided.
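    For the two-forecast case, the minimum-variance weights in the Bates-Granger line of work have a closed form based on the error variance-covariance. The sketch below illustrates that closed form with pure Python (illustrative names; this is not the thesis's GAMS formulation, which handles seven models).

```python
def optimal_two_forecast_weights(e1, e2):
    """Minimum-variance combination weights from two historical
    forecast-error series (Bates-Granger closed form):
    w1 = (v2 - c12) / (v1 + v2 - 2*c12), w2 = 1 - w1."""
    n = len(e1)
    v1 = sum(e * e for e in e1) / n                # error variance, model 1
    v2 = sum(e * e for e in e2) / n                # error variance, model 2
    c12 = sum(a * b for a, b in zip(e1, e2)) / n   # error covariance
    w1 = (v2 - c12) / (v1 + v2 - 2 * c12)
    return w1, 1.0 - w1

def combine(f1, f2, w1, w2):
    """Composite forecast as a weighted average of two singular forecasts."""
    return [w1 * a + w2 * b for a, b in zip(f1, f2)]
```

    With uncorrelated errors this reduces to inverse-variance weighting; with strongly correlated errors the weights can fall outside [0, 1], which is one reason simple benchmark equal weights are often compared against.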

    Advanced Methods of Power Load Forecasting

    This reprint introduces advanced prediction models focused on power load forecasting. Models based on artificial intelligence as well as more traditional approaches are shown, demonstrating their real potential to improve prediction in this field. LSTM neural networks, LSTM networks with a SESDA architecture, and even LSTM-CNN models are used. In addition, multiple seasonal Holt-Winters models with discrete seasonality and the application of the Prophet method to demand forecasting are presented. These models are applied in different circumstances and show highly positive results. This reprint is intended both for researchers working on energy management and for those working on forecasting, especially power load forecasting.

    Optimising time series forecasts through linear programming

    This study explores the use of linear programming (LP) as a tool to optimise the parameters of time series forecasting models. LP is the best-known tool in the field of operational research and has been used for a wide range of optimisation problems. Nonetheless, there are very few applications in forecasting, and all of them are limited to causal modelling. The rationale behind this study is that time series forecasting problems can be treated as optimisation problems, where the objective is to minimise the forecasting error. The research topic is interesting from both a theoretical and a mathematical perspective. LP is a powerful yet simple tool; hence, an LP-based approach gives forecasters the opportunity to produce accurate forecasts quickly and easily. In addition, the flexibility of LP can help analysts deal with situations that other methods cannot handle. The study consists of five parts in which the parameters of forecasting models are estimated by using LP to minimise one or more accuracy (error) indices: sum of absolute deviations (SAD), sum of absolute percentage errors (SAPE), maximum absolute deviation (MaxAD), absolute differences between deviations (ADBD), and absolute differences between percentage deviations (ADBPD). To test the accuracy of the approaches, two samples of series from the M3 competition are used and the results are compared with traditional techniques found in the literature. In the first part, simple LP is used to estimate the parameters of autoregressive-based forecasting models by minimising one error index, and the results are compared with the method of ordinary least squares (OLS, which minimises the sum of squared errors, SSE). The experiments show that the decision maker has to choose the best optimisation objective according to the characteristics of the series.
    In the second part, goal programming (GP) formulations are applied to similar models by minimising a combination of two accuracy indices. The experiments show that goal programming improves on the performance of the single-objective approaches. In the third part, several constraints are added to the initial simple LP and GP formulations to improve their performance on series with high randomness, and their accuracy is compared with techniques that perform well on such series. The additional constraints improve the results and outperform all the other techniques. In the fourth part, simple LP and GP are used to combine forecasts: eight simple individual techniques are combined, and LP is compared with five traditional combination methods. The LP combinations outperform the other methods according to several performance indices. Finally, LP is used to estimate the parameters of autoregressive-based models with optimisation objectives that minimise forecasting cost, and these are compared with OLS. The experiments show that the LP approaches perform better in terms of cost. The research shows that LP is a very useful tool for making accurate time series forecasts, which can outperform the traditional approaches found in the forecasting literature and in practice.
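    To make the LP idea concrete: estimating an AR(1) coefficient by minimising the sum of absolute deviations (SAD) becomes a linear programme once each absolute value is split into two inequality constraints. The sketch below states that LP in comments and, because a one-parameter SAD objective is piecewise linear and convex, solves it by enumerating the objective's breakpoints rather than calling an LP solver. This is an illustrative simplification, not the thesis's formulations, and all names are assumptions.

```python
def lad_ar1(y):
    """Fit y[t] ~ phi * y[t-1] by minimising SAD = sum_t |y[t] - phi*y[t-1]|.

    As an LP:  min  sum_t e_t
               s.t. e_t >=   y_t - phi * y_{t-1}
                    e_t >= -(y_t - phi * y_{t-1})
    The SAD objective is piecewise linear and convex in phi, with
    breakpoints at phi = y_t / y_{t-1}, so for this single-parameter
    case the minimum can be found by checking each breakpoint.
    """
    pairs = [(y[t - 1], y[t]) for t in range(1, len(y)) if y[t - 1] != 0]

    def sad(phi):
        return sum(abs(cur - phi * prev) for prev, cur in pairs)

    candidates = [cur / prev for prev, cur in pairs]
    return min(candidates, key=sad)
```

    Minimising SAD instead of squared errors is robust to outliers: a single corrupted observation barely moves the estimate, which illustrates why the choice of error index matters for series with high randomness.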