
    Studying the Performance of Cognitive Models in Time Series Forecasting

    Cognitive models have been paramount for modeling phenomena for which empirical data are unavailable, scarce, or only partially relevant. These approaches rely on methods for preparing experts and then eliciting their opinions about the variables that describe the phenomena under study. In time series forecasting exercises, elicitation processes seek to obtain accurate estimates, overcoming human heuristic biases, while remaining less time consuming. This paper compares the accuracy of cognitive and mathematical time series predictors. The results are based on a comparison of cognitive-model and mathematical-model predictions for several time series from the M3-Competition. The results show that cognitive models are at least as accurate as ARIMA model predictions.
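The mathematical benchmark referred to above is the ARIMA family. As a minimal illustration of that kind of statistical baseline, the sketch below fits a one-step AR(1) recursion by ordinary least squares and iterates it forward; the series is invented, not taken from the M3-Competition, and real ARIMA fitting involves differencing and moving-average terms omitted here.

```python
def ar1_fit(series):
    """Estimate y[t] = a + b*y[t-1] by ordinary least squares; return (a, b)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def ar1_forecast(series, steps):
    """Iterate the fitted AR(1) recursion forward `steps` periods."""
    a, b = ar1_fit(series)
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(last)
    return out

history = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]
print(ar1_forecast(history, 3))
```

Comparing such purely extrapolative forecasts against expert-elicited estimates, series by series, is the shape of the evaluation the abstract describes.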

    Development and Validation of a Rule-based Time Series Complexity Scoring Technique to Support Design of Adaptive Forecasting DSS

    Evidence from forecasting research gives reason to believe that understanding time series complexity can enable the design of adaptive forecasting decision support systems (FDSSs) that positively support forecasting behaviors and the accuracy of outcomes. Yet such FDSS design capabilities have not been formally explored, because no systematic approach to identifying series complexity exists. This study describes the development and validation of a rule-based complexity scoring technique (CST) that generates a complexity score for time series using 12 rules that rely on 14 features of series. The rule-based schema was developed on 74 series and validated on 52 holdback series, using well-accepted forecasting methods as benchmarks. A supporting experimental validation was conducted with 14 participants who generated 336 structured judgmental forecasts for sets of series classified as simple or complex by the CST. Benchmark comparisons validated the CST by confirming, as hypothesized, that forecasting accuracy was lower for series scored by the technique as complex than for those scored as simple. The study concludes with a comprehensive framework for the design of FDSSs that can integrate the CST to adaptively support forecasters under varied conditions of series complexity. The framework is founded on the concepts of restrictiveness and guidance and offers specific recommendations on how these elements can be built into an FDSS to accommodate complexity.
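The core idea of a rule-based CST is that each rule inspects a series feature and contributes to an overall score. The sketch below is a hypothetical three-rule illustration of that mechanism; the rules, features, and thresholds are invented for the example and are not the paper's actual 12 rules over 14 features.

```python
def mean(xs):
    return sum(xs) / len(xs)

def complexity_score(series):
    """Toy rule-based complexity score: each fired rule adds one point."""
    score = 0
    diffs = [b - a for a, b in zip(series, series[1:])]
    # Rule 1 (hypothetical): strong trend — mean absolute step is large
    # relative to the series level.
    if mean([abs(d) for d in diffs]) > 0.05 * abs(mean(series)):
        score += 1
    # Rule 2 (hypothetical): high volatility — standard deviation exceeds
    # 20% of the mean level.
    m = mean(series)
    var = mean([(x - m) ** 2 for x in series])
    if var ** 0.5 > 0.2 * abs(m):
        score += 1
    # Rule 3 (hypothetical): frequent direction changes — sign flips in
    # successive steps on more than half the transitions.
    flips = sum(1 for d1, d2 in zip(diffs, diffs[1:]) if d1 * d2 < 0)
    if flips > len(diffs) / 2:
        score += 1
    return score  # higher = more complex

smooth = [100, 101, 102, 103, 104, 105, 106, 107]
noisy = [100, 140, 90, 150, 80, 160, 70, 170]
print(complexity_score(smooth), complexity_score(noisy))  # → 0 3
```

An adaptive FDSS could then branch on such a score, offering more guidance or restrictiveness for series the CST classifies as complex.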

    Commentary on the Makridakis Time Series Competition (M- Competition)

    In 1982, the Journal of Forecasting published the results of a forecasting competition organized by Spyros Makridakis (Makridakis et al., 1982). In it, the ex ante forecast errors of 21 methods were compared for forecasts of a variety of economic time series, generally using 1001 time series. Only extrapolative methods were used, as no data were available on causal variables. The accuracies of the methods were compared using a variety of accuracy measures, for different types of data and for varying forecast horizons. The original paper did not contain much interpretation or discussion. Partly this was by design, to keep the presentation unbiased. A more important factor, however, was the difficulty of gaining consensus on interpretation and presentation among the diverse group of authors, many of whom had a vested interest in certain methods. In the belief that this study was of major importance, we decided to obtain a more complete discussion of the results. We do not believe that the data speak for themselves.

    Forecasting of financial data: a novel fuzzy logic neural network based on error-correction concept and statistics

    First, this paper investigates the effect of good and bad news on volatility in the BUX return time series using asymmetric ARCH models. Then, the accuracy of forecasting models based on statistical (stochastic) methods, machine learning methods, and a soft/granular RBF network is investigated. To forecast the high-frequency financial data, we apply statistical ARMA and asymmetric GARCH-class models. A novel RBF network architecture is proposed based on the incorporation of an error-correction mechanism, which improves the forecasting ability of feed-forward neural networks. These proposed modelling approaches and SVM models are applied to predict the high-frequency time series of the BUX stock index. We found that it is possible to enhance forecast accuracy and achieve significant risk reduction in managerial decision making by applying intelligent forecasting models based on the latest information technologies. On the other hand, we showed that statistical GARCH-class models can identify the presence of leverage effects and react to good and bad news.
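The asymmetric news effect the abstract mentions can be illustrated with a GJR-GARCH-style variance update, in which a leverage term fires only on negative returns (bad news). The parameter values below are invented for the sketch, not estimated from BUX data.

```python
def next_variance(r, sigma2_prev, omega=0.01, alpha=0.05, gamma=0.10, beta=0.85):
    """GJR-GARCH(1,1)-style one-step variance update:
    sigma2[t] = omega + alpha*r^2 + gamma*r^2*(r < 0) + beta*sigma2[t-1],
    where the gamma (leverage) term fires only on bad news."""
    leverage = gamma * r * r if r < 0 else 0.0
    return omega + alpha * r * r + leverage + beta * sigma2_prev

# Same prior variance, equally sized good and bad news:
print(next_variance(0.5, 0.2))   # after good news
print(next_variance(-0.5, 0.2))  # after bad news: strictly higher
```

A plain symmetric ARCH/GARCH model sets gamma to zero, which is why the asymmetric variants are needed to capture the leverage effect the paper identifies.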

    Risk Management in the Arctic Offshore: Wicked Problems Require New Paradigms

    Recent project-management literature and high-profile disasters—the financial crisis, the BP Deepwater Horizon oil spill, and the Fukushima nuclear accident—illustrate the flaws of traditional risk models for complex projects. This research examines how various groups with interests in the Arctic offshore define risks. The findings link the wicked problem framework and the emerging paradigm of Project Management of the Second Order (PM-2). Wicked problems are problems that are unstructured, complex, irregular, interactive, adaptive, and novel. The authors synthesize the literature on the topic to offer strategies for navigating wicked problems, provide new variables to deconstruct traditional risk models, and integrate the objective and subjective schools of risk analysis.

    Exploiting Qualitative Information for Decision Support in Scenario Analysis

    The use of scenario analysis (SA) to assist decision makers and stakeholders has grown in recent years, mainly by exploiting qualitative information provided by experts. In this study, we present SA based on the use of qualitative data for strategy planning. We discuss the potential of SA as a decision-support tool, provide a structured approach to the interpretation of SA data, and offer an empirical validation of expert evaluations that can help measure the consistency of the analysis. An application to a specific case study is provided, with reference to the European organic farming business.

    Forecasting Player Behavioral Data and Simulating in-Game Events

    Understanding player behavior is fundamental in game data science. Video games evolve as players interact with them, so being able to foresee player experience would help ensure successful game development. In particular, game developers need to evaluate the impact of in-game events beforehand. Simulation and optimization of these events are crucial to increasing player engagement and maximizing monetization. We present an experimental analysis of several methods for forecasting game-related variables, with two main aims: to obtain accurate predictions of in-app purchases and playtime in an operational production environment, and to perform simulations of in-game events in order to maximize sales and playtime. Our ultimate purpose is to take a step towards the data-driven development of games. The results suggest that, even though the performance of traditional approaches such as ARIMA is still better, the outcomes of state-of-the-art techniques like deep learning are promising. Deep learning emerges as a well-suited general model that could be used to forecast a variety of time series with different dynamic behaviors.
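Comparisons like the one above (ARIMA versus deep learning) hinge on a forecast-accuracy metric; a common choice is MAPE. The toy sketch below scores a naive last-value forecaster against a seasonal-naive one on an invented playtime series with period 3; the data and the two baselines are illustrative only.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - p) / abs(a)
                     for a, p in zip(actual, predicted)) / len(actual)

history = [10, 12, 30, 11, 13, 31]   # invented series with period 3
actual = [10, 12, 32]                # next full period

naive = [history[-1]] * len(actual)  # repeat the last observation
seasonal = history[-3:]              # repeat the last full period

print(mape(actual, naive))           # poor: misses the periodic pattern
print(mape(actual, seasonal))        # much lower error
```

In an operational setting the same scoring loop would compare each candidate model's forecasts against held-out in-app purchase and playtime data.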

    Booms and Busts: New Keynesian and Behavioral Explanations

    Capitalism is characterized by booms and busts: periods of strong output growth alternate with periods of declining economic growth. Every macroeconomic theory should attempt to explain these endemic business cycle movements. In this paper I present two paradigms that attempt to explain these booms and busts. One is the DSGE paradigm, in which agents have unlimited cognitive abilities. The other is a behavioural paradigm, in which agents are assumed to have limited cognitive abilities. These two types of models produce radically different macroeconomic dynamics. I analyze these differences, and I also study the different policy implications of the two paradigms.