
    Evaluating Econometric Models and Expert Intuition

    This thesis is about forecasting situations that involve econometric models and expert intuition. The first three chapters concern what experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. They investigate how expert forecasts are related to model forecasts, how this relation is influenced by other factors, how it affects forecast accuracy, how feedback influences forecasting behavior and accuracy, and which loss function is associated with experts' forecasts. The final chapter focuses on how to make optimal use of multiple forecasts produced by multiple experts for one and the same event. It is found that disagreement amongst forecasters can have predictive value, especially when used in Markov regime-switching models.
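    As a rough illustration of the final chapter's idea (a sketch, not the thesis' exact specification), the snippet below feeds a disagreement measure, taken here as a stand-in for the cross-expert dispersion of forecasts, into a two-regime Markov switching regression via statsmodels; all data and variable names are synthetic placeholders.
```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(0)
T = 300
# Stand-in for forecaster disagreement (e.g., cross-expert std of forecasts).
disagreement = rng.gamma(shape=2.0, scale=0.5, size=T)
# Synthetic target: disagreement matters more in the second, more volatile regime.
regime = (rng.random(T) < 0.3).astype(int)
y = (0.5 + np.where(regime == 1, 1.5, 0.2) * disagreement
     + rng.normal(0.0, 0.3 + 0.7 * regime))

# Two regimes, switching coefficients and variance; disagreement as regressor.
mod = MarkovRegression(y, k_regimes=2, exog=disagreement, switching_variance=True)
res = mod.fit()
print(res.summary())
```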

    Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments

    Macroeconomic forecasts are frequently produced, widely published, intensively discussed and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not contribute significantly to forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.
    Keywords: macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison.
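    For a concrete point of reference, the sketch below implements a standard Diebold-Mariano-type test of equal predictive accuracy under squared-error loss. It is the textbook baseline, not the non-standard tests this review develops for biased, intuition-adjusted forecasts; the variable names are illustrative.
```python
import numpy as np
from scipy import stats

def dm_test(actual, f1, f2, h=1):
    """Test H0: equal mean squared error of forecasts f1 and f2.
    Uses a Newey-West long-run variance with h-1 Bartlett-weighted lags."""
    actual, f1, f2 = map(np.asarray, (actual, f1, f2))
    d = (actual - f1) ** 2 - (actual - f2) ** 2   # loss differential
    n = len(d)
    lrv = np.var(d, ddof=0)                       # lag-0 autocovariance
    for k in range(1, h):
        cov = np.cov(d[k:], d[:-k], ddof=0)[0, 1]
        lrv += 2.0 * (1.0 - k / h) * cov
    dm = d.mean() / np.sqrt(lrv / n)
    return dm, 2.0 * stats.norm.sf(abs(dm))       # statistic, two-sided p-value

# Hypothetical usage with synthetic "Staff" and "FOMC" forecasts:
rng = np.random.default_rng(1)
y = rng.normal(size=200)
f_staff = y + rng.normal(0.0, 1.0, size=200)
f_fomc = y + rng.normal(0.0, 1.0, size=200)
print(dm_test(y, f_staff, f_fomc, h=4))
```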

    Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments

    Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Keywords: macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison.

    "Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments"

    Get PDF
    Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.

    "Does the FOMC Have Expertise, and Can It Forecast?"

    The primary purpose of the paper is to answer two questions about the performance of the influential Federal Open Market Committee (FOMC) of the Federal Reserve System, in comparison with the forecasts contained in the "Greenbooks" of the professional staff of the Board of Governors: does the FOMC have expertise, and can it forecast better than the staff? The FOMC forecasts analyzed here are, in practice, nonreplicable. In order to evaluate such forecasts, this paper develops a model to generate replicable FOMC forecasts; compares the staff forecasts, nonreplicable FOMC forecasts, and replicable FOMC forecasts; considers optimal forecasts and efficient estimation methods; and presents a direct test of FOMC expertise based on the nonreplicable FOMC forecasts. The empirical analysis of Romer and Romer (2008) is reexamined to evaluate whether their criticisms of the FOMC's forecasting performance should be accepted unreservedly, or might be open to alternative interpretations.
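    A minimal sketch of the replicability idea, assuming only the staff's Greenbook forecast as conditioning information (the paper's actual information set is richer): project the nonreplicable FOMC forecast on what an outsider observes, and treat the fitted values as a replicable FOMC forecast that can be compared with both originals.
```python
import numpy as np
import statsmodels.api as sm

def replicable_forecast(fomc_fc, staff_fc):
    """Regress the nonreplicable FOMC forecast on observable information
    (here only the staff forecast); the fitted values are replicable."""
    X = sm.add_constant(np.asarray(staff_fc))
    fit = sm.OLS(np.asarray(fomc_fc), X).fit()
    return fit.fittedvalues

# Hypothetical usage: compare RMSEs of staff, FOMC, and replicable forecasts.
# rmse = lambda f, y: float(np.sqrt(np.mean((np.asarray(y) - f) ** 2)))
```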

    Do experts' SKU forecasts improve after feedback?

    We analyze the behavior of experts who produce forecasts for monthly SKU-level sales data, comparing the data from before and after the moment the experts received various kinds of feedback on their behavior. We have data for 21 experts, located in as many countries, who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the experts' behavior by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 the experts were given feedback on their behavior and received training at the headquarters' office, where specific attention was giv…
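    A minimal sketch of the before/after comparison, assuming a long-format table with hypothetical columns expert, month (a datetime), forecast, and actual; the split date is the September 2007 feedback session mentioned above.
```python
import numpy as np
import pandas as pd

def feedback_table(df, feedback_date="2007-09-01"):
    """Mean absolute percentage error per expert, before vs. after feedback."""
    df = df.copy()
    df["ape"] = (df["forecast"] - df["actual"]).abs() / df["actual"].abs()
    df["period"] = np.where(df["month"] < pd.Timestamp(feedback_date),
                            "before", "after")
    return df.groupby(["expert", "period"])["ape"].mean().unstack()
```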

    What drives the relevance and quality of experts' adjustment to model-based forecasts?

    Experts frequently adjust statistical model-based forecasts. Sometimes this leads to higher forecast accuracy, but expert forecasts can also be dramatically worse. We explore the potential drivers of the relevance and quality of experts' added knowledge. For that purpose, we examine a very large databas…
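    One way to operationalize the question, sketched here under assumed variable names (volatility, horizon, adjustment size are illustrative, not the paper's exact specification): measure the expert's accuracy gain over the model for each forecast and regress it on candidate drivers.
```python
import numpy as np
import statsmodels.api as sm

def adjustment_value(actual, model_fc, expert_fc, drivers):
    """Regress the expert's accuracy gain over the model on candidate drivers.
    `drivers` is an (n, k) array, e.g. sales volatility, horizon, |adjustment|."""
    actual, model_fc, expert_fc = map(np.asarray, (actual, model_fc, expert_fc))
    gain = np.abs(actual - model_fc) - np.abs(actual - expert_fc)  # > 0: expert helped
    return sm.OLS(gain, sm.add_constant(drivers)).fit()
```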

    A Manager's Perspective on Combining Expert and Model-based Forecasts

    We study the performance of sales forecasts which linearly combine model-based forecasts and expert forecasts. Using a unique and very large database containing monthly model-based forecasts for many pharmaceutical products and forecasts given by thirty-seven different experts, we document that a combination is almost always the most accurate. When correlating the specific weights in these "best" linear combinations with experts' experience and behaviour, we find that more experience is beneficial for forecasts at nearby horizons. And, as the rate of bracketing increases, the relative weights converge to a 50%-50% distribution, with some slight variation across forecast horizons.
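    The combination studied here has the form w*model + (1-w)*expert. Below is a small sketch of one possible way (not necessarily the paper's estimator) to fit such a weight by least squares on past forecasts and realizations.
```python
import numpy as np

def best_weight(actual, model_fc, expert_fc):
    """Least-squares weight w in the combination w*model + (1-w)*expert.
    Minimizing sum((actual - expert - w*(model - expert))**2) over w
    gives the closed form below."""
    a = np.asarray(model_fc) - np.asarray(expert_fc)
    b = np.asarray(actual) - np.asarray(expert_fc)
    w = float(a @ b) / float(a @ a)
    return float(np.clip(w, 0.0, 1.0))  # keep w interpretable as a share
```
    These fitted weights can then be correlated with expert characteristics such as experience, or with the bracketing rate (how often the model and the expert err on opposite sides of the realization).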

    Combining SKU-level sales forecasts from models and experts

    We study the performance of SKU-level sales forecasts which linearly combine statistical model forecasts and expert forecasts. Using a large and unique database containing model forecasts for monthly sales of various pharmaceutical products and forecasts given by about fifty experts, we document that a linear combination of those forecasts is usually the most accurate. Corre…

    Do Experts incorporate Statistical Model Forecasts and should they?

    Experts can rely on statistical model forecasts when creating their own forecasts, but usually it is not known what they actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast, and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a new two-level Hierarchical Bayes model to answer these questions. We apply the proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
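    As a non-Bayesian stand-in for the two-level structure (the paper estimates both levels jointly with Hierarchical Bayes), one can run a per-SKU regression of expert forecasts on model forecasts and then relate the resulting slopes to SKU characteristics; all names below are illustrative.
```python
import numpy as np
import statsmodels.api as sm

def two_level_sketch(expert_fc, model_fc, sku_ids, sku_features):
    """Level 1: per-SKU slope of the expert forecast on the model forecast
    (the slope measures reliance on the model). Level 2: regress those slopes
    on SKU characteristics, e.g. average sales and volatility. Rows of
    `sku_features` must follow sorted(set(sku_ids))."""
    skus = sorted(set(sku_ids))
    slopes = []
    for sku in skus:
        m = np.asarray(sku_ids) == sku
        fit = sm.OLS(np.asarray(expert_fc)[m],
                     sm.add_constant(np.asarray(model_fc)[m])).fit()
        slopes.append(fit.params[1])  # dependence on the model forecast
    return sm.OLS(np.asarray(slopes), sm.add_constant(sku_features)).fit()
```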