493 research outputs found

    What drives the relevance and quality of experts' adjustment to model-based forecasts?

    Experts frequently adjust statistical model-based forecasts. Sometimes this leads to higher forecast accuracy, but expert forecasts can also be dramatically worse. We explore the potential drivers of the relevance and quality of experts' added knowledge. For that purpose, we examine a very large database covering monthly forecasts for pharmaceutical products in seven categories concerning thirty-five countries. The extensive results lead to two main outcomes: (1) more balance between model and expert leads to more relevance of the added value of the expert, and (2) smaller-sized adjustments lead to higher quality, although sometimes very large adjustments can be beneficial too. In general, too much input from the expert leads to a deterioration of the quality of the final forecast.
    Keywords: expert forecasts; judgemental adjustment
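    The comparison the abstract describes can be sketched in a few lines: bucket expert adjustments by their relative size and ask whether they improved on the untouched model forecast. All numbers below, and the 10% threshold, are invented for illustration; the study's actual data are monthly pharmaceutical SKU forecasts across thirty-five countries.

```python
# Hypothetical sketch of the paper's core comparison, not its actual method.
# Data are made-up (model forecast, expert forecast, realized sales) triples.

def ape(forecast, actual):
    """Absolute percentage error of a single forecast."""
    return abs(forecast - actual) / abs(actual)

def adjustment_share(model, expert):
    """Relative size of the expert's adjustment to the model forecast."""
    return abs(expert - model) / abs(model)

records = [
    (100, 105, 103),   # small adjustment, expert ends up closer to the actual
    (200, 260, 210),   # large adjustment, expert overshoots badly
    (150, 148, 149),   # tiny adjustment, essentially neutral
    (80, 120, 85),     # large adjustment in the wrong direction
]

small, large = [], []
for model, expert, actual in records:
    bucket = small if adjustment_share(model, expert) < 0.10 else large
    # Negative value => the expert's adjustment improved on the model.
    bucket.append(ape(expert, actual) - ape(model, actual))

print("small adjustments, mean APE change:", sum(small) / len(small))
print("large adjustments, mean APE change:", sum(large) / len(large))
```

    In this toy sample the small adjustments help on average while the large ones hurt, which mirrors the paper's second finding, though the abstract notes that very large adjustments can occasionally be beneficial as well.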

    Do experts' SKU forecasts improve after feedback?

    We analyze the behavior of experts who quote forecasts for monthly SKU-level sales data, comparing data before and after the moment that experts received different kinds of feedback on their behavior. We have data for 21 experts located in as many countries who make SKU-level forecasts for a variety of pharmaceutical products for October 2006 to September 2007. We study the behavior of the experts by comparing their forecasts with those from an automated statistical program, and we report the forecast accuracy over these 12 months. In September 2007 these experts were given feedback on their behavior and they received training at the headquarters' office, where specific attention was given to the ins and outs of the statistical program. Next, we study the behavior of the experts for the 3 months after the training session, that is, October 2007 to December 2007. Our main conclusion is that in the second period the experts' forecasts deviated less from the statistical forecasts and that their accuracy improved substantially.
    Keywords: expert forecasts; model forecasts; cognitive process feedback; judgmental adjustment; outcome feedback; performance feedback; task properties feedback

    Expert opinion versus expertise in forecasting

    Expert opinion is an opinion given by an expert, and it can have significant value in forecasting key policy variables in economics and finance. Expert forecasts can either be expert opinions, or forecasts based on an econometric model. An expert forecast that is based on an econometric model is replicable, and can be defined as a replicable expert forecast (REF), whereas an expert opinion that is not based on an econometric model can be defined as a non-replicable expert forecast (Non-REF). Both replicable and non-replicable expert forecasts may be made available by an expert regarding a policy variable of interest. In this paper we develop a model to generate replicable expert forecasts, and compare REF with Non-REF. A method is presented to compare REF and Non-REF using efficient estimation methods, and a direct test of expertise on expert opinion is given. The latter serves the purpose of investigating whether expert adjustment improves the model-based forecasts. Illustrations for forecasting pharmaceutical SKUs, where the econometric model is of (variations of) the ARIMA type, show the relevance of the new methodology proposed in the paper. In particular, experts possess significant expertise, and expert forecasts are significant in explaining actual sales.
    Keywords: forecasts; efficient estimation; generated regressors; direct test; expert opinion; non-replicable expert forecast; replicable expert
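    The spirit of the "direct test of expertise" can be illustrated with a simple regression: regress the model's forecast error (actual minus REF) on the expert's deviation from the model (Non-REF minus REF). A positive slope suggests the expert's opinion carries information beyond the econometric model. This is a hedged, stylized sketch, not the paper's efficient-estimation procedure, and all data below are fabricated.

```python
# Illustrative one-regressor OLS; the paper's actual test uses efficient
# estimation with generated regressors, which this sketch does not replicate.

def ols(x, y):
    """Ordinary least squares of y on x with an intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

model  = [100, 200, 150, 80, 120]   # REF: replicable model forecasts
expert = [105, 210, 148, 95, 130]   # Non-REF: expert forecasts
actual = [104, 208, 149, 90, 128]   # realized sales (fabricated)

deviation = [e - m for e, m in zip(expert, model)]   # expert's adjustment
error     = [a - m for a, m in zip(actual, model)]   # model's forecast error

intercept, slope = ols(deviation, error)
print(f"slope on expert deviation: {slope:.2f}")  # positive => informative
```

    In this toy sample the slope is clearly positive, which is the qualitative pattern the paper reports: expert forecasts are significant in explaining actual sales.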

    Do experts incorporate statistical model forecasts and should they?

    Experts can rely on statistical model forecasts when creating their own forecasts. Usually it is not known what experts actually do. In this paper we focus on three questions, which we try to answer given the availability of expert forecasts and model forecasts. First, is the expert forecast related to the model forecast and how? Second, how is this potential relation influenced by other factors? Third, how does this relation influence forecast accuracy? We propose a new and innovative two-level Hierarchical Bayes model to answer these questions. We apply our proposed methodology to a large data set of forecasts and realizations of SKU-level sales data from a pharmaceutical company. We find that expert forecasts can depend on model forecasts in a variety of ways. Average sales levels, sales volatility, and the forecast horizon influence this dependence. We also demonstrate that theoretical implications of expert behavior on forecast accuracy are reflected in the empirical data.
    Keywords: endogeneity; Bayesian analysis; expert forecasts; model forecasts; forecast adjustment
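    The first question — how an expert forecast can depend on a model forecast, and how that dependence affects accuracy — can be illustrated with a toy simulation. Treat the expert forecast as a weighted mix of the model forecast and a private signal, then vary the weight. This is a deliberately simple stand-in, not the paper's two-level Hierarchical Bayes model; all noise levels are assumptions chosen for illustration.

```python
# Toy simulation: expert = w*model + (1-w)*signal, accuracy as a function of w.
import random

random.seed(1)
truth  = [random.gauss(100, 10) for _ in range(500)]   # true sales
model  = [t + random.gauss(0, 5) for t in truth]       # model forecast, sd 5
signal = [t + random.gauss(0, 8) for t in truth]       # expert's own info, sd 8

def rmse(weight):
    """RMSE of the blended forecast weight*model + (1-weight)*signal."""
    errs = [(weight * m + (1 - weight) * s - t) ** 2
            for m, s, t in zip(model, signal, truth)]
    return (sum(errs) / len(errs)) ** 0.5

# With independent noises the optimal weight is 64/(25+64) ~= 0.72.
for w in (0.0, 0.5, 0.72, 1.0):
    print(f"weight on model {w:.2f}: RMSE {rmse(w):.2f}")
```

    The intermediate weight beats both extremes: leaning entirely on either the model or the expert's own signal is worse than a balanced combination, which echoes the balance finding in the adjustment study above.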

    Evaluating Econometric Models and Expert Intuition

    This thesis is about forecasting situations which involve econometric models and expert intuition. The first three chapters are about what it is that experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. It is investigated how expert forecasts are related to model forecasts, how this potential relation is influenced by other factors and how it influences forecast accuracy, how feedback influences forecasting behavior and accuracy, and which loss function is associated with experts' forecasts. The final chapter focuses on how to make optimal use of multiple forecasts produced by multiple experts for one and the same event. It is found that potential disagreement amongst forecasters can have predictive value, especially when used in Markov regime-switching models.

    Responsiveness Is an Important Quality of Mothers

    Mothers who consistently respond to their children's emotions, through mirroring, have smarter, more socially competent children.

    Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments

    Macroeconomic forecasts are frequently produced, widely published, intensively discussed and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.
    Keywords: macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison
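    A standard tool for comparing two forecast series of the kind discussed here is a Diebold-Mariano-style test on the loss differential. The sketch below uses squared-error loss and, as a simplification, assumes no serial correlation in the loss differential; the review's actual techniques for intuition-contaminated forecasts are more involved. The error series are fabricated stand-ins for, e.g., Staff versus FOMC inflation forecast errors.

```python
# Simplified Diebold-Mariano statistic on squared-error loss differentials.
import math

def dm_statistic(e1, e2):
    """DM statistic; negative values favor the first error series.
    Assumes an i.i.d. loss differential (a simplification)."""
    d = [a * a - b * b for a, b in zip(e1, e2)]
    n = len(d)
    dbar = sum(d) / n
    var = sum((x - dbar) ** 2 for x in d) / (n - 1)
    return dbar / math.sqrt(var / n)

staff_err = [0.2, -0.1, 0.3, -0.2, 0.1, 0.25, -0.15, 0.05]   # fabricated
fomc_err  = [0.25, -0.2, 0.35, -0.1, 0.15, 0.3, -0.2, 0.1]   # fabricated

dm = dm_statistic(staff_err, fomc_err)
print(f"DM statistic: {dm:.2f}")  # about -2.1 on this toy sample
```

    In this toy sample the statistic is negative and beyond the conventional 1.96 threshold, i.e. the first series has significantly smaller squared errors. The review's point is that when one or both forecasts mix a model with intuition, such off-the-shelf comparisons are no longer standard and alternative tools are needed.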

    Evaluating Macroeconomic Forecasts: A Review of Some Recent Developments

    Macroeconomic forecasts are frequently produced, published, discussed and used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are based on econometric model forecasts as well as on human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in which: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model, the other forecast, and intuition; and (iii) the two forecasts are generated from two distinct combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth.
    Keywords: macroeconomic forecasts; econometric models; human intuition; biased forecasts; forecast performance; forecast evaluation; forecast comparison

    Does the FOMC Have Expertise, and Can It Forecast?

    The primary purpose of the paper is to answer the following two questions regarding the performance of the influential Federal Open Market Committee (FOMC) of the Federal Reserve System, in comparison with the forecasts contained in the "Greenbooks" of the professional staff of the Board of Governors: Does the FOMC have expertise, and can it forecast better than the staff? The FOMC forecasts that are analyzed in practice are non-replicable forecasts. In order to evaluate such forecasts, this paper develops a model to generate replicable FOMC forecasts; compares the staff forecasts with the non-replicable and replicable FOMC forecasts; considers optimal forecasts and efficient estimation methods; and presents a direct test of FOMC expertise on non-replicable FOMC forecasts. The empirical analysis of Romer and Romer (2008) is reexamined to evaluate whether their criticisms of the FOMC's forecasting performance should be accepted unreservedly, or might be open to alternative interpretations.