
    The effects of feedback on judgmental interval predictions

    The majority of studies of probability judgment have found that judgments tend to be overconfident and that the degree of overconfidence increases with task difficulty. Further, these effects have been resistant to attempts to ‘debias’ via feedback. We propose that, under favourable conditions, provision of appropriate feedback should lead to significant improvements in calibration, and the current study aims to demonstrate this effect. To this end, participants first specified ranges within which the true values of time series would fall with a given probability. After receiving feedback, forecasters constructed intervals for new series, changing their probability values if desired. The series varied systematically in their characteristics, including amount of noise, presentation scale, and existence of trend. Results show that forecasts were initially overconfident but improved significantly after feedback. Further, this improvement was not simply due to ‘hedging’, i.e. shifting to very high probability estimates and extremely wide intervals; rather, it seems that the improvement in calibration was chiefly obtained by forecasters learning to evaluate the extent of the noise in the series. © 2003 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
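    The calibration notion used in this abstract can be illustrated with a short sketch. The Python snippet below is only a minimal illustration, not the authors' analysis code: the interval data and the 90% stated probability are invented for the example. It computes the hit rate of judgmental prediction intervals (the proportion of realised values falling inside the stated bounds) and compares it with the probability the forecaster attached to those intervals; a positive gap indicates overconfidence.

```python
import numpy as np

# Hypothetical example data: each forecast is a (lower, upper) interval that the
# forecaster claimed would contain the realised value with probability `stated_p`.
lower    = np.array([ 98.0, 101.5,  99.0, 103.0, 100.5])
upper    = np.array([104.0, 106.0, 102.5, 108.0, 105.0])
actual   = np.array([105.2, 103.1, 101.0, 109.4, 102.2])
stated_p = 0.90  # probability the forecaster attached to the intervals

# Hit rate: proportion of realised values that fell inside the stated interval.
hits = (actual >= lower) & (actual <= upper)
hit_rate = hits.mean()

# Positive gap = overconfidence (intervals too narrow for the stated probability).
overconfidence = stated_p - hit_rate
print(f"hit rate = {hit_rate:.2f}, stated probability = {stated_p:.2f}, "
      f"overconfidence = {overconfidence:+.2f}")
```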

    Currency forecasting: an investigation of extrapolative judgement

    This paper aims to explore the potential effects of trend type, noise and forecast horizon on experts' and novices' probabilistic forecasts. The subjects made forecasts over six time horizons from simulated monthly currency series based on a random walk, with zero, constant and stochastic drift, at two noise levels. The difference between each participant's Mean Absolute Probability Score and that of an AR(1) model was used to evaluate performance. The results showed that the experts performed better than the novices, although worse than the model except in the case of zero-drift series. No clear expertise effects occurred over horizons, although subjects' performance relative to the model improved as the horizon increased. Possible explanations are offered and some suggestions for future research are outlined.
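    The performance measure described here can be sketched in a few lines. The Python snippet below is an illustration under stated assumptions, not the paper's exact procedure: the data are invented, and the absolute probability score is taken as the absolute difference between a forecast probability and a binary outcome, averaged over forecasts. The quantity reported in the abstract is then the difference between a participant's mean score and that of an AR(1) benchmark.

```python
import numpy as np

# Hypothetical data: probabilities assigned to the event "the rate rises over the
# horizon" by a participant and by an AR(1) benchmark, plus the realised outcomes.
p_subject = np.array([0.8, 0.6, 0.7, 0.4, 0.9, 0.3])
p_ar1     = np.array([0.7, 0.6, 0.6, 0.5, 0.8, 0.4])
outcome   = np.array([1,   0,   1,   0,   1,   1  ])  # 1 = event occurred

def maps(prob, outcome):
    """Mean absolute probability score: mean |forecast probability - outcome|."""
    return np.mean(np.abs(prob - outcome))

# Negative difference => the participant scored better (lower) than the AR(1) model.
diff = maps(p_subject, outcome) - maps(p_ar1, outcome)
print(f"participant MAPS = {maps(p_subject, outcome):.3f}, "
      f"AR(1) MAPS = {maps(p_ar1, outcome):.3f}, difference = {diff:+.3f}")
```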

    Evaluating predictive performance of judgemental extrapolations from simulated currency series

    Judgemental forecasting of exchange rates is critical for financial decision-making. Detailed investigations of the potential effects of time-series characteristics on judgemental currency forecasts demand the use of simulated series where the form of the signal and the probability distribution of the noise are known. The accuracy measures Mean Absolute Error (MAE) and Mean Squared Error (MSE) are frequently applied in assessing judgemental predictive performance on actual exchange rate data. This paper illustrates that, in applying these measures to simulated series with Normally distributed noise, it may be desirable to use their expected values after standardising the noise variance. A method of calculating the expected values for the MAE and MSE is set out, and an application to financial experts' judgemental currency forecasts is presented. © 1999 Elsevier Science B.V. All rights reserved.
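    As a rough illustration of the expected-value idea, the sketch below computes benchmark expected MAE and MSE under the simplifying assumption that forecast errors coincide with the series' Normal noise term; the paper's own method, which depends on the form of the signal, is not reproduced here. It relies on the standard results E|e| = sigma * sqrt(2/pi) and E[e^2] = sigma^2 for zero-mean Normal errors, with the noise variance standardised to one.

```python
import numpy as np

# Sketch only: assume forecast errors reduce to the series' Normally distributed
# noise term with standard deviation `sigma`. After standardising the noise
# variance (sigma = 1), the benchmark expected values follow from standard
# results for a zero-mean Normal variable.
sigma = 1.0
expected_mae = sigma * np.sqrt(2.0 / np.pi)   # E|e|   for e ~ N(0, sigma^2)
expected_mse = sigma ** 2                     # E[e^2] for e ~ N(0, sigma^2)
print(f"expected MAE = {expected_mae:.3f}, expected MSE = {expected_mse:.3f}")

# Monte Carlo check of the analytic values (illustrative only).
rng = np.random.default_rng(0)
errors = rng.normal(0.0, sigma, size=1_000_000)
print(f"simulated MAE = {np.abs(errors).mean():.3f}, "
      f"simulated MSE = {(errors ** 2).mean():.3f}")
```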