Overconfidence in Forecasts of Own Performance: An Experimental Study

Abstract

Overconfidence can have important economic consequences, but it has received little direct testing within the discipline. We test for overconfidence in forecasts of own absolute or relative performance in two unfamiliar experimental tasks. Given their choice of effort at the tasks, participants have incentives to forecast accurately and have opportunities for feedback, learning, and revision. Forecast accuracy is evaluated both at the aggregate level and at the individual level using realized outcomes. We find very limited evidence of overconfidence; zero mean error or under-confidence is more prevalent. Under-confidence is greatest in tasks with absolute rather than relative win criteria, often among subjects using greater or "smarter" effort.

Keywords: Overconfidence; forecast errors; self-assessment