Modeling sampling duration in decisions from experience
Cognitive models of choice almost universally implicate sequential evidence accumulation as a fundamental element of the mechanism by which preferences are formed. When to stop evidence accumulation is an important question that such models do not currently try to answer. We present the first cognitive model that accurately predicts stopping decisions in individual economic decisions-from-experience trials, using an online learning model. Analysis of stopping decisions across three different datasets reveals three useful predictors of sampling duration: relative evidence strength, how long it takes participants to see all rewards, and a novel indicator of convergence of an underlying learning process, which we call predictive volatility. We quantify the relative strengths of these factors in predicting observers’ stopping points, finding that predictive volatility consistently dominates relative evidence strength in stopping decisions.
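The abstract does not say how predictive volatility is computed, so the sketch below is only an illustration of the general idea under assumed details: an online learner keeps a running estimate of each option's reward, and sampling stops once the learner's predictions stop changing much from one draw to the next. The running-mean learner, the window size, and the threshold are assumptions made for illustration, not the authors' model.

```python
import random

def sample_until_stable(reward_fns, window=5, threshold=0.05, max_samples=200):
    """Illustrative stopping rule: freely sample options, update each option's
    running mean reward, and stop once recent absolute prediction changes
    (a stand-in for 'predictive volatility') fall below a threshold."""
    values = [0.0] * len(reward_fns)   # running mean reward per option
    counts = [0] * len(reward_fns)     # samples drawn per option
    recent_changes = []                # |prediction change| after each sample
    for t in range(1, max_samples + 1):
        i = random.randrange(len(reward_fns))   # pick an option to sample
        r = reward_fns[i]()                     # observe one reward
        counts[i] += 1
        change = (r - values[i]) / counts[i]    # incremental-mean (delta-rule) update
        values[i] += change
        recent_changes.append(abs(change))
        if len(recent_changes) >= window:
            volatility = sum(recent_changes[-window:]) / window
            if volatility < threshold:          # predictions have settled
                return t, values
    return max_samples, values

# Example: two risky options with different mean payoffs
stop_t, estimates = sample_until_stable(
    [lambda: random.gauss(1.0, 1.0), lambda: random.gauss(0.5, 1.0)]
)
print(f"stopped after {stop_t} samples, estimates = {estimates}")
```

On this toy reading, relative evidence strength would correspond to the gap between the two value estimates, while predictive volatility tracks only how much those estimates are still moving.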
Repeated judgment sampling: Boundaries
This paper investigates the boundaries of the recent result that eliciting more than one estimate from the same person and averaging these can lead to accuracy gains in judgment tasks. It first examines its generality, analysing whether the kind of question being asked has an effect on the size of potential gains. Experimental results show that the question type matters. Previous results reporting potential accuracy gains are reproduced for year-estimation questions, and extended to questions about percentage shares. On the other hand, no gains are found for general numerical questions. The second part of the paper tests repeated judgment sampling's practical applicability by asking judges to provide a third and final answer on the basis of their first two estimates. In an experiment, the majority of judges do not consistently average their first two answers. As a result, they do not realise the potential accuracy gains from averaging.
Keywords: repeated judgments, judgment accuracy, averaging.
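To make the averaging result concrete, here is a toy simulation (not from the paper): each of a judge's two estimates is modelled as the true value plus independent noise, in which case their average has a smaller expected error than either single estimate. The true value, noise level, and independence assumption are all illustrative; real repeated judgments from the same person tend to have correlated errors, so gains in practice are smaller than in this idealised case.

```python
import random
import statistics

def simulate_inner_crowd(true_value=1950.0, noise_sd=20.0, n_judges=10000, seed=1):
    """Toy simulation: if a judge's two estimates carry independent error,
    their average tends to lie closer to the truth than a single estimate."""
    rng = random.Random(seed)
    err_first, err_avg = [], []
    for _ in range(n_judges):
        first = true_value + rng.gauss(0, noise_sd)   # first estimate
        second = true_value + rng.gauss(0, noise_sd)  # second, independent estimate
        err_first.append(abs(first - true_value))
        err_avg.append(abs((first + second) / 2 - true_value))
    return statistics.mean(err_first), statistics.mean(err_avg)

single_error, averaged_error = simulate_inner_crowd()
print(f"mean error, single estimate: {single_error:.2f}")
print(f"mean error, averaged estimates: {averaged_error:.2f}")  # smaller on average
```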