8 research outputs found
Decision making for others: the case of loss aversion
Risky decisions are at the core of economic theory. While many of these decisions are taken on behalf of others rather than for oneself, the existing literature finds mixed results on whether people take more or less risk for others than for themselves. Recent studies suggest that taking decisions for others reduces loss aversion, thereby increasing risk taking on behalf of others. To test this, we elicit loss aversion in three treatments: making risky decisions for oneself, for one other subject, or for the decision maker and another person combined. We find a clear treatment effect when making decisions for others but not when making decisions for both
Correction: Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context
[This corrects the article DOI: 10.1371/journal.pone.0260952.]
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: Non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants
Psychological price perception may exert a weaker effect on purchasing decisions than previously suggested: Results from a large online experiment fail to reproduce either a left-digit or perceptual-fluency effect
Non-Standard Errors
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants
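A minimal sketch in Python of the distinction drawn in the abstract above, under assumptions not taken from the paper: the data come from a simple normal DGP, each hypothetical "team" differs only in an invented outlier-trimming rule, and all numbers are for illustration only. The standard error reflects uncertainty from re-drawing samples under one fixed analysis; the non-standard error reflects the spread of estimates across analysis choices applied to the same sample.

# Illustrative sketch (not from the paper): sampling uncertainty (DGP)
# versus dispersion across analyst choices (EGP) for a simple mean estimate.
import numpy as np

rng = np.random.default_rng(0)
population_mean = 1.0

def draw_sample(n=500):
    """One draw from a hypothetical data-generating process (DGP)."""
    return rng.normal(loc=population_mean, scale=2.0, size=n)

# Standard error: variation of the estimate when samples are re-drawn
# and the analysis is held fixed.
estimates_dgp = [draw_sample().mean() for _ in range(1000)]
standard_error = np.std(estimates_dgp)

# Non-standard error (illustrative): hold one sample fixed and vary the
# analysis pipeline, here mimicked by different outlier-trimming rules
# that hypothetical teams might choose.
sample = draw_sample()

def team_estimate(trim_quantile):
    """Mean after symmetric trimming at the given quantile."""
    lo, hi = np.quantile(sample, [trim_quantile, 1 - trim_quantile])
    kept = sample[(sample >= lo) & (sample <= hi)]
    return kept.mean()

estimates_egp = [team_estimate(q) for q in (0.0, 0.01, 0.025, 0.05, 0.10)]
non_standard_error = np.std(estimates_egp)

print(f"standard error (sampling):     {standard_error:.3f}")
print(f"non-standard error (analysts): {non_standard_error:.3f}")

Both quantities are standard deviations of the same estimand, so they sit on the same scale and can be compared directly, which is the sense in which the abstract reports non-standard errors "on par with standard errors".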