10 research outputs found

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
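    The distinction between the two error types can be illustrated with a minimal simulation. This is a hypothetical sketch, not the study's method: the "teams" and their trimming choices below are invented stand-ins for real analytic decisions. The standard error captures sampling uncertainty within a single analysis, while the non-standard error is the dispersion of point estimates across teams analyzing the same data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One shared sample: as in the study, every team analyzes the same data.
    data = rng.normal(loc=1.0, scale=2.0, size=500)

    # Standard error: sampling uncertainty of the mean, for a single analysis.
    standard_error = data.std(ddof=1) / np.sqrt(len(data))

    # Hypothetical analytic choices: each "team" trims outliers differently
    # before estimating the mean (a stand-in for real design decisions).
    trim_quantiles = [0.00, 0.01, 0.02, 0.05, 0.10]
    team_estimates = []
    for q in trim_quantiles:
        lo, hi = np.quantile(data, [q, 1 - q])
        team_estimates.append(data[(data >= lo) & (data <= hi)].mean())

    # Non-standard error: dispersion of point estimates across teams.
    non_standard_error = np.std(team_estimates, ddof=1)

    print(f"standard error:     {standard_error:.4f}")
    print(f"non-standard error: {non_standard_error:.4f}")
    ```

    The same shared sample yields one standard error but several point estimates, so the NSE is nonzero purely because the teams made different defensible choices.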

    On the value of second opinions: A credence goods field experiment

    Credence goods markets, with their asymmetric information between buyers and sellers, are prone to large inefficiencies. In theory, poorly informed consumers can protect themselves from maltreatment by sellers by gathering second opinions from other sellers. Yet field-experimental evidence on whether this is a successful strategy is scarce. Here we present a natural field experiment in the market for computer repairs and show that revealing a second opinion from another expert neither increases the rate of successful repairs nor decreases the average repair price charged by sellers.

    The Roots of Cooperation

    We study the development of cooperation in 929 young children, aged 3 to 6. In a unified experimental framework, we examine pre-registered hypotheses about which of three fundamental pillars of human cooperation – direct and indirect reciprocity, and third-party punishment – emerges earliest as a means to increase cooperation in a repeated prisoner's dilemma game. We find that third-party punishment doubles cooperation rates in comparison to a control condition. Children also reciprocate others' behavior, yet direct and indirect reciprocity do not increase overall cooperation rates. We also examine the influence of children's cognitive skills and parents' socioeconomic background on cooperation.

    Non-Standard Errors

    Working paper URL: https://centredeconomiesorbonne.cnrs.fr/publications/ (Documents de travail du Centre d'Economie de la Sorbonne 2021.33, ISSN 1955-611X). See also this working paper on SSRN: https://ssrn.com/abstract=3981597.
    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
