11 research outputs found

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.
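The distinction the abstract draws can be made concrete with a toy sketch (hypothetical numbers, not from the study): a standard error quantifies sampling uncertainty within one analysis, while a non-standard error is the dispersion of point estimates across teams that all analyzed the same data.

```python
import statistics as st

# Hypothetical illustration: each "team" reports a point estimate for the
# same hypothesis tested on the same data (numbers invented for the sketch).
team_estimates = [0.12, 0.05, 0.20, 0.08, 0.15, 0.11]

# Standard error: within-analysis sampling uncertainty, here shown via the
# usual formula s / sqrt(n) on a toy sample drawn by one team.
sample = [0.9, 1.1, 1.3, 0.8, 1.0]
se = st.stdev(sample) / len(sample) ** 0.5

# Non-standard error: dispersion of point estimates *across* teams,
# reflecting evidence-generating-process (EGP) variation.
nse = st.stdev(team_estimates)

print(f"standard error (toy sample): {se:.3f}")
print(f"non-standard error (across teams): {nse:.3f}")
```

The paper's finding that NSEs are "on par with standard errors" amounts to saying the two dispersions above are of similar magnitude in their multi-team exercise.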

    Essays on behavioral finance

    Although almost everyone faces risk in daily life and risk is a crucial ingredient in economic models, including asset pricing models, how decision-makers and investors evaluate risk remains an open debate. Experimental and empirical evidence shows that standard expected utility theory falls short of explaining many economic and asset pricing phenomena. Behavioral finance provides alternative conceptual frameworks to explain these phenomena. This dissertation consists of three chapters investigating the impacts of some of these frameworks. Chapter 1 investigates the potential impact of expected utility theory with an aspiration level on stock returns. Chapter 2 investigates the impact of the law of small numbers on stock returns. Chapter 3 investigates the relation between time discounting and risk-taking in an experiment.

    The pernicious role of asymmetric history in negotiations

    The role of history in negotiations is a double-edged sword. Although parties can develop trust over time, there are also countless examples of protracted feuds that developed from conflicting interpretations and invocations of history. We propose that, due to biased invocations of the past, history is likely to play a pernicious role in negotiations, particularly given an asymmetric history in which one party benefited at the expense of the other. We test this prediction in two two-stage experiments. We find that asymmetric history in a first stage leads to increased impasses in a second stage, but that this effect holds only when the second stage pairs the same two parties who shared the asymmetric history in the first stage.

    Nonstandard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty—nonstandard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
