
    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that this type of uncertainty is underestimated by participants.

    Do commissions and boni affect advisor behavior? An online experiment using Amazon MTurk

    Modern households bear growing responsibility for their own saving and investment decisions. Since financial markets have become more sophisticated and financial products more complex, investors usually lack the knowledge to meet this challenge and increasingly rely on experts to help with these decisions. Information asymmetries in these relationships, paired with the fact that financial advisors draw most of their income from incentive pay, can lead to inefficiencies in these markets if the form of an advisor's compensation distorts their recommendations. This thesis tests, first, whether advisors are receptive to different forms of incentive pay, namely commissions and bonus payments, and second, which of the two instruments distorts advice more strongly. To do so, I conducted an online experiment on the platform Amazon Mechanical Turk (MTurk). In total, 258 subjects participated in the experiment, of whom only 150 passed the control questions and were therefore admitted to the final analysis. The results of the experiment are inconclusive. Overall, the outcome suggests that MTurk is not an appropriate tool for this kind of experiment, which elicits choices via the strategy method, since participants do not pay sufficient attention to the experimental instructions.
    Thomas Rittmannsberger, University of Innsbruck, Masterarbeit, 2019. (VLID) 346156

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
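
    As a worked illustration of the distinction the abstract draws, here is a minimal Python sketch (not taken from the paper): the standard error captures sampling uncertainty within one fixed analysis, while the non-standard error is the dispersion of estimates across analysts applying different, equally defensible choices to the same sample. The synthetic sample and the outlier-trimming rules standing in for "teams" are illustrative assumptions, not the study's data or hypotheses.

    # Minimal sketch (assumptions, not the paper's method): contrast the
    # standard error of one analysis with the spread of estimates across
    # simulated "teams" that differ only in an analysis choice.
    import numpy as np

    rng = np.random.default_rng(0)
    sample = rng.standard_normal(1_000)  # one fixed sample, as in the study design

    # Standard error: sampling uncertainty of the mean under one analysis.
    standard_error = sample.std(ddof=1) / np.sqrt(len(sample))

    # Non-standard error: dispersion of estimates across analysis choices.
    trim_quantiles = [0.0, 0.01, 0.025, 0.05]  # hypothetical per-team trimming rules
    estimates = []
    for q in trim_quantiles:
        lo, hi = np.quantile(sample, [q, 1 - q])
        trimmed = sample[(sample >= lo) & (sample <= hi)]
        estimates.append(trimmed.mean())
    non_standard_error = np.std(estimates, ddof=1)

    print(f"standard error:     {standard_error:.4f}")
    print(f"non-standard error: {non_standard_error:.4f}")

    With more consequential analysis choices (variable definitions, filters, model specifications), the across-team dispersion can reach the same order as the standard error, which is the sense in which the paper calls non-standard errors "on par" with standard errors.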