
    Strategic risk and response time across games

    Experimental data for two types of bargaining games are used to study the role of strategic risk in the decision-making process that takes place when subjects play a game only once. The bargaining games are the Ultimatum Game (UG) and the Yes-or-No Game (YNG). Strategic risk in a game stems from the effect of other players' behavior on one player's payoff. In the UG this risk is high, while it is nearly absent in the YNG. In studying subjects' decision-making process, we use the time elapsed before a choice is made (response time) as a proxy for the amount of thought or introspection. We find that response times are on average longer in the UG than in the YNG, indicating a positive correlation between strategic risk and introspection. In both games the behavior of subjects with long response times is more dispersed than that of subjects with short response times. In the UG longer response times are associated with less generous and thus riskier behavior, while in the YNG they are associated with more generous behavior.

    Promoting Intellectual Discovery: Patents Versus Markets


    An Experiment on Prediction Markets in Science

    Prediction markets are powerful forecasting tools. They have the potential to aggregate private information, to generate and disseminate a consensus among the market participants, and to provide incentives for information acquisition. These market functionalities can be very valuable for scientific research. Here, we report an experiment that examines the compatibility of prediction markets with the current practice of scientific publication. We investigated three settings. In the first setting, different pieces of information were disclosed to the public during the experiment. In the second setting, participants received private information. In the third setting, each piece of information was private at first, but was subsequently disclosed to the public. An automated, subsidizing market maker provided additional incentives for trading and mitigated liquidity problems. We find that the third setting combines the advantages of the first and second settings. Market performance was as good as in the setting with public information, and better than in the setting with private information. In contrast to the first setting, participants could benefit from information advantages. Thus the publication of information does not detract from the functionality of prediction markets. We conclude that for integrating prediction markets into the practice of scientific research it is advantageous to use subsidizing market makers and to keep markets aligned with current publication practice.
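    A common choice for an automated, subsidizing market maker is Hanson's logarithmic market scoring rule (LMSR), whose liquidity parameter bounds the subsidy the maker can lose. The abstract does not specify the mechanism used in the experiment, so the following is a generic LMSR sketch; the class and parameter names are illustrative, not taken from the paper.

    ```python
    import math

    class LMSRMarketMaker:
        """Logarithmic market scoring rule (Hanson) market maker.

        The liquidity parameter b subsidizes trading: the maker's
        worst-case loss is bounded by b * ln(n_outcomes).
        """

        def __init__(self, n_outcomes: int, b: float = 100.0):
            self.b = b
            self.q = [0.0] * n_outcomes  # shares sold of each outcome

        def cost(self, q) -> float:
            # LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))
            return self.b * math.log(sum(math.exp(x / self.b) for x in q))

        def price(self, i: int) -> float:
            """Current implied probability of outcome i (prices sum to 1)."""
            denom = sum(math.exp(x / self.b) for x in self.q)
            return math.exp(self.q[i] / self.b) / denom

        def buy(self, i: int, shares: float) -> float:
            """Sell `shares` of outcome i to a trader; return the trader's cost."""
            new_q = list(self.q)
            new_q[i] += shares
            fee = self.cost(new_q) - self.cost(self.q)
            self.q = new_q
            return fee

    mm = LMSRMarketMaker(n_outcomes=2, b=100.0)
    p_before = mm.price(0)      # 0.5 before any trades
    trade_cost = mm.buy(0, 50.0)
    p_after = mm.price(0)       # buying outcome 0 pushes its price up
    ```

    Because the cost function is always finite and traders can always transact at the quoted marginal price, the maker never runs out of liquidity; the bounded worst-case loss is the subsidy the experimenter pays for this.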

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors (NSEs). We study NSEs by letting 164 teams test the same hypotheses on the same data. NSEs turn out to be sizable, but smaller for more reproducible or higher-rated research. Adding peer-review stages reduces NSEs. We further find that participants underestimate this type of uncertainty.
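    The DGP/EGP distinction can be made concrete with a toy simulation (not the paper's actual design): several hypothetical "teams" analyze one fixed sample but make different, individually defensible analysis choices, and the dispersion of their estimates plays the role of a non-standard error alongside the conventional standard error. All numbers and choices below are illustrative.

    ```python
    import random
    import statistics

    random.seed(7)

    # The DGP happens once: one fixed sample drawn from the population.
    sample = [random.gauss(10.0, 3.0) for _ in range(500)]

    # Conventional standard error of the sample mean.
    std_error = statistics.stdev(sample) / len(sample) ** 0.5

    # EGP variation: each team trims a different fraction of extreme
    # observations before estimating the mean -- all defensible choices.
    def team_estimate(data, trim_frac):
        k = int(len(data) * trim_frac)
        trimmed = sorted(data)[k:len(data) - k] if k else list(data)
        return statistics.mean(trimmed)

    trim_choices = [0.0, 0.01, 0.02, 0.05, 0.10]
    estimates = [team_estimate(sample, f) for f in trim_choices]

    # Dispersion of estimates across teams: a toy "non-standard error".
    non_standard_error = statistics.stdev(estimates)
    ```

    The point of the construction is that `non_standard_error` is not driven by sampling noise at all: the sample is held fixed, and the spread comes entirely from researcher choices.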

    Experimental Finance (Finanzas Experimentales)

    The chapter reviews the main contributions to the study of asset prices in competitive markets using experiments with human subjects. The equilibrium notions that are studied are presented to the student and then discussed using the specific experimental setup. The notions considered are a simple risk-neutral NPV, the Arrow-Debreu equilibrium, the Radner equilibrium with the special case of the CAPM, and the rational expectations equilibrium. These are used to motivate experiments with dynamic one-asset markets, static multiple-asset markets, and markets with private information, respectively.
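    The first equilibrium notion, a simple risk-neutral NPV, can be sketched numerically: a risk-neutral trader should price an asset at the expected present value of its remaining dividends. The dividend distribution and horizon below are illustrative numbers, not taken from the chapter.

    ```python
    # Risk-neutral NPV: with risk-neutral traders, an asset paying a random
    # dividend each period should trade at the expected present value of
    # its remaining dividends.

    def risk_neutral_npv(dividend_outcomes, probabilities, periods_left,
                         discount_rate=0.0):
        expected_dividend = sum(d * p for d, p in zip(dividend_outcomes,
                                                      probabilities))
        return sum(expected_dividend / (1 + discount_rate) ** t
                   for t in range(1, periods_left + 1))

    # Illustrative parameterization: dividend is 0, 8, 28, or 60 with
    # equal probability, 15 trading periods remain, no discounting.
    price = risk_neutral_npv([0, 8, 28, 60], [0.25] * 4, periods_left=15)
    # expected dividend = 24 per period, so price = 24 * 15 = 360
    ```

    In the dynamic one-asset experiments this notion motivates, the benchmark price declines by the expected dividend each period as the horizon shortens; deviations from that declining path are what the experiments measure.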

    Non-Standard Errors

    In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.