69 research outputs found

    How Much Evidence Should One Collect?

    This paper focuses on the question of how much evidence one should collect before deciding on the truth-value of a proposition. It analyzes a model in which evidence takes the form of Bernoulli-distributed random variables. From a Bayesian perspective, the optimal strategy depends on the potential loss of drawing the wrong conclusion about the proposition and on the cost of collecting evidence. It turns out to be best to collect only small amounts of evidence unless the potential loss is very large relative to the cost of collecting evidence.
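    The trade-off described in this abstract can be made concrete with a small simulation. The sketch below is a minimal illustration with invented parameters (the Bernoulli biases, the per-sample cost, and the loss values are assumptions, not taken from the paper): it estimates the probability of a wrong conclusion after n samples by Monte Carlo and picks the n that minimizes expected loss plus evidence cost.

```python
import random

P_TRUE, P_FALSE = 0.7, 0.3   # assumed Bernoulli bias if the proposition is true / false
COST_PER_SAMPLE = 1.0        # assumed cost of collecting one piece of evidence
TRIALS = 10_000

def error_rate(n: int) -> float:
    """Monte Carlo estimate of the chance of a wrong conclusion after n samples."""
    errors = 0
    for _ in range(TRIALS):
        truth = random.random() < 0.5                     # uniform prior
        p = P_TRUE if truth else P_FALSE
        successes = sum(random.random() < p for _ in range(n))
        errors += (2 * successes > n) != truth            # majority rule = posterior rule here
    return errors / TRIALS

for loss in (10.0, 100.0, 10_000.0):
    total = {n: loss * error_rate(n) + COST_PER_SAMPLE * n
             for n in range(1, 80, 4)}                    # odd n only, to avoid ties
    best = min(total, key=total.get)
    print(f"loss={loss:>8}: best sample size around {best}")
```

    Running this shows the optimal sample size staying small until the loss dwarfs the per-sample cost, in line with the abstract's conclusion.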

    The Incentive to Share in the Intermediate Results Game

    I discuss a game-theoretic model in which scientists compete to finish the intermediate stages of some research project. Banerjee et al. have previously shown that if the credit awarded for intermediate results is proportional to their difficulty, then the strategy profile in which scientists share each intermediate stage as soon as they complete it is a Nash equilibrium. I show that this equilibrium is both unique and strict. Thus rational, credit-maximizing scientists have an incentive to share their intermediate results, as long as doing so is sufficiently rewarded.
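    The force of the sharing incentive can be illustrated with a toy Monte Carlo race. The sketch below is my own drastic simplification, not Banerjee et al.'s model: two scientists work independently through two stages, credit for a stage goes to whoever publishes it first in proportion to its difficulty, and a scientist who withholds finished stages until the end risks being scooped on them.

```python
import random

Q = 0.3                  # assumed per-tick chance of finishing the current stage
CREDIT = [1.0, 2.0]      # assumed credit per stage, proportional to difficulty
TRIALS = 50_000

def expected_credit(a_shares: bool) -> float:
    """Average credit earned by scientist A; B always shares immediately."""
    total = 0.0
    for _ in range(TRIALS):
        stage = [0, 0]             # next unfinished stage of A and of B
        claimed = [False, False]   # has each stage been published by someone?
        withheld = []              # stages A finished but is sitting on
        while stage[0] < 2:        # run until A completes the whole project
            for s in (0, 1):
                if stage[s] < 2 and random.random() < Q:
                    done, stage[s] = stage[s], stage[s] + 1
                    if s == 1 or a_shares:        # publish the stage at once
                        if not claimed[done]:
                            claimed[done] = True
                            if s == 0:
                                total += CREDIT[done]
                    else:
                        withheld.append(done)     # A keeps it in the drawer
        # A finally publishes whatever B has not already scooped.
        total += sum(CREDIT[d] for d in withheld if not claimed[d])
    return total / TRIALS

print("A shares immediately:  ", round(expected_credit(True), 3))
print("A withholds to the end:", round(expected_credit(False), 3))
```

    In this toy race, sharing strictly beats withholding for any completion probability strictly between 0 and 1, which is at least in the spirit of the uniqueness and strictness results claimed above.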

    When Journal Editors Play Favorites

    Should editors of scientific journals practice triple-blind reviewing? I consider two arguments in favor of this claim. The first says that insofar as editors' decisions are affected by information they would not have had under triple-blind review, an injustice is committed against certain authors. I show that even well-meaning editors would commit this wrong, and I endorse this argument. The second argument says that insofar as editors' decisions are affected by information they would not have had under triple-blind review, the quality of published papers will suffer. I distinguish two kinds of biases an editor might have, show that one has a positive effect on quality and the other a negative one, and show that the combined effect could go either way. Thus I do not endorse the second argument in general. However, I do endorse it for certain fields, in which I argue the positive effect does not apply.
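    The difference between the two kinds of bias can be seen in a small selection simulation. Everything in the sketch below is an invented stand-in rather than the paper's model: each paper gets a noisy referee signal, a "track-record" bias adds a second signal genuinely correlated with quality, a "favoritism" bias adds a quality-independent bump for a favored subset of authors, and the editor accepts the top-scoring fraction.

```python
import random
import statistics

random.seed(0)
N = 100_000          # papers per condition (assumed)
ACCEPT_TOP = 0.2     # editor accepts the top 20% by score (assumed)

def accepted_quality(track_record: bool, favoritism: bool) -> float:
    """Mean true quality of accepted papers under the given biases."""
    papers = []
    for _ in range(N):
        q = random.gauss(0, 1)                       # true quality
        score = q + random.gauss(0, 1)               # noisy referee signal
        if track_record:                             # informative bias
            score += 0.5 * (q + random.gauss(0, 1))
        if favoritism:                               # uninformative bias
            score += 1.0 * (random.random() < 0.2)   # bump for favored authors
        papers.append((score, q))
    papers.sort(reverse=True)
    accepted = papers[: int(ACCEPT_TOP * N)]
    return statistics.mean(q for _, q in accepted)

print("triple-blind:     ", round(accepted_quality(False, False), 3))
print("track-record bias:", round(accepted_quality(True, False), 3))
print("favoritism bias:  ", round(accepted_quality(False, True), 3))
print("both biases:      ", round(accepted_quality(True, True), 3))
```

    With these numbers the informative bias raises the average quality of accepted papers and the uninformative one lowers it; whether the combination helps or hurts depends on the assumed weights, mirroring the ambiguity the abstract points to.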

    Expediting the Flow of Knowledge Versus Rushing into Print

    Recent empirical work has shown that many scientific results may not be reproducible. By itself, this does not entail that there is a problem (or a "reproducibility crisis"). However, I argue that there is a problem: the reward structure of science incentivizes scientists to focus on speed and impact at the expense of the reproducibility of their work. I illustrate this using a well-known failure of reproducibility: Fleischmann and Pons' work on cold fusion. I then use a rational choice model to identify a set of sufficient conditions for this problem to arise, and I argue that these conditions plausibly apply to a wide range of research situations. In the conclusion I consider possible solutions and implications for how Fleischmann and Pons' work should be evaluated.
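    A bare-bones expected-payoff comparison shows how such a reward structure can favor rushing. All numbers below are invented for illustration, not taken from the paper's model: publishing now secures the credit with certainty, while pausing to replicate raises the chance the result holds up but risks being scooped.

```python
CREDIT = 10.0    # assumed credit for publishing a result first
PENALTY = 2.0    # assumed (small) reputational cost if it fails to replicate
P_SCOOP = 0.6    # assumed chance of being scooped while running extra checks

def expected_payoff(p_reproducible: float, wait: bool) -> float:
    """Expected credit from publishing now (wait=False) or replicating first (wait=True)."""
    p_publish = 1.0 - P_SCOOP if wait else 1.0
    return p_publish * (p_reproducible * CREDIT
                        - (1.0 - p_reproducible) * PENALTY)

print(f"rush:    {expected_payoff(0.5, wait=False):.2f}")   # 4.00
print(f"careful: {expected_payoff(0.9, wait=True):.2f}")    # 3.52
```

    Under these assumptions rushing pays more even though it halves the chance the result is reproducible; raising PENALTY or lowering P_SCOOP flips the ordering, which is the flavor of the sufficient conditions the abstract mentions.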
