
    Can We Test for Bias in Scientific Peer-Review?

    Science rests upon the reliability of peer review. This paper suggests a way to test for bias. It is able to avoid the fallacy -- one seen in the popular press and the research literature -- that to measure discrimination it is sufficient to study averages within two populations. The paper’s contribution is primarily methodological, but I apply it, as an illustration, to data from the field of economics. No scientific bias or favoritism is found (although the Journal of Political Economy discriminates against its own Chicago authors). The test’s methodology is applicable in most scholarly disciplines.
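
    The abstract does not spell out the test itself, but the averages fallacy it flags is easy to demonstrate with a small simulation. The sketch below is a hedged illustration, not the paper's actual method: the normal quality distribution and the acceptance thresholds are invented. It shows why simple population averages cannot settle the question of bias: when one group is held to a stricter acceptance bar, that group's accepted papers end up better on average, so discrimination leaves its trace in the quality of published work at the margin.

```python
import random

random.seed(0)

def mean_accepted_quality(threshold, n=200_000):
    """Average quality of papers that clear a given acceptance bar.

    Paper quality is drawn from the same standard normal for every group;
    only the acceptance threshold differs. Both choices are illustrative
    assumptions, not the paper's model.
    """
    accepted = [q for q in (random.gauss(0, 1) for _ in range(n)) if q > threshold]
    return sum(accepted) / len(accepted)

# Same bar for both groups: accepted work looks identical on average.
print(mean_accepted_quality(1.0), mean_accepted_quality(1.0))

# One group held to a stricter bar: although both groups draw from the same
# quality distribution, the disfavored group's *published* papers are better
# on average -- a trace of bias that raw group averages alone would miss.
print(mean_accepted_quality(1.0), mean_accepted_quality(1.5))
```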

    Opening the Black-Box of Peer Review: An Agent-Based Model of Scientist Behaviour

    This paper investigates the impact of referee behaviour on the quality and efficiency of peer review. We focused on the importance of reciprocity motives in ensuring cooperation between all involved parties. We modelled peer review as a process based on knowledge asymmetries and subject to evaluation bias. We built various simulation scenarios in which we tested different interaction conditions and author and referee behaviour. We found that reciprocity does not per se have a positive effect on the quality of peer review, as it may tend to increase evaluation bias; it has a positive effect only when reciprocity motives are inspired by disinterested standards of fairness.
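
    As a companion to the abstract, here is a minimal agent-based sketch of the mechanism it describes. This is not the authors' model: the pool size, noise level, goodwill decay, and the way reciprocity feeds back into scoring are all invented for illustration. It reproduces only the qualitative finding that a self-interested reciprocity motive inflates scores and raises the rate of wrong accept/reject decisions.

```python
import random

random.seed(1)

class Referee:
    """Toy referee: reported score = true quality + noise + goodwill."""
    def __init__(self):
        self.goodwill = 0.0  # built up when this agent's own work was judged generously

    def review(self, quality):
        score = quality + random.gauss(0, 0.3) + self.goodwill
        self.goodwill *= 0.9  # the reciprocity motive fades over time
        return score

def error_rate(rounds=20_000, reciprocity=0.3, threshold=0.5):
    """Share of accept/reject decisions that contradict true quality."""
    agents = [Referee() for _ in range(50)]
    wrong = 0
    for _ in range(rounds):
        author, referee = random.sample(agents, 2)
        quality = random.random()                  # true quality in [0, 1)
        accepted = referee.review(quality) > threshold
        if accepted and reciprocity:
            author.goodwill += reciprocity         # generosity begets generosity
        wrong += accepted != (quality > threshold)
    return wrong / rounds

print("no reciprocity:  ", error_rate(reciprocity=0.0))
print("with reciprocity:", error_rate(reciprocity=0.3))
```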

    Automatically detecting open academic review praise and criticism

    This is an accepted manuscript of an article published by Emerald in Online Information Review on 15 June 2020. The accepted version of the publication may differ from the final published version, accessible at https://doi.org/10.1108/OIR-11-2019-0347.
    Purpose: Peer reviewer evaluations of academic papers are known to be variable in content and overall judgements but are important academic publishing safeguards. This article introduces a sentiment analysis program, PeerJudge, to detect praise and criticism in peer evaluations. It is designed to support editorial management decisions and reviewers in the scholarly publishing process and for grant funding decision workflows. The initial version of PeerJudge is tailored for reviews from F1000Research’s open peer review publishing platform.
    Design/methodology/approach: PeerJudge uses a lexical sentiment analysis approach with a human-coded initial sentiment lexicon and machine learning adjustments and additions. It was built with an F1000Research development corpus and evaluated on a different F1000Research test corpus using reviewer ratings.
    Findings: PeerJudge can predict F1000Research judgements from negative evaluations in reviewers’ comments more accurately than baseline approaches, although not from positive reviewer comments, which seem to be largely unrelated to reviewer decisions. Within the F1000Research mode of post-publication peer review, the absence of any detected negative comments is a reliable indicator that an article will be ‘approved’, but the presence of moderately negative comments could lead to either an approved or approved with reservations decision.
    Originality/value: PeerJudge is the first transparent AI approach to peer review sentiment detection. It may be used to identify anomalous reviews with text potentially not matching judgements for individual checks or systematic bias assessments.
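
    The article describes a lexical approach: score review text against a weighted sentiment lexicon and flag criticism. The fragment below is a deliberately minimal sketch of that idea, not PeerJudge itself; the word lists, weights, and the score_review function are invented, and the real program adds human coding and machine-learnt adjustments.

```python
# Minimal lexicon-based praise/criticism scorer. All words and weights
# here are illustrative assumptions, not the PeerJudge lexicon.
NEGATIVE = {"unclear": -1, "flawed": -2, "missing": -1, "weak": -1, "unconvincing": -2}
POSITIVE = {"clear": 1, "rigorous": 2, "novel": 1, "thorough": 1, "convincing": 2}
LEXICON = {**NEGATIVE, **POSITIVE}

def score_review(text):
    """Sum lexicon weights over the review's words; negative totals flag criticism."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return sum(LEXICON.get(w, 0) for w in words)

review = "The method is novel, but the evaluation is weak and key details are missing."
s = score_review(review)
print(s, "-> criticism detected" if s < 0 else "-> no criticism detected")
```

    Consistent with the findings reported above, a scorer like this is most useful as a negative-signal detector: the absence of criticism terms is the informative case, while praise terms carry little decision signal.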

    The Necessity of Commensuration Bias in Grant Peer Review

    Peer reviewers at many funding agencies and scientific journals are asked to score submissions both on individual criteria and overall. The overall scores should be some kind of aggregate of the criteria scores. Carole Lee identifies this as a potential locus for bias to enter the peer review process, which she calls commensuration bias. Here I view the aggregation of scores through the lens of social choice theory. I argue that in many situations, especially when reviewing grant proposals, it is impossible to avoid commensuration bias.
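
    The abstract's social-choice point can be made concrete with a toy example. In the sketch below (the proposals, scores, and weights are invented for illustration), two defensible weightings of the same criteria scores completely reverse the ranking of three hypothetical proposals: any rule that produces an overall score has already taken a contestable stand on how the criteria commensurate.

```python
# Three hypothetical proposals scored 1-5 on three criteria
# (score order: significance, approach, feasibility).
proposals = {
    "A": (5, 2, 3),
    "B": (3, 4, 4),
    "C": (2, 5, 5),
}

def rank(weights):
    """Rank proposals by a weighted sum of their criteria scores."""
    overall = {p: sum(w * s for w, s in zip(weights, scores))
               for p, scores in proposals.items()}
    return sorted(overall, key=overall.get, reverse=True)

print(rank((1, 1, 1)))  # equal weights          -> ['C', 'B', 'A']
print(rank((3, 1, 1)))  # significance-heavy     -> ['A', 'B', 'C']
```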
