2 research outputs found

    Should the impact factor of the year of publication or the last available one be used when evaluating scientists?

    Aim of study: A common procedure when evaluating scientists is to consider the journal's impact factor quartile (within a category), often using the quartile of the year of publication rather than the last available ranking. We tested whether the extra work involved in considering the quartiles of each particular year is justified.
    Area of study: Europe.
    Material and methods: We retrieved information from all papers published in 2008-2012 by researchers of AGROTECNIO, a centre focused on a range of agri-food subjects. We then validated the results observed for AGROTECNIO against five other independent European research centres: the Technical University of Madrid (UPM) and the Universities of Nottingham (UK), Copenhagen (Denmark), Helsinki (Finland), and Bologna (Italy).
    Main results: The relationship between the actual impact of the papers and the impact factor quartile of a journal within its category was not clear, although for evaluations based on recently published papers there may be no much better indicator. We found it unnecessary to determine the rank of the journal for the year of publication, as the outcome of the evaluation using the last available rank was virtually the same.
    Research highlights: We confirmed that journal quality reflects the quality of individual papers only vaguely, and reported for the first time evidence that using the journal rank from the particular year in which papers were published is an unnecessary effort; evaluation can therefore be done simply with the last available rank.

    ICIS 2017 Panel Report: Break Your Shackles! Emancipating Information Systems from the Tyranny of Peer Review

    This paper presents the report of a panel that debated the review process in the information systems (IS) discipline at ICIS 2017 in Seoul, Korea. The panel asked the fundamental question of whether we need to rethink the way we review papers in the discipline. The panelists partnered with the audience to explore limitations of reviewing in IS today and the ways that reviewing in the discipline might change to address some of its difficulties. We first report key concerns with modern reviewing. We then present arguments for and against three proposals (i.e., paying for reviews, mandatory reviews, and open reviews) and a panel audience vote on the issues. We neither advocate for nor condemn these solutions but rather use them to illustrate what we believe are the core underlying issues with reviewing in the IS discipline. Specifically, we believe the key stumbling blocks to effectively improving our review process include 1) a lack of empirical data on actual practice, 2) a lack of clear goals, and 3) an ignorance of the possible solutions to the review dilemma that the wider literature articulates.