
    Accounting for impact? How the impact factor is shaping research and what this means for knowledge production

    Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric enter in, from the early stages of research planning to the later stages of publication. Even with initiatives against the use of impact factors, scientists themselves will likely err on the side of caution and continue to provide their scores on applications for funding and promotion.

    Beyond replicability in the humanities

    Merit, Expertise and Measurement

    From indicators to indicating interdisciplinarity: a participatory mapping methodology for research communities in-the-making

    This article discusses a project under development called “Inventing Indicators of Interdisciplinarity,” as an example of work in methodology development that combines quantitative methods with interpretative approaches in social and cultural research. Key to our project is the idea that Science and Technology Indicators not only have representative value, enabling empirical insight into fields of research and innovation, but simultaneously have organizing capacity, as their deployment enables the curation of communities of interpretation. We begin with a discussion of concepts and methods for the analysis of interdisciplinarity in Science and Technology Studies (STS) and scientometrics, stressing that both fields recognize that interdisciplinarity is contested. To make possible a constructive exploration of interdisciplinarity as a contested—and transformative—phenomenon, we sketch out a methodological framework for the development and deployment of “engaging indicators.” We characterize this methodology of indicating as participatory, abductive, interactive, and informed by design, and emphasize that the method is inherently combinatory, as it brings together approaches from scientometrics, STS, and humanities research. In a final section, we test the potential of our approach in a pilot study of interdisciplinarity in AI, and offer reflections on digital mapping as a pathway towards indicating interdisciplinarity.

    Imperfect, boring, headed for change? 10 ways to improve academic CV assessments

    Academic CVs play a major role in research assessment and in shaping academic fields by sorting and selecting promising researchers. Their role in structuring and prioritizing information is therefore significant, and has recently been criticised for facilitating judgements based predominantly on narrow quantitative measures. In this blogpost, Josh Brown, Wolfgang Kaltenbrunner, Michaela Strinzel, Sarah de Rijcke and Michael Hill assess the changing landscape of research CVs and give ten recommendations for how they can be used more effectively in research assessment.

    Algorithmic Allocation: Untangling Rival Considerations of Fairness in Research Management

    Marketization and quantification have become ingrained in academia over the past few decades. The trust in numbers and incentives has led to a proliferation of devices that individualize, induce, benchmark, and rank academic performance. As an instantiation of that trend, this article focuses on the establishment and contestation of ‘algorithmic allocation’ at a Dutch university medical centre. Algorithmic allocation is a form of data-driven automated reasoning that enables university administrators to calculate the overall research budget of a department without engaging in a detailed qualitative assessment of the current content and future potential of its research activities. It consists of a range of quantitative performance indicators covering scientific publications, peer recognition, PhD supervision, and grant acquisition. Drawing on semi-structured interviews, focus groups, and document analysis, we contrast the attempt to build a rationale for algorithmic allocation—citing unfair advantage, competitive achievement, incentives, and exchange—with the attempt to challenge that rationale based on existing epistemic differences between departments. From the specifics of the case, we extrapolate to considerations of epistemic and market fairness that might equally be at stake in other attempts to govern the production of scientific knowledge in a quantitative and market-oriented way.

    Advancing to the next level: the quantified self and the gamification of academic research through social networks

    Measurement of performance using digital tools is now commonplace, even in institutional activities such as academic research. The phenomenon of the “quantified self” is particularly evident in academic social networks. Björn Hammarfelt, Sarah de Rijcke, Alex Rushforth, Iris Wallenburg and Roland Bal argue that ResearchGate and similar services represent a “gamification” of research, drawing on features usually associated with online games, like rewards, rankings and levels. This carries obvious dangers, potentially promoting an understanding of the professional self as a product in competition with others. But quantification of the self in this way can also be seen as a way of taking control of one’s own (self-)evaluation. A similar pattern may be observed in healthcare and the rise of platforms carrying patient “experience” ratings and direct feedback on clinical performance.

    The humanities do not need a replication drive

    This piece argues that the humanities do not need a replication drive like the one being pushed for in the sciences.

    Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in the Netherlands

    The range and types of performance metrics have recently proliferated in academic settings, with bibliometric indicators being particularly visible examples. One field that has traditionally been hospitable towards such indicators is biomedicine. Here the relative merits of bibliometrics are widely discussed, with debates often portraying them as heroes or villains. Despite a plethora of controversies, one of the most widely used indicators in this field is said to be the Journal Impact Factor (JIF). In this article we argue that much of the current debate around researchers’ uses of the JIF in biomedicine can be classed as ‘folk theories’: explanatory accounts told among a community that seldom (if ever) get systematically checked. Such accounts rarely disclose how knowledge production itself becomes more-or-less consolidated around the JIF. Using ethnographic materials from different research sites in Dutch University Medical Centers, this article sheds new empirical and theoretical light on how performance metrics variously shape biomedical research on the ‘shop floor.’ Our detailed analysis underscores a need for further research into the constitutive effects of evaluative metrics.