71 research outputs found

    Accounting for impact? How the impact factor is shaping research and what this means for knowledge production

    Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric enter in, from the early stages of research planning to the later stages of publication. Even with initiatives against the use of impact factors, scientists themselves will likely err on the side of caution and continue to provide their scores on applications for funding and promotion.

    Beyond replicability in the humanities

    Merit, Expertise and Measurement

    From indicators to indicating interdisciplinarity: a participatory mapping methodology for research communities in-the-making

    This article discusses a project under development called “Inventing Indicators of Interdisciplinarity,” as an example of methodology development that combines quantitative methods with interpretative approaches in social and cultural research. Key to our project is the idea that Science and Technology Indicators not only have representative value, enabling empirical insight into fields of research and innovation, but also have organizing capacity, as their deployment enables the curation of communities of interpretation. We begin with a discussion of concepts and methods for the analysis of interdisciplinarity in Science and Technology Studies (STS) and scientometrics, stressing that both fields recognize that interdisciplinarity is contested. To make possible a constructive exploration of interdisciplinarity as a contested—and transformative—phenomenon, we sketch out a methodological framework for the development and deployment of “engaging indicators.” We characterize this methodology of indicating as participatory, abductive, interactive, and informed by design, and emphasize that the method is inherently combinatory, as it brings together approaches from scientometrics, STS, and humanities research. In a final section, we test the potential of our approach in a pilot study of interdisciplinarity in AI, and offer reflections on digital mapping as a pathway towards indicating interdisciplinarity.
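    As an editorial illustration of the kind of scientometric mapping the abstract refers to, the sketch below builds a co-occurrence map of subject categories across a handful of publications and computes a crude diversity score. The publication records, category names, and scoring rule are assumptions made for illustration only; they are not the indicators developed in the “Inventing Indicators of Interdisciplinarity” project.

```python
# Minimal sketch of a scientometric-style interdisciplinarity mapping:
# count how often subject categories co-occur on the same publication,
# then compute a simple diversity score. All data here is hypothetical.
from collections import Counter
from itertools import combinations

publications = [
    {"title": "Learning fairness metrics", "categories": ["Computer Science", "Ethics"]},
    {"title": "Neural models of language", "categories": ["Computer Science", "Linguistics"]},
    {"title": "AI and labour markets", "categories": ["Economics", "Computer Science", "Sociology"]},
]

# Co-occurrence counts between subject categories: the raw material for a
# "map" that a community of researchers could then explore and reinterpret.
cooccurrence = Counter()
for pub in publications:
    for a, b in combinations(sorted(set(pub["categories"])), 2):
        cooccurrence[(a, b)] += 1

# A crude indicator: the share of publications spanning two or more categories.
multi = sum(1 for p in publications if len(set(p["categories"])) > 1)
diversity = multi / len(publications)

print(dict(cooccurrence))
print(f"Share of multi-category publications: {diversity:.2f}")
```

    Such a co-occurrence map is only a starting point; the article's point is precisely that what counts as interdisciplinarity is contested and has to be worked out with the communities being mapped.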

    Imperfect, boring, headed for change? 10 ways to improve academic CV assessments

    Academic CVs play a major role in research assessment and in shaping academic fields by sorting and selecting promising researchers. Their role in structuring and prioritizing information is therefore significant, and it has recently been criticised for facilitating judgements based predominantly on narrow quantitative measures. In this blogpost, Josh Brown, Wolfgang Kaltenbrunner, Michaela Strinzel, Sarah de Rijcke and Michael Hill assess the changing landscape of research CVs and give ten recommendations for how they can be used more effectively in research assessment.

    Advancing to the next level: the quantified self and the gamification of academic research through social networks

    Measurement of performance using digital tools is now commonplace, even in institutional activities such as academic research. The phenomenon of the “quantified self” is particularly evident in academic social networks. Björn Hammarfelt, Sarah de Rijcke, Alex Rushforth, Iris Wallenburg and Roland Bal argue that ResearchGate and similar services represent a “gamification” of research, drawing on features usually associated with online games, such as rewards, rankings and levels. This carries obvious dangers, potentially promoting an understanding of the professional self as a product in competition with others. But quantification of the self in this way can also be seen as a way of taking control of one’s own (self-)evaluation. A similar pattern may be observed in healthcare, with the rise of platforms carrying patient “experience” ratings and direct feedback on clinical performance.

    Algorithmic Allocation: Untangling Rival Considerations of Fairness in Research Management

    Marketization and quantification have become ingrained in academia over the past few decades. The trust in numbers and incentives has led to a proliferation of devices that individualize, induce, benchmark, and rank academic performance. As an instantiation of that trend, this article focuses on the establishment and contestation of ‘algorithmic allocation’ at a Dutch university medical centre. Algorithmic allocation is a form of data-driven automated reasoning that enables university administrators to calculate the overall research budget of a department without engaging in a detailed qualitative assessment of the current content and future potential of its research activities. It consists of a range of quantitative performance indicators covering scientific publications, peer recognition, PhD supervision, and grant acquisition. Drawing on semi-structured interviews, focus groups, and document analysis, we contrast the attempt to build a rationale for algorithmic allocation—citing unfair advantage, competitive achievement, incentives, and exchange—with the attempt to challenge that rationale based on existing epistemic differences between departments. From the specifics of the case, we extrapolate to considerations of epistemic and market fairness that might equally be at stake in other attempts to govern the production of scientific knowledge in a quantitative and market-oriented way.
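    To make the notion of algorithmic allocation concrete, here is a minimal sketch of how a department's budget share could be derived from a weighted combination of normalised performance indicators. The department names, indicators, weights, and figures are hypothetical assumptions for illustration and are not the formula used at the medical centre studied in the article.

```python
# Illustrative sketch of an "algorithmic allocation" scheme: each department's
# budget share follows from a weighted sum of indicators, each normalised by
# its total across departments. All names, weights, and numbers are hypothetical.
departments = {
    "Cardiology":    {"publications": 120, "phd_completions": 8,  "grant_income": 2.5},
    "Neurology":     {"publications": 95,  "phd_completions": 12, "grant_income": 1.8},
    "Public Health": {"publications": 60,  "phd_completions": 6,  "grant_income": 3.2},
}
weights = {"publications": 0.5, "phd_completions": 0.2, "grant_income": 0.3}

# Normalise each indicator by its total across departments, then combine.
totals = {k: sum(d[k] for d in departments.values()) for k in weights}
scores = {
    name: sum(weights[k] * d[k] / totals[k] for k in weights)
    for name, d in departments.items()
}

total_budget = 10_000_000  # euros, hypothetical
for name, score in scores.items():
    share = score / sum(scores.values())
    print(f"{name}: {share:.1%} of budget = €{share * total_budget:,.0f}")
```

    The appeal of such a scheme to administrators is that the calculation needs no qualitative judgement; the article's point is that this is exactly where epistemic differences between departments get flattened out.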

    The humanities do not need a replication drive

    This piece argues that the humanities do not need a replication drive like the one being pushed for in the sciences.