91 research outputs found
Accounting for impact? How the impact factor is shaping research and what this means for knowledge production
Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric enter in, from the early stages of research planning to the later stages of publication. Even with initiatives against the use of impact factors, scientists themselves will likely err on the side of caution and continue to provide their scores on applications for funding and promotion.
Beyond replicability in the humanities
Merit, Expertise and Measurement
From indicators to indicating interdisciplinarity: a participatory mapping methodology for research communities in-the-making
This article discusses a project under development called 'Inventing Indicators of Interdisciplinarity,' as an example of work in methodology development that combines quantitative methods with interpretative approaches in social and cultural research. Key to our project is the idea that Science and Technology Indicators do not only have representative value, enabling empirical insight into fields of research and innovation, but simultaneously have organizing capacity, as their deployment enables the curation of communities of interpretation. We begin with a discussion of concepts and methods for the analysis of interdisciplinarity in Science and Technology Studies (STS) and scientometrics, stressing that both fields recognize that interdisciplinarity is contested. To make possible a constructive exploration of interdisciplinarity as a contested (and transformative) phenomenon, we sketch out a methodological framework for the development and deployment of 'engaging indicators.' We characterize this methodology of indicating as participatory, abductive, interactive, and informed by design, and emphasize that the method is inherently combinatory, as it brings together approaches from scientometrics, STS, and humanities research. In a final section, we test the potential of our approach in a pilot study of interdisciplinarity in AI, and offer reflections on digital mapping as a pathway towards indicating interdisciplinarity.
Imperfect, boring, headed for change? 10 ways to improve academic CV assessments
Academic CVs play a major role in research assessment and in shaping academic fields by sorting and selecting promising researchers. Their role in structuring and prioritizing information is therefore significant and has recently been criticised for facilitating judgements based predominantly on narrow quantitative measures. In this blog post, Josh Brown, Wolfgang Kaltenbrunner, Michaela Strinzel, Sarah de Rijcke and Michael Hill assess the changing landscape of research CVs and give ten recommendations for how they can be used more effectively in research assessment.
Advancing to the next level: the quantified self and the gamification of academic research through social networks
Measurement of performance using digital tools is now commonplace, even in institutional activities such as academic research. The phenomenon of the 'quantified self' is particularly evident in academic social networks. Björn Hammarfelt, Sarah de Rijcke, Alex Rushforth, Iris Wallenburg and Roland Bal argue that ResearchGate and similar services represent a 'gamification' of research, drawing on features usually associated with online games, like rewards, rankings and levels. This carries obvious dangers, potentially promoting an understanding of the professional self as a product in competition with others. But quantification of the self in this way can also be seen as a way of taking control of one's own (self-)evaluation. A similar pattern may be observed in healthcare and the rise of platforms carrying patient 'experience' ratings and direct feedback on clinical performance.
Algorithmic Allocation: Untangling Rival Considerations of Fairness in Research Management
Marketization and quantification have become ingrained in academia over the past few decades. The trust in numbers and incentives has led to a proliferation of devices that individualize, induce, benchmark, and rank academic performance. As an instantiation of that trend, this article focuses on the establishment and contestation of 'algorithmic allocation' at a Dutch university medical centre. Algorithmic allocation is a form of data-driven automated reasoning that enables university administrators to calculate the overall research budget of a department without engaging in a detailed qualitative assessment of the current content and future potential of its research activities. It consists of a range of quantitative performance indicators covering scientific publications, peer recognition, PhD supervision, and grant acquisition. Drawing on semi-structured interviews, focus groups, and document analysis, we contrast the attempt to build a rationale for algorithmic allocation (citing unfair advantage, competitive achievement, incentives, and exchange) with the attempt to challenge that rationale based on existing epistemic differences between departments. From the specifics of the case, we extrapolate to considerations of epistemic and market fairness that might equally be at stake in other attempts to govern the production of scientific knowledge in a quantitative and market-oriented way.
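To make the mechanism concrete, the following is a minimal illustrative sketch of a weighted-sum allocation rule of the kind the abstract describes. The indicator names, weights, and figures are assumptions for demonstration only, not the medical centre's actual model; the point is simply that a department's budget share can follow mechanically from counts, without any qualitative assessment of research content.

```python
# Hypothetical sketch of "algorithmic allocation": budget shares derived from
# a weighted sum of performance indicators. All names, weights, and numbers
# below are invented for illustration.

def allocate_budget(departments, total_budget, weights=None):
    """Split total_budget across departments in proportion to each
    department's weighted indicator score."""
    if weights is None:
        weights = {
            "publications": 0.4,      # scientific publications
            "peer_recognition": 0.2,  # e.g. awards, editorships
            "phd_supervision": 0.2,   # completed PhD trajectories
            "grants": 0.2,            # acquired external funding
        }
    scores = {
        name: sum(weights[k] * indicators.get(k, 0.0) for k in weights)
        for name, indicators in departments.items()
    }
    total_score = sum(scores.values()) or 1.0  # avoid division by zero
    return {name: total_budget * score / total_score
            for name, score in scores.items()}

# Two departments with different indicator profiles (invented figures):
departments = {
    "cardiology": {"publications": 120, "peer_recognition": 8,
                   "phd_supervision": 15, "grants": 30},
    "public_health": {"publications": 60, "peer_recognition": 12,
                      "phd_supervision": 10, "grants": 45},
}
print(allocate_budget(departments, total_budget=10_000_000))
```

Even in this toy version, the fairness question raised in the article is visible: the formula treats all departments' indicator counts as commensurable, regardless of epistemic differences in how their fields publish, supervise, or attract funding.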
The humanities do not need a replication drive
Argues that the humanities do not need a replication drive like the one currently being pushed for in the sciences.
Making sense of science under conditions of complexity and uncertainty
Science advice to today's policymakers has become more prominent than ever, due primarily to the growing human impact on our world, and the ever-increasing complexity of the knowledge needed for coping with economic, social and environmental challenges. These include demographic changes, global trade issues, international market structures, transboundary pollution, digitalisation, urbanisation and many other factors of modern life. Many such policy problems are characterised by a mixture of complexity, uncertainty and ambiguity. The issues for which scientific input is most needed by policymakers are the ones for which the science is most often complex, multidisciplinary and incomplete.
Scientific expertise supports effective policymaking by providing the best available knowledge, which can then be used to understand a specific problem, generate and evaluate policy options, and provide meaning to the discussion around critical topics within society. Scientific knowledge is crucial to ensuring that systematic evidence is part of the collective decision-making process. Systematic knowledge is instrumental to understanding phenomena, providing insights that help to tackle society's problems. Science therefore represents an essential element in Europe's future development of policy.
The nature of science advice is wide-ranging. The science advisory ecosystem includes a broad set of players, from individual academics to national academies, universities, think tanks and many others. Their roles include knowledge generation, synthesis, brokering, policy evaluation, horizon scanning and more.
In the vast majority of policy cases, scientific advice is only one of many inputs, but it occupies a unique position, as summarised below and in the report.
Accounting for Impact? The Journal Impact Factor and the Making of Biomedical Research in the Netherlands
The range and types of performance metrics have recently proliferated in academic settings, with bibliometric indicators being particularly visible examples. One field that has traditionally been hospitable towards such indicators is biomedicine. Here the relative merits of bibliometrics are widely discussed, with debates often portraying them as heroes or villains. Despite a plethora of controversies, one of the most widely used indicators in this field is said to be the Journal Impact Factor (JIF). In this article we argue that much of the current debate around researchers' uses of the JIF in biomedicine can be classed as 'folk theories': explanatory accounts told among a community that seldom (if ever) get systematically checked. Such accounts rarely disclose how knowledge production itself becomes more-or-less consolidated around the JIF. Using ethnographic materials from different research sites in Dutch University Medical Centers, this article sheds new empirical and theoretical light on how performance metrics variously shape biomedical research on the 'shop floor.' Our detailed analysis underscores a need for further research into the constitutive effects of evaluative metrics.
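For readers unfamiliar with the metric under discussion: the standard two-year JIF for a given year is the number of citations received in that year by items the journal published in the two preceding years, divided by the number of citable items it published in those two years. A minimal sketch of that arithmetic follows; the figures are invented for illustration.

```python
# Sketch of the conventional two-year Journal Impact Factor calculation.
# The numbers in the example are invented for illustration only.

def journal_impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF for year Y = citations in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 2400 citations in 2023 to articles from 2021-2022,
# which together comprised 600 citable items:
print(journal_impact_factor(2400, 600))  # -> 4.0
```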
- …