
    On tit for tat: Franceschini and Maisano versus ANVUR regarding the Italian research assessment exercise VQR 2011-2014

    The response by Benedetto, Checchi, Graziosi & Malgarini (2017) (hereafter "BCG&M"), past and current members of the Italian Agency for Evaluation of University and Research Systems (ANVUR), to Franceschini and Maisano's ("F&M") article (2017), inevitably draws us into the debate. BCG&M in fact complain "that almost all criticisms to the evaluation procedures adopted in the two Italian research assessments VQR 2004-2010 and 2011-2014 limit themselves to criticize the procedures without proposing anything new and more apt to the scope". Since we are the ones who raised most of the criticisms in the literature, we welcome this opportunity to retrace our vainly "constructive" recommendations, made in the hope of contributing to assessments of the Italian research system more in line with the state of the art in scientometrics. We find it equally interesting to confront the problem of the failure of knowledge transfer from R&D (scholars) to engineering and production (ANVUR's practitioners) in the Italian VQRs. We provide a few notes to help the reader understand the context for this failure. We hope that these, together with our more specific comments, will also help explain the level of scientometric competence expressed in BCG&M's heated response to F&M's criticism.

    A rejoinder to the comments of Benedetto et al. on the paper “Critical remarks on the Italian research assessment exercise VQR 2011–2014” (Journal of Informetrics, 11(2): 337–357)

    The paper “Critical remarks on the Italian research assessment exercise VQR 2011–2014” (Franceschini & Maisano, 2017) analyzed some vulnerabilities of the recently concluded Italian assessment exercise. Some senior (former and current) members of ANVUR promptly commented on our criticisms in a letter to the editor (Benedetto, Checchi, Graziosi, & Malgarini, 2017). We believe that this letter is not very convincing. In the following, we provide a rejoinder to the comments directed at our paper.

    In which fields are citations indicators of research quality?

    Citation counts are widely used as indicators of research quality to support or replace human peer review and for lists of top-cited papers, researchers, and institutions. Nevertheless, the extent to which citation counts reflect research quality is not well understood. We report the largest-scale evaluation of the relationship between research quality and citation counts, correlating them for 87,739 journal articles in 34 field-based Units of Assessment (UoAs) from the UK. We show that the two correlate positively in all academic fields examined, from very weakly (0.1) to strongly (0.5). The highest correlations are in health, life sciences, and physical sciences, and the lowest are in the arts and humanities. The patterns are similar for the field classification schemes of Scopus and Dimensions.ai. We also show that there is no citation threshold in any field beyond which all articles are of excellent quality, so lists of top-cited articles are not definitive collections of excellence. Moreover, log-transformed citation counts have a close-to-linear relationship with ranked UK research quality scores that is shallow in some fields but steep in others. In conclusion, whilst appropriately field-normalised citations associate positively with research quality in all fields, they never perfectly reflect it, even at very high values.
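
    A minimal sketch of the kind of per-field analysis described above, assuming a hypothetical articles.csv with "field", "citations", and "quality_score" columns; this is an illustration under those assumptions, not the study's actual code.

        import numpy as np
        import pandas as pd
        from scipy.stats import spearmanr

        df = pd.read_csv("articles.csv")  # hypothetical input: one row per journal article

        # Rank correlation between citation counts and peer-review quality scores,
        # computed separately within each field (cf. the 34 UK Units of Assessment).
        for field, group in df.groupby("field"):
            rho, p = spearmanr(group["citations"], group["quality_score"])
            # Slope of the close-to-linear relationship between quality and log citations.
            slope, intercept = np.polyfit(np.log1p(group["citations"]), group["quality_score"], 1)
            print(f"{field}: rho = {rho:.2f}, quality-vs-log-citations slope = {slope:.2f}")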

    Predicting long-term publication impact through a combination of early citations and journal impact factor

    The ability to predict the long-term impact of a scientific article soon after its publication is of great value for the accurate assessment of research performance. In this work we test the hypothesis that good predictions of long-term citation counts can be obtained by combining a publication's early citations with the impact factor of the hosting journal. The test is performed on a corpus of 123,128 WoS publications authored by Italian scientists, using linear regression models. The average accuracy of the prediction is good for citation time windows above two years, decreases for lowly cited publications, and varies across disciplines. As expected, the role of the impact factor in the combination becomes negligible after only two years from publication.
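
    A brief sketch of the regression idea described above, combining early citations with the journal impact factor in a linear model. The file name, column names, log transform, and time windows (two-year early citations, ten-year target) are assumptions for illustration, not the authors' data or code.

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("publications.csv")  # hypothetical: one row per publication
        # Assumed columns: "citations_2yr" (early citations), "jif" (journal impact
        # factor), "citations_10yr" (long-term citations to be predicted).

        X = np.log1p(df[["citations_2yr", "jif"]])  # log-transform to reduce skew
        y = np.log1p(df["citations_10yr"])

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
        model = LinearRegression().fit(X_train, y_train)
        print("R^2 on held-out publications:", model.score(X_test, y_test))
        print("Coefficients (early citations, JIF):", model.coef_)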

    Are Italian research assessment exercises size-biased?

    Research assessment exercises have enjoyed ever-increasing popularity in many countries in recent years, both as a method to guide public funds allocation and as a validation tool for adopted research support policies. Italy’s most recently completed evaluation effort (VQR 2011–14) required each university to submit to the Ministry for Education, University, and Research (MIUR) 2 research products per author (3 in the case of other research institutions), chosen in such a way that the same product is not assigned to two authors belonging to the same institution. This constraint suggests that larger institutions, where collaborations among colleagues may be more frequent, could suffer a size-related bias in their evaluation scores. To validate our claim, we investigate the outcome of artificially splitting Sapienza University of Rome, one of the largest universities in Europe, into a number of separate partitions, according to several criteria, noting significant score increases for several partitioning scenarios.
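
    A toy illustration of the submission constraint discussed above, with invented scores and co-authorships; it is not ANVUR's selection algorithm, but it shows how dense in-house co-authorship can depress the attainable total score of a single large institution.

        # Product id -> (peer-review score, set of in-house co-authors); invented data.
        products = {
            "P1": (1.0, {"A", "B"}),  # top-rated paper co-authored by A and B
            "P2": (1.0, {"A", "B"}),
            "P3": (0.4, {"A"}),
            "P4": (0.4, {"B"}),
        }

        def institution_score(authors, products, per_author=2):
            """Greedily pick up to `per_author` best unused products for each author;
            a product may not be submitted twice within the same institution."""
            used, total = set(), 0.0
            for author in authors:
                own = sorted(
                    (pid for pid, (score, coauthors) in products.items()
                     if author in coauthors and pid not in used),
                    key=lambda pid: -products[pid][0],
                )[:per_author]
                used.update(own)
                total += sum(products[pid][0] for pid in own)
            return total

        # One large institution containing both A and B: A claims the two top papers,
        # leaving B a single weaker product -> prints 2.4 (even the best in-house
        # selection would reach only 2.8). Two separate institutions could each
        # submit both top papers, for a combined 4.0.
        print(institution_score(["A", "B"], products))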

    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. It is therefore important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on the underlying logic of the arguments, I have arranged them into four groups: Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.
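
    For readers unfamiliar with the counting methods the categorization applies to, a short sketch with invented data contrasting full (whole) counting with fractional counting; the difference in additivity is what the Group 2 arguments concern.

        from collections import defaultdict

        # Invented publication data: each paper lists the institutions of its authors.
        publications = [
            {"title": "Paper 1", "institutions": ["Uni A", "Uni B"]},
            {"title": "Paper 2", "institutions": ["Uni A"]},
            {"title": "Paper 3", "institutions": ["Uni A", "Uni B", "Uni C"]},
        ]

        full, fractional = defaultdict(float), defaultdict(float)
        for pub in publications:
            n = len(pub["institutions"])
            for inst in pub["institutions"]:
                full[inst] += 1            # full counting: every contributor gets whole credit
                fractional[inst] += 1 / n  # fractional counting: credit per paper sums to 1

        print(dict(full))        # {'Uni A': 3.0, 'Uni B': 2.0, 'Uni C': 1.0}
        print(dict(fractional))  # {'Uni A': 1.83, 'Uni B': 0.83, 'Uni C': 0.33} (approx.)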