
    Publication counting methods for a national research evaluation exercise

    This work was supported by the DIALOG Program (Grant name “Research into Excellence Patterns in Science and Art”) financed by the Ministry of Science and Higher Education in Poland. In this paper, we investigate the effects of using four methods of publication counting (complete, whole, fractional, square root fractional) and of limiting the number of publications (at the researcher and institution levels) on the results of a national research evaluation exercise across fields, using Polish data. We use bibliographic information on 0.58 million publications from the 2013–2016 period. Our analysis reveals that the largest effects occur in fields in which a variety of publication and cooperation patterns can be observed (e.g. in Physical sciences or History and archeology). We argue that selecting the publication counting method for national evaluation purposes needs to take into account the current situation in the given country in terms of the excellence of research outcomes, the level of internal, external and international collaboration, and the publication patterns in the various fields of science. Our findings show that the social sciences and humanities are not significantly influenced by the different publication counting methods or by limiting the number of publications included in the evaluation, as publication patterns in these fields are quite different from those observed in the so-called hard sciences. When discussing the goals of any national research evaluation system, we should be aware that the ways of achieving these goals are closely related to the publication counting method, which can serve as an incentive for certain publication practices.
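
    For orientation, the sketch below shows one common reading of the four counting methods named in this abstract. The definitions, function name and parameter names are assumptions made for illustration only, not the rules actually used in the Polish exercise.

from math import sqrt

def publication_credit(method: str, n_contributors: int, k_from_unit: int) -> float:
    """Credit an evaluated unit receives for one publication (illustrative definitions).

    n_contributors: total number of contributing authors or institutions
    k_from_unit:    how many of those belong to the evaluated unit
    """
    if method == "complete":
        # full credit for every contributor affiliated with the unit
        return float(k_from_unit)
    if method == "whole":
        # one full credit if the unit contributed at all
        return 1.0 if k_from_unit > 0 else 0.0
    if method == "fractional":
        # credit proportional to the unit's share of contributors
        return k_from_unit / n_contributors
    if method == "square_root_fractional":
        # square root of the fractional share (one assumed variant)
        return sqrt(k_from_unit / n_contributors)
    raise ValueError(f"unknown counting method: {method}")

# A paper with 4 contributors, 1 of them from the evaluated unit:
# complete = 1.0, whole = 1.0, fractional = 0.25, square_root_fractional = 0.5

    Under fractional counting, the credit summed over all contributing units equals 1 per publication, which is why it is often preferred when additivity matters; whole and complete counting let the total credit grow with the number of collaborators.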

    On tit for tat: Franceschini and Maisano versus ANVUR regarding the Italian research assessment exercise VQR 2011-2014

    The response by Benedetto, Checchi, Graziosi & Malgarini (2017) (hereafter "BCG&M"), past and current members of the Italian Agency for Evaluation of University and Research Systems (ANVUR), to Franceschini and Maisano's ("F&M") article (2017) inevitably draws us into the debate. BCG&M in fact complain "that almost all criticisms to the evaluation procedures adopted in the two Italian research assessments VQR 2004-2010 and 2011-2014 limit themselves to criticize the procedures without proposing anything new and more apt to the scope". Since it is we who raised most of these criticisms in the literature, we welcome this opportunity to retrace our vainly "constructive" recommendations, made in the hope of contributing to assessments of the Italian research system more in line with the state of the art in scientometrics. We see it as equally interesting to confront the problem of the failure of knowledge transfer from R&D (scholars) to engineering and production (ANVUR's practitioners) in the Italian VQRs. We provide a few notes to help the reader understand the context for this failure. We hope that these, together with our more specific comments, will also assist in communicating the reasons for the level of scientometric competence expressed in BCG&M's heated response to F&M's criticism.

    A categorization of arguments for counting methods for publication and citation indicators

    Most publication and citation indicators are based on datasets with multi-authored publications, and thus a change in counting method will often change the value of an indicator. It is therefore important to know why a specific counting method has been applied. I have identified arguments for counting methods in a sample of 32 bibliometric studies published in 2016 and compared the result with discussions of arguments for counting methods in three older studies. Based on the underlying logic of the arguments, I have arranged them into four groups. Group 1 focuses on arguments related to what an indicator measures, Group 2 on the additivity of a counting method, Group 3 on pragmatic reasons for the choice of counting method, and Group 4 on an indicator's influence on the research community or how it is perceived by researchers. This categorization can be used to describe and discuss how bibliometric studies with publication and citation indicators argue for counting methods.

    Evaluating a Department’s Research: Testing the Leiden Methodology in Business and Management

    The Leiden methodology (LM), also sometimes called the “crown indicator”, is a quantitative method for evaluating the research quality of a research group or academic department based on the citations received by the group in comparison to averages for the field. There have been a number of applications, but these have mainly been in the hard sciences, where the citation data provided by the ISI Web of Science (WoS) are more reliable. In the social sciences, including business and management, many journals and books are not covered by WoS, and so the LM has not been tested there. In this study the LM is applied to a dataset of over 3000 research publications from three UK business schools. The results show that the LM does indeed discriminate between the schools and has a degree of concordance with other forms of evaluation, but that there are significant limitations and problems within this discipline.
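
    As a rough illustration, the snippet below computes a crown-indicator-style score in its classic ratio-of-sums form: the group's total citations divided by the citations an average publication in the corresponding fields would have received. This is a simplified assumption about the formula; the actual Leiden definitions also normalize for document type and publication year.

def crown_indicator(citations, field_averages):
    """Illustrative crown-indicator-style score (ratio-of-sums form).

    citations:      observed citation counts of the group's publications
    field_averages: expected (field-average) citation counts for the same publications
    """
    if len(citations) != len(field_averages):
        raise ValueError("one field average is needed per publication")
    return sum(citations) / sum(field_averages)

# Three publications cited 10, 2 and 0 times, in fields where the average
# publication receives 4, 3 and 1 citations respectively:
# crown_indicator([10, 2, 0], [4, 3, 1]) == 12 / 8 == 1.5  (above field average)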