
    National-scale research performance assessment at the individual level

    There is an evident and rapid trend towards the adoption of evaluation exercises for national research systems, for purposes that include improving allocative efficiency in the public funding of individual institutions. However, the desired macroeconomic aims could be compromised if the internal redistribution of government resources within each research institution does not follow a consistent logic: the intended effects of national evaluation systems can result only if a "funds for quality" rule is followed at all levels of decision-making. The objective of this study is to propose a bibliometric methodology for: i) large-scale comparative evaluation of the research performance of individual scientists, research groups and departments within research institutions, to inform selective funding allocations, and ii) assessment of strengths and weaknesses by field of research, to inform strategic planning and control. The proposed methodology has been applied to the hard science disciplines of the Italian university research system for the period 2004-2006.

    Efficiency of research performance and the glass researcher

    Abramo and D'Angelo (in press) doubt the validity of established size-independent indicators measuring citation impact and plead in favor of measuring scientific efficiency (using the Fractional Scientific Strength indicator). This note comments on some questionable, and a few favorable, approaches in the paper by Abramo and D'Angelo (in press).
    Comment: Accepted for publication in the Journal of Informetrics
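
    The Fractional Scientific Strength (FSS) mentioned here has several published formulations. As a sketch only, one version from Abramo and D'Angelo's broader work (not necessarily the exact variant debated in this note) measures an individual researcher's productivity as field-normalized, fractionalized citations per unit cost of labor:

\[
\mathrm{FSS} = \frac{1}{s}\cdot\frac{1}{t}\sum_{i=1}^{N} f_i\,\frac{c_i}{\bar{c}}
\]

    where \(s\) is the researcher's average yearly salary, \(t\) the number of years observed, \(N\) the number of publications, \(c_i\) the citations to publication \(i\), \(\bar{c}\) the average citations to publications of the same year and subject category, and \(f_i\) the researcher's fractional contribution to publication \(i\).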

    A farewell to the MNCS and like size-independent indicators

    The arguments presented demonstrate that the Mean Normalized Citation Score (MNCS) and other size-independent indicators based on the ratio of citations to publications are not indicators of research performance. The article provides examples of the distortions that arise when rankings by MNCS are compared to those based on indicators of productivity. The authors recommend that the scientometric community switch to ranking by research efficiency, instead of by MNCS and other size-independent indicators.
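
    For context, the MNCS of a unit with \(n\) publications is conventionally defined as the average of normalized citation counts, where \(c_i\) is the citation count of publication \(i\) and \(e_i\) the expected (world-average) count for publications of the same field, year and document type:

\[
\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}
\]

    Because it is an average per publication, the indicator is insensitive to output volume, which is precisely the property the authors contest.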

    National research assessment exercises: the effects of changing the rules of the game during the game

    National research evaluation exercises provide a comparative measure of the research performance of a nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results are used to inform selective funding by government. While one school of thought welcomes frequent changes in evaluation criteria in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the "rules of the game" should above all be functional towards policy objectives, and therefore be known with adequate forewarning prior to the evaluation period. Otherwise, the risk is that policy-makers will find themselves facing a dilemma: should they reward universities that responded best to the criteria in effect at the outset of the observation period, or those that rank as best according to rules that emerged during or after the observation period? This study verifies whether and to what extent some universities are penalized, rather than rewarded, for pursuing the objectives of the "known" rules of the game, by comparing the research performance of Italian universities over the period of the nation's next evaluation exercise (2004-2008): first as measured according to the criteria available at the outset of the period, and then according to those announced at its end.

    What is the appropriate length of the publication period over which to assess research performance?

    National research assessment exercises are conducted in different nations over varying publication periods. The choice of the publication period to be observed has to balance often contrasting needs: it must ensure the reliability of the results of the evaluation, but also permit frequent assessments. In this work we attempt to identify the most appropriate publication period. To do so, we analyze the variation of individual researchers' productivity rankings with the length of the publication period within 2003-2008, for the over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings for pairs of contiguous and overlapping publication periods, and show that the variations reduce markedly for periods above three years. We then show the strong randomness of performance rankings over publication periods of under three years. We conclude that a three-year publication period seems reliable, particularly for physics, chemistry, biology and medicine.
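
    A minimal sketch of this kind of comparison, using synthetic productivity data in place of the Italian dataset and assuming Spearman correlation as the rank-similarity measure (the abstract does not specify the statistic):

```python
# Compare productivity rankings computed over publication windows of
# increasing length; synthetic data stand in for the real dataset.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_researchers = 1000
# Hypothetical yearly productivity scores for 2003-2008 (6 years).
yearly_scores = rng.gamma(shape=2.0, scale=1.0, size=(n_researchers, 6))

def totals(window_years: int) -> np.ndarray:
    """Total score over the first `window_years` of the period."""
    return yearly_scores[:, :window_years].sum(axis=1)

# Correlate rankings from contiguous window lengths (1 vs 2, 2 vs 3, ...).
for w in range(1, 6):
    rho, _ = spearmanr(totals(w), totals(w + 1))
    print(f"{w}-year vs {w + 1}-year window: Spearman rho = {rho:.3f}")
```

    On data like these, rho rises toward 1 as the window lengthens, which is the qualitative pattern the abstract reports for periods above three years.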

    The impact of unproductive and top researchers on overall university research performance

    Unlike competitive higher education systems, non-competitive systems show relatively uniform distributions of top professors and low performers among universities. In this study, we examine the impact of unproductive and top faculty members on the overall research performance of the university they belong to. Furthermore, we analyze the potential relationship between the research productivity of a university and the indexes of concentration of its unproductive and top professors. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008. The set analyzed consists of all Italian universities active in the hard sciences.
    Comment: arXiv admin note: substantial text overlap with arXiv:1810.13234, arXiv:1810.13233, arXiv:1810.13231, arXiv:1810.13281, arXiv:1810.1220
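
    A minimal sketch of one plausible concentration index, assuming (hypothetically) that concentration is measured as a university's share of the nation's top-decile professors relative to its share of total faculty; the paper's actual indexes may differ:

```python
# Concentration of top professors per university: a value near 1 means
# the uniform distribution typical of non-competitive systems.
import numpy as np

rng = np.random.default_rng(0)
n_profs, n_unis = 5000, 10
university = rng.integers(0, n_unis, size=n_profs)  # affiliation
scores = rng.gamma(2.0, 1.0, size=n_profs)          # productivity proxy
is_top = scores >= np.quantile(scores, 0.90)        # national top decile

for u in range(n_unis):
    members = university == u
    top_share = is_top[members].sum() / is_top.sum()
    size_share = members.sum() / n_profs
    print(f"university {u}: concentration index = {top_share / size_share:.2f}")
```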

    Research productivity: are higher academic ranks more productive than lower ones?

    This work analyzes the links between individual research performance and academic rank. A typical bibliometric methodology is used to study the performance of all Italian university researchers active in the hard sciences for the period 2004-2008. The objective is to characterize the performance of the ranks of full, associate and assistant professors along various dimensions, in order to verify whether performance differences exist among the ranks in general and within single disciplines.

    The dangers of performance-based research funding in non-competitive higher education systems

    An increasing number of nations allocate public funds to research institutions on the basis of rankings obtained from national evaluation exercises. In non-competitive higher education systems, where top scientists are dispersed among all the universities rather than concentrated in a few, there is therefore a high risk of penalizing those top scientists who work in lower-performance universities. Using a five-year bibliometric analysis conducted on all Italian universities active in the hard sciences from 2004 to 2008, this work analyzes the distribution of publications and relevant citations among the scientists within the universities, measures the research performance of individual scientists, quantifies the intensity of concentration of top scientists at each university, provides performance rankings for the universities, and indicates the effects of selective funding on the top scientists of low-ranked universities.

    Relatives in the same university faculty: nepotism or merit?

    In many countries, culture, practice or regulations inhibit the co-presence of relatives within a university faculty. We test the legitimacy of such attitudes and provisions by investigating the phenomenon of nepotism in Italy, a nation with high rates of favoritism. We compare the individual research performance of "children" who have "parents" in the same university against that of "non-children" of the same academic rank and seniority in the same field. The results show non-significant differences in performance. Analyses of career advancement show that the children's research performance is on average superior to that of their colleagues who did not advance. The study's findings do not rule out the existence of nepotism, which was actually recorded in a low percentage of cases, but neither do they prove its most serious presumed consequence, namely that relatives who are poor performers get ahead of non-relatives who are better performers. In light of these results, many attitudes and norms concerning parental ties in academia should be reconsidered.
    Comment: arXiv admin note: text overlap with arXiv:1810.12207, arXiv:1810.1323
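
    A minimal sketch of the group comparison described, using synthetic performance scores and assuming a Mann-Whitney U test (the abstract does not name the statistical procedure):

```python
# Compare "children" against same-rank, same-field "non-children";
# all scores are synthetic and drawn from the same distribution.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
children = rng.gamma(2.0, 1.0, size=120)       # performance scores
non_children = rng.gamma(2.0, 1.0, size=2000)  # matched comparison group

stat, p = mannwhitneyu(children, non_children, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3f}")  # large p: no detectable difference
```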

    A sensitivity analysis of researchers' productivity rankings to the time of citation observation

    In this work we investigate the sensitivity of individual researchers' productivity rankings to the time of citation observation. The analysis is based on the research products of the 2001-2003 triennium for all research staff of Italian universities in the hard sciences, with the year of citation observation varying from 2004 to 2008. The 2008 ranking list is assumed to be the most accurate, as citations have had the longest time to accumulate and thus represent the best available proxy of impact. By comparing the ranking lists from each year against the 2008 benchmark, we provide policy-makers and research organization managers with a measure of the trade-off between timeliness of evaluation and accuracy of performance rankings. The results show that, as the citation window varies, rates of inaccuracy vary across researchers' disciplines. The inaccuracy is negligible for Physics, Biology and Medicine.
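
    A minimal sketch of the benchmark comparison, using synthetic citation accumulation and assuming Kendall's tau as the similarity measure (an assumption; the study's own statistic is not specified in the abstract):

```python
# Rankings re-scored as citations accumulate from 2004 to 2008,
# compared against the 2008 benchmark; synthetic data throughout.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(7)
n = 500
# Hypothetical citations gained by each researcher in each year 2004-2008.
gains = rng.poisson(lam=rng.gamma(2.0, 2.0, size=n)[:, None], size=(n, 5))
cumulative = gains.cumsum(axis=1)  # citations observed by each year-end

benchmark = cumulative[:, -1]      # 2008 counts, assumed most accurate
for i, year in enumerate(range(2004, 2009)):
    tau, _ = kendalltau(cumulative[:, i], benchmark)
    print(f"observed at {year}: Kendall tau vs 2008 = {tau:.3f}")
```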