
    What is the appropriate length of the publication period over which to assess research performance?

    National research assessment exercises are conducted in different nations over varying publication periods. The choice of the publication period to be observed has to address often contrasting needs: it must ensure the reliability of the results issuing from the evaluation, but also permit frequent assessments. In this work we attempt to identify the most appropriate, or optimal, publication period to observe. For this, we analyze the variation in individual researchers' productivity rankings with the length of the publication period, within the period 2003-2008, for the over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings between pairs of contiguous and overlapping publication periods, and show that the variations diminish markedly for periods longer than three years. We then show the strong randomness of performance rankings over publication periods of under three years. We conclude that a three-year publication period seems reliable, particularly for physics, chemistry, biology and medicine.
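
    As a purely illustrative sketch (not the authors' code), the ranking-stability check described above can be outlined as follows: productivity is summed over sliding windows of a given length, researchers are ranked, and contiguous windows are compared with a Spearman rank correlation. The yearly score table and the window choices below are invented toy data.

        # Toy illustration of comparing rankings from contiguous publication windows.
        from scipy.stats import spearmanr

        yearly_score = {                      # researcher -> normalized output per year (invented)
            "r1": {2003: 1.0, 2004: 0.0, 2005: 2.5, 2006: 1.0, 2007: 0.5, 2008: 1.5},
            "r2": {2003: 0.5, 2004: 1.5, 2005: 0.0, 2006: 2.0, 2007: 1.0, 2008: 0.0},
            "r3": {2003: 2.0, 2004: 1.0, 2005: 1.0, 2006: 0.5, 2007: 2.0, 2008: 1.0},
        }

        def window_ranking(start, length):
            """Rank researchers by total productivity over [start, start + length)."""
            totals = {r: sum(scores.get(y, 0.0) for y in range(start, start + length))
                      for r, scores in yearly_score.items()}
            ordered = sorted(totals, key=totals.get, reverse=True)
            return {r: pos for pos, r in enumerate(ordered)}

        for length in (1, 2, 3):              # pairs of contiguous windows of equal length
            a = window_ranking(2003, length)
            b = window_ranking(2003 + length, length)
            names = sorted(a)
            rho, _ = spearmanr([a[r] for r in names], [b[r] for r in names])
            print(f"window = {length} yr, rank correlation between contiguous periods: {rho:.2f}")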

    The dispersion of research performance within and between universities as a potential indicator of the competitive intensity in higher education systems

    Higher education systems in competitive environments generally feature top universities, which are able to attract top scientists, top students, and public and private financing, with notable socio-economic benefits for their region. The same does not hold true for non-competitive systems. In this study we measure the dispersion of research performance within and between universities in the Italian university system, which is typically non-competitive. We also investigate the level of correlation between research performance and its dispersion across universities. The findings may represent a first benchmark for similar studies in other nations. Furthermore, they lead to policy indications, questioning the effectiveness of selective funding of universities based on national research assessment exercises. The field of observation consists of all Italian universities active in the hard sciences. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008.
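
    A minimal sketch, on made-up numbers, of the within/between decomposition such a study relies on: researcher scores are grouped by university, and the overall variance is split into a between-university and a within-university component. All names and figures below are illustrative assumptions.

        # Decompose overall variance of researcher performance into between- and within-university parts.
        from statistics import mean, pvariance

        scores_by_university = {
            "uni_A": [0.2, 1.8, 0.9, 2.4],
            "uni_B": [1.1, 1.3, 0.8, 1.0],
            "uni_C": [0.1, 0.4, 3.0, 0.6],
        }

        all_scores = [s for group in scores_by_university.values() for s in group]
        grand_mean = mean(all_scores)
        n_total = len(all_scores)

        between = sum(len(g) * (mean(g) - grand_mean) ** 2
                      for g in scores_by_university.values()) / n_total
        within = sum(len(g) * pvariance(g) for g in scores_by_university.values()) / n_total

        print(f"total variance   : {pvariance(all_scores):.3f}")
        print(f"between component: {between:.3f}")   # small share = little dispersion between universities
        print(f"within component : {within:.3f}")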

    Should the research performance of scientists be distinguished by gender?

    The literature on gender differences in research performance seems to suggest a gap between men and women, with the former outperforming the latter. Whether or not one agrees with the various factors proposed to explain the phenomenon, it is worthwhile to verify whether comparing performance within each gender, rather than without distinction, yields significantly different ranking lists. If some structural factor imposed a performance penalty on female researchers compared to their male peers, then under conditions of equal capacities of men and women, any comparative evaluation of individual performance that fails to account for gender differences would distort the judgments in favor of men. In this work we measure the extent of the differences in rank between the two methods of comparing performance in each field of the hard sciences: for professors in the Italian university system, we compare the distributions of research performance for men and women, and subsequently the ranking lists with and without distinction by gender. The results are of interest for the design of efficient selection procedures in recruitment, career advancement and incentive schemes.
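
    A minimal sketch, on fictitious data, of the comparison described above: each professor's percentile rank is computed once against the whole field and once against colleagues of the same gender only, and the shift between the two positions is reported. Names, genders and scores are invented.

        # Compare pooled percentile ranks with within-gender percentile ranks.
        def percentile_rank(value, population):
            """Share of the population scoring below `value` (0 = worst, ~1 = best)."""
            return sum(1 for v in population if v < value) / len(population)

        performance = {     # professor -> (gender, field-normalized productivity), toy numbers
            "p1": ("F", 2.1), "p2": ("M", 1.4), "p3": ("F", 0.6),
            "p4": ("M", 2.8), "p5": ("M", 0.9), "p6": ("F", 1.7),
        }

        all_scores = [s for _, s in performance.values()]
        by_gender = {g: [s for gg, s in performance.values() if gg == g] for g in ("M", "F")}

        for name, (gender, score) in performance.items():
            pooled = percentile_rank(score, all_scores)
            within = percentile_rank(score, by_gender[gender])
            print(f"{name} ({gender}): pooled {pooled:.2f}, within-gender {within:.2f}, "
                  f"shift {within - pooled:+.2f}")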

    A sensitivity analysis of researchers' productivity rankings to the time of citation observation

    In this work we investigate the sensitivity of individual researchers' productivity rankings to the time of citation observation. The analysis is based on the research output of all research staff of Italian universities in the hard sciences for the 2001-2003 triennium, with the year of citation observation varying from 2004 to 2008. The 2008 ranking list is assumed to be the most accurate, as citations have had the longest time to accumulate and thus represent the best available proxy of impact. By comparing the ranking lists of each year against the 2008 benchmark, we provide policy-makers and research organization managers with a measure of the trade-off between timeliness of the evaluation and accuracy of the performance rankings. The results show that, as the citation window varies, the rate of inaccuracy varies across the disciplines of the researchers; the inaccuracy proves negligible for physics, biology and medicine.
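
    A minimal sketch (toy data, not the study's dataset) of the benchmark comparison described above: rankings built from citations observed in each year are checked against the 2008 ranking, taken as the most accurate one.

        # Count how many researchers change rank when the citation observation year varies.
        cites_observed = {   # researcher -> cumulative citations to 2001-2003 output, by observation year
            "r1": {2004: 3, 2005: 9, 2006: 15, 2007: 22, 2008: 30},
            "r2": {2004: 5, 2005: 8, 2006: 10, 2007: 12, 2008: 13},
            "r3": {2004: 1, 2005: 6, 2006: 14, 2007: 25, 2008: 41},
        }

        def ranking(year):
            return sorted(cites_observed, key=lambda r: cites_observed[r][year], reverse=True)

        benchmark = ranking(2008)
        for year in range(2004, 2009):
            current = ranking(year)
            shifted = sum(1 for r in current if current.index(r) != benchmark.index(r))
            print(f"observation year {year}: {shifted} of {len(current)} researchers differ from the 2008 ranks")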

    Revisiting the scaling of citations for research assessment

    Over the past decade, national research evaluation exercises, traditionally conducted using the peer-review method, have begun opening to bibliometric indicators. The citations received by a publication are assumed as a proxy for its quality, but they must be standardized prior to use in comparative evaluation of organizations or individual scientists, because citation behavior varies across research fields. The objective of this paper is to compare the effectiveness of different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis is conducted for all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index-Expanded. The comparisons show that the citation average is the most effective scaling parameter, when the average is based only on the publications actually cited.
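
    A minimal sketch of citation scaling under different field baselines, as discussed above. The citation distribution is invented; the point is only that the normalized score of the same paper changes with the scaling factor chosen.

        # Normalize one paper's citations by different field baselines.
        from statistics import mean, median

        field_citations = [0, 0, 0, 1, 1, 2, 3, 5, 8, 20]   # papers of the same year and subject category
        paper_citations = 5

        baselines = {
            "average (all papers)":  mean(field_citations),
            "average (cited only)":  mean([c for c in field_citations if c > 0]),
            "median (cited only)":   median([c for c in field_citations if c > 0]),
        }
        for label, base in baselines.items():
            print(f"{label:22s}: baseline {base:.2f}, normalized citations {paper_citations / base:.2f}")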

    The impact of unproductive and top researchers on overall university research performance

    Unlike competitive higher education systems, non-competitive systems show relatively uniform distributions of top professors and low performers among universities. In this study, we examine the impact of unproductive and top faculty members on the overall research performance of the universities they belong to. Furthermore, we analyze the potential relationship between the research productivity of a university and the indexes of concentration of unproductive and top professors. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008. The set analyzed consists of all Italian universities active in the hard sciences.
    Comment: arXiv admin note: substantial text overlap with arXiv:1810.13234, arXiv:1810.13233, arXiv:1810.13231, arXiv:1810.13281, arXiv:1810.1220
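
    A minimal sketch, on invented scores, of concentration indexes of the kind mentioned above: the share of unproductive professors (no indexed output) and the share of national top-10% performers are computed for each university, alongside its mean productivity.

        # Shares of unproductive and top professors per university (toy data).
        staff_scores = {                                  # university -> productivity of each professor
            "uni_A": [0.0, 0.4, 1.2, 3.5, 0.0],
            "uni_B": [0.8, 1.1, 0.9, 1.4],
            "uni_C": [0.0, 0.0, 0.3, 2.9, 1.8, 0.7],
        }

        all_scores = sorted((s for g in staff_scores.values() for s in g), reverse=True)
        top_threshold = all_scores[max(len(all_scores) // 10 - 1, 0)]   # score marking the national top 10%

        for uni, scores in staff_scores.items():
            unproductive = sum(1 for s in scores if s == 0.0) / len(scores)
            top = sum(1 for s in scores if s >= top_threshold) / len(scores)
            avg = sum(scores) / len(scores)
            print(f"{uni}: mean productivity {avg:.2f}, unproductive share {unproductive:.0%}, "
                  f"top-10% share {top:.0%}")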

    A sensitivity analysis of research institutions' productivity rankings to the time of citation observation

    One of the critical issues in bibliometric research assessment is the time required for citations to reach maturity. Citation counts can be considered a reliable proxy of the real impact of a work only if they are observed after sufficient time has passed since the publication date. In the present work the authors investigate the effect of varying the time of citation observation on the accuracy of productivity rankings for research institutions. Research productivity measures are calculated for all Italian universities active in the hard sciences in the 2001-2003 period, by individual field and discipline, with the time of citation observation varying from 2004 to 2008. The objective is to support policy-makers in choosing a citation window that optimizes the trade-off between the accuracy of the rankings and the timeliness of the exercise.
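
    A minimal sketch of the institution-level variant of the same test: citation-based scores of each university's staff (invented totals) are read off at several observation years and the resulting university ranking is compared with the one obtained in 2008.

        # Compare university rankings obtained with different citation observation years.
        uni_cites = {   # university -> {observation year -> total citations to 2001-2003 output}
            "uni_A": {2004: 40, 2006: 150, 2008: 310},
            "uni_B": {2004: 55, 2006: 140, 2008: 260},
            "uni_C": {2004: 30, 2006: 160, 2008: 330},
        }

        def university_ranking(year):
            return sorted(uni_cites, key=lambda u: uni_cites[u][year], reverse=True)

        benchmark = university_ranking(2008)
        for year in (2004, 2006, 2008):
            current = university_ranking(year)
            print(f"{year}: {current}, matches 2008 benchmark: {current == benchmark}")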

    How important is choice of the scaling factor in standardizing citations?

    Because of the variation in citation behavior across research fields, appropriate standardization must be applied as part of any bibliometric analysis of the productivity of individual scientists and research organizations. Such standardization involves scaling by some factor that characterizes the distribution of the citations of articles of the same year and subject category. In this work we conduct an analysis of the sensitivity of researchers' productivity rankings to the scaling factor chosen to standardize their citations. To do this, we first prepare the productivity rankings for all researchers (more than 30,000) operating in the hard sciences in Italy over the period 2004-2008. We then measure the shifts in rank caused by adopting scaling factors other than the one that appears most effective for comparing the impact of publications across fields: the citation average of the distribution of cited-only publications.
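
    A minimal sketch, with invented citation counts, of why the choice matters: the same researchers are ranked after normalizing their citations with two different scaling factors. Because the two toy fields have different baselines, the ranking can change with the factor chosen.

        # Rank researchers under two different citation scaling factors.
        from statistics import mean

        field_cites = {"field_X": [0, 0, 1, 2, 4, 6, 12],    # citation distributions per field (toy)
                       "field_Y": [0, 1, 1, 2, 2, 3, 5]}

        baselines = {
            "average of all papers":   {f: mean(d) for f, d in field_cites.items()},
            "average of cited papers": {f: mean([c for c in d if c > 0]) for f, d in field_cites.items()},
        }

        # researcher -> list of (field, citations) for each of their publications (invented)
        output = {"r1": [("field_X", 6)],
                  "r2": [("field_X", 2), ("field_Y", 2)],
                  "r3": [("field_Y", 3)]}

        for label, base in baselines.items():
            score = {r: sum(c / base[f] for f, c in pubs) for r, pubs in output.items()}
            print(f"{label}: ranking = {sorted(score, key=score.get, reverse=True)}")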

    National peer-review research assessment exercises for the hard sciences can be a complete waste of money: the Italian case

    There has been ample demonstration that bibliometrics is superior to peer review for national research assessment exercises in the hard sciences. In this paper we examine the Italian case, taking the 2001-2003 university performance ranking list based on bibliometrics as the benchmark. We compare the accuracy of the first national evaluation exercise, conducted entirely by peer review, to that of other ranking lists prepared at zero cost, based on indicators indirectly linked to performance or available on the Internet. The results show that, for the hard sciences, the costs of conducting the Italian evaluation of research institutions could have been completely avoided.
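
    A minimal sketch, on fictitious rankings, of the accuracy comparison made above: alternative ranking lists are scored by their rank correlation with the bibliometric benchmark list. University names and rank positions are invented.

        # Score alternative ranking lists against the bibliometric benchmark.
        from scipy.stats import spearmanr

        benchmark = {"uni_A": 1, "uni_B": 2, "uni_C": 3, "uni_D": 4}          # bibliometric ranks (toy)
        alternatives = {
            "peer-review exercise": {"uni_A": 2, "uni_B": 1, "uni_C": 4, "uni_D": 3},
            "zero-cost indicator":  {"uni_A": 1, "uni_B": 3, "uni_C": 2, "uni_D": 4},
        }

        unis = sorted(benchmark)
        for label, ranks in alternatives.items():
            rho, _ = spearmanr([benchmark[u] for u in unis], [ranks[u] for u in unis])
            print(f"{label}: Spearman correlation with bibliometric benchmark = {rho:.2f}")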

    The dangers of performance-based research funding in non-competitive higher education systems

    An increasing number of nations allocate public funds to research institutions on the basis of rankings obtained from national evaluation exercises. In non-competitive higher education systems, where top scientists are dispersed among all the universities rather than concentrated in a few, this creates a high risk of penalizing those top scientists who work in lower-performing universities. Using a five-year bibliometric analysis of all Italian universities active in the hard sciences from 2004 to 2008, this work analyzes the distribution of publications and relevant citations among the scientists within each university, measures the research performance of individual scientists, quantifies the intensity of concentration of top scientists at each university, provides performance rankings for the universities, and indicates the effects of selective funding on the top scientists of low-ranked universities.
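
    A minimal sketch, with invented figures, of the risk described above: when funds are allocated on university-level scores and then spread over staff, a top scientist in a lower-ranked university receives less than an identically productive top scientist in a higher-ranked one, and less than under allocation on individual scores.

        # Compare funding per scientist under university-based and individual-based allocation.
        scientists = [   # (name, university, individual productivity score), toy numbers
            ("top_1", "uni_A", 5.0), ("avg_1", "uni_A", 1.0), ("avg_2", "uni_A", 1.0),
            ("top_2", "uni_B", 5.0), ("low_1", "uni_B", 0.2), ("low_2", "uni_B", 0.2),
        ]
        budget = 100.0

        uni_score = {}
        for _, uni, s in scientists:
            uni_score[uni] = uni_score.get(uni, 0.0) + s
        uni_staff = {u: sum(1 for _, uu, _ in scientists if uu == u) for u in uni_score}
        total_uni = sum(uni_score.values())
        total_ind = sum(s for _, _, s in scientists)

        for name, uni, score in scientists:
            by_university = budget * uni_score[uni] / total_uni / uni_staff[uni]   # spread uniformly over staff
            by_individual = budget * score / total_ind
            print(f"{name} ({uni}): university-based {by_university:5.1f}, individual-based {by_individual:5.1f}")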