
    A sensitivity analysis of research institutions' productivity rankings to the time of citation observation

    One of the critical issues in bibliometric research assessments is the time required for citations to reach maturity. Citation counts can be considered a reliable proxy of the real impact of a work only if they are observed after sufficient time has passed since the publication date. In the present work the authors investigate the effect of varying the time of citation observation on the accuracy of productivity rankings for research institutions. Research productivity measures are calculated for all Italian universities active in the hard sciences in the 2001-2003 period, by individual field and discipline, with the time of citation observation varying from 2004 to 2008. The objective is to support policy-makers in choosing a citation window that optimizes the trade-off between accuracy of rankings and timeliness of the exercise.
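
    As a minimal illustration of the kind of comparison the abstract describes, the sketch below computes the Spearman rank correlation between institutional productivity scores obtained with citations observed at different years and the scores from the longest window. The institutions, scores and the choice of Spearman correlation are illustrative assumptions, not the paper's actual data or indicator.

```python
# Hypothetical sketch: compare institutional rankings obtained with citations
# observed at different years against the longest (2008) citation window.
from scipy.stats import spearmanr

# productivity[year] -> {institution: productivity score with citations observed up to `year`}
productivity = {
    2004: {"Univ A": 1.2, "Univ B": 0.8, "Univ C": 1.5, "Univ D": 0.4},
    2006: {"Univ A": 1.1, "Univ B": 0.9, "Univ C": 1.6, "Univ D": 0.5},
    2008: {"Univ A": 1.0, "Univ B": 1.1, "Univ C": 1.7, "Univ D": 0.6},
}

institutions = sorted(productivity[2008])
benchmark = [productivity[2008][u] for u in institutions]

for year in (2004, 2006):
    scores = [productivity[year][u] for u in institutions]
    rho, _ = spearmanr(scores, benchmark)
    print(f"citations observed at {year}: Spearman rho vs 2008 = {rho:.2f}")
```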

    A sensitivity analysis of researchers' productivity rankings to the time of citation observation

    In this work we investigate the sensitivity of individual researchers' productivity rankings to the time of citation observation. The analysis is based on observation of research products for the 2001-2003 triennium for all research staff of Italian universities in the hard sciences, with the year of citation observation varying from 2004 to 2008. The 2008 rankings list is assumed to be the most accurate, as citations have had the longest time to accumulate and thus represent the best possible proxy of impact. By comparing the rankings lists from each year against the 2008 benchmark, we provide policy-makers and research organization managers with a measure of the trade-off between timeliness of evaluation execution and accuracy of performance rankings. The results show that, as the citation window varies, the rate of inaccuracy differs across researchers' disciplines; the inaccuracy is negligible for Physics, Biology and Medicine.
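
    A small, hypothetical sketch of one way to quantify the inaccuracy discussed above: the share of researchers whose performance quartile at an early citation-observation year differs from their quartile in the 2008 benchmark ranking. The quartile-based measure and the toy ranks are assumptions for illustration only.

```python
# Hypothetical sketch: share of researchers whose performance quartile at an early
# citation-observation year differs from their quartile in the 2008 benchmark ranking.
def quartile(rank, n):
    """Return the quartile (1-4) of a 1-based rank among n researchers."""
    return min(4, 1 + (rank - 1) * 4 // n)

def misclassification_rate(early_ranks, benchmark_ranks):
    """early_ranks, benchmark_ranks: dicts researcher -> 1-based rank."""
    n = len(benchmark_ranks)
    shifted = sum(
        quartile(early_ranks[r], n) != quartile(benchmark_ranks[r], n)
        for r in benchmark_ranks
    )
    return shifted / n

ranks_2005 = {"r1": 1, "r2": 3, "r3": 2, "r4": 4}
ranks_2008 = {"r1": 1, "r2": 2, "r3": 3, "r4": 4}
print(f"quartile misclassification vs 2008: {misclassification_rate(ranks_2005, ranks_2008):.0%}")
```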

    Bibliometric evaluation of research performance: where do we stand?

    This work provides a critical examination of the most popular bibliometric indicators and methodologies used to assess the research performance of individuals and institutions. The aim is to lift the fog and make practitioners more aware of the risks inherent in do-it-yourself practices, or in cozy off-the-shelf solutions to the difficult question of how to evaluate research. The manuscript also proposes what we believe is the correct approach to bibliometric evaluation of research performance.

    What is the appropriate length of the publication period over which to assess research performance?

    National research assessment exercises are conducted in different nations over varying time periods. The choice of the publication period to observe has to address often contrasting needs: it must ensure the reliability of the results issuing from the evaluation, but also permit frequent assessments. In this work we attempt to identify the most appropriate, or optimal, publication period to observe. To do so, we analyze the variation of individual researchers' productivity rankings with the length of the publication period within the 2003-2008 period, for the over 30,000 Italian university scientists in the hard sciences. First we analyze the variation in rankings for pairs of contiguous and overlapping publication periods, and show that the variations reduce markedly for periods above three years. We then show the strong randomness of performance rankings over publication periods of under three years. We conclude that the choice of a three-year publication period seems reliable, particularly for physics, chemistry, biology and medicine.
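
    The sketch below illustrates, on invented toy data, how rankings from contiguous publication periods of a given length can be compared; the use of Kendall's tau and the yearly output values are assumptions, not the paper's actual methodology or data.

```python
# Hypothetical sketch: agreement between contiguous publication periods of a given
# length, measured as rank correlation of researcher productivity in each period.
from scipy.stats import kendalltau

# output_by_year[researcher][year] -> (field-normalized) output in that year; toy values
output_by_year = {
    "r1": {2003: 2, 2004: 1, 2005: 3, 2006: 2, 2007: 1, 2008: 2},
    "r2": {2003: 0, 2004: 2, 2005: 1, 2006: 0, 2007: 3, 2008: 1},
    "r3": {2003: 1, 2004: 1, 2005: 1, 2006: 2, 2007: 2, 2008: 2},
}

def productivity(researcher, start, length):
    return sum(output_by_year[researcher].get(start + i, 0) for i in range(length))

for length in (1, 2, 3):
    a = [productivity(r, 2003, length) for r in output_by_year]          # first period
    b = [productivity(r, 2003 + length, length) for r in output_by_year] # contiguous period
    tau, _ = kendalltau(a, b)
    print(f"{length}-year periods: Kendall tau between contiguous rankings = {tau:.2f}")
```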

    National research assessment exercises: a comparison of peer review and bibliometrics rankings

    The development of bibliometric techniques has reached such a level as to suggest their integration with, or total substitution for, classic peer review in national research assessment exercises, at least as far as the hard sciences are concerned. In this work we compare the rankings lists of universities produced by the first Italian evaluation exercise, conducted through peer review, with the results of bibliometric simulations. The comparison shows great differences between the peer-review and bibliometric rankings for both excellence and productivity.
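
    As a rough illustration of how peer-review and bibliometric rankings can be contrasted, the following sketch measures the overlap of their top quartiles ("excellence") and the average rank shift; the university names, ranks and both agreement measures are hypothetical.

```python
# Hypothetical sketch: agreement between a peer-review ranking and a bibliometric
# ranking of the same universities, as overlap of the top quartile ("excellence")
# and as the average absolute rank shift. Names and ranks are invented.
peer_rank = {"U1": 1, "U2": 2, "U3": 3, "U4": 4, "U5": 5, "U6": 6, "U7": 7, "U8": 8}
biblio_rank = {"U1": 3, "U2": 1, "U3": 6, "U4": 2, "U5": 8, "U6": 4, "U7": 5, "U8": 7}

top = len(peer_rank) // 4  # size of the "excellence" group
peer_top = {u for u, r in peer_rank.items() if r <= top}
biblio_top = {u for u, r in biblio_rank.items() if r <= top}
overlap = len(peer_top & biblio_top) / top

avg_shift = sum(abs(peer_rank[u] - biblio_rank[u]) for u in peer_rank) / len(peer_rank)
print(f"top-quartile overlap: {overlap:.0%}, average rank shift: {avg_shift:.1f} positions")
```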

    Ranking research institutions by the number of highly-cited articles per scientist

    In the literature and on the Web we can readily find research excellence rankings for organizations and countries based either on the total number of highly-cited articles (HCAs) or on the ratio of HCAs to total publications. Neither is an indicator of efficiency. In the current work we propose an indicator of efficiency, the number of HCAs per scientist, which can complement the productivity indicators based on the impact of total output. We apply this indicator to measure excellence in the research of Italian universities as a whole, and in each field and discipline of the hard sciences.
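
    A minimal sketch of the proposed efficiency indicator, HCAs per scientist, under the assumption that "highly cited" means exceeding a field-specific citation threshold (e.g. the top 10% of the world distribution); the articles, thresholds and staff numbers below are invented.

```python
# Hypothetical sketch of the efficiency indicator: highly-cited articles (HCAs)
# per scientist, with "highly cited" taken here as exceeding an assumed
# field-specific citation threshold.
from collections import defaultdict

# articles: (institution, field, citations); top10_threshold[field] -> citation threshold
articles = [
    ("Univ A", "physics", 120), ("Univ A", "physics", 8),
    ("Univ A", "biology", 95),  ("Univ B", "physics", 60),
    ("Univ B", "biology", 300), ("Univ B", "biology", 12),
]
top10_threshold = {"physics": 50, "biology": 80}
research_staff = {"Univ A": 40, "Univ B": 25}

hca = defaultdict(int)
for inst, field, cites in articles:
    if cites >= top10_threshold[field]:
        hca[inst] += 1

for inst, staff in research_staff.items():
    print(f"{inst}: {hca[inst] / staff:.3f} HCAs per scientist")
```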

    A multivariate stochastic model to assess research performance

    There is a worldwide trend towards the application of bibliometric research evaluation, in support of the needs of policy makers and research administrators. However, the assumptions and limitations of bibliometric measurements suggest a probabilistic rather than the traditional deterministic approach to the assessment of research performance. The aim of this work is to propose a multivariate stochastic model for measuring the performance of individual scientists, and to compare the results of its application with those arising from a deterministic approach. The dataset for the analysis covers the 2006-2010 scientific production, indexed in the Web of Science, of over 900 Italian academic scientists working in two distinct fields of the life sciences.
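
    The following sketch conveys the probabilistic idea on toy data: instead of a single deterministic rank per scientist, each scientist's paper impact scores are resampled and a distribution over ranks is reported. The bootstrap resampling scheme is an illustrative assumption, not the paper's actual multivariate stochastic model.

```python
# Hypothetical sketch of a probabilistic ranking: bootstrap each scientist's
# paper impact scores and record how often each rank is obtained, instead of
# reporting a single deterministic rank.
import random
from collections import defaultdict

impact_scores = {  # field-normalized citation scores of each scientist's papers (toy data)
    "s1": [1.2, 0.4, 2.1, 0.9],
    "s2": [0.8, 1.5, 1.1],
    "s3": [3.0, 0.2, 0.3, 0.5, 0.4],
}

rank_counts = defaultdict(lambda: defaultdict(int))
random.seed(0)
for _ in range(10_000):
    perf = {s: sum(random.choices(p, k=len(p))) / len(p) for s, p in impact_scores.items()}
    for rank, s in enumerate(sorted(perf, key=perf.get, reverse=True), start=1):
        rank_counts[s][rank] += 1

for s in impact_scores:
    probs = {r: c / 10_000 for r, c in sorted(rank_counts[s].items())}
    print(s, probs)  # estimated probability of each rank
```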

    How do you define and measure research productivity?

    Productivity is the quintessential indicator of efficiency in any production system. It seems to have become the norm in bibliometrics to define research productivity as the number of publications per researcher, distinguishing it from impact. In this work we operationalize the economic concept of productivity for the specific context of research activity and show the limits of the commonly accepted definition. We then propose a measurable form of research productivity through the indicator "Fractional Scientific Strength (FSS)", in keeping with the microeconomic theory of production. We present the methodology for measurement of FSS at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, we compare the ranking lists of Italian universities under the two definitions of research productivity.
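
    A compact, hypothetical rendering of an FSS-style computation for a single researcher, following the general form described in the authors' related work (citations normalized by a field-year average, weighted by the author's fractional contribution, and scaled by labour cost and period length); the exact normalizations and the toy numbers are assumptions.

```python
# Hypothetical sketch of a Fractional Scientific Strength (FSS) computation for
# one researcher: sum of field-normalized citation impact weighted by fractional
# authorship, divided by average yearly salary cost and number of years.
def fss(publications, avg_yearly_salary, years):
    """publications: list of dicts with 'citations', 'field_year_avg_citations', 'author_share'."""
    impact = sum(
        (p["citations"] / p["field_year_avg_citations"]) * p["author_share"]
        for p in publications
    )
    return impact / (avg_yearly_salary * years)

pubs = [
    {"citations": 14, "field_year_avg_citations": 7.0, "author_share": 0.5},
    {"citations": 3,  "field_year_avg_citations": 6.0, "author_share": 1.0},
]
print(f"FSS = {fss(pubs, avg_yearly_salary=1.0, years=3):.3f}")  # salary normalized to 1
```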

    The suitability of h and g indexes for measuring the research performance of institutions. Scientometrics, 97(3), 555-570

    It is becoming ever more common to use bibliometric indicators to evaluate the performance of research institutions; however, there is often a failure to recognize the limits and drawbacks of such indicators. Since performance measurement is aimed at supporting critical decisions by research administrators and policy makers, it is essential to carry out empirical testing of the robustness of the indicators used. In this work we examine the accuracy of the popular "h" and "g" indexes for measuring university research performance, by comparing the ranking lists derived from their application against the ranking list from a third indicator that better meets the requirements for robust and reliable assessment of institutional productivity. The test population is all Italian universities in the hard sciences, observed over the period 2001-2005. The analysis quantifies the correlations between the three university rankings (by discipline) and the shifts that occur when the indicator changes, in order to measure the distortion inherent in the use of the h and g indexes and their comparative accuracy for assessing institutions.
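
    For reference, a minimal sketch of the two indicators under test, computed from a list of citation counts; the citation counts below are invented.

```python
# Minimal sketch: the h and g indexes computed from an institution's citation counts.
# h = largest h such that h papers have at least h citations each;
# g = largest g such that the g most-cited papers have at least g^2 citations in total.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def g_index(citations):
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

citations = [45, 30, 22, 10, 9, 6, 5, 2, 1, 0]
print(f"h = {h_index(citations)}, g = {g_index(citations)}")
```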

    Evaluating research: from informed peer review to bibliometrics

    National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometric approaches to the conduct of these exercises. The comparison is carried out in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and cost. Empirical evidence shows that, for the natural and formal sciences, the bibliometric methodology is by far preferable to peer review. Setting up national databases of publications by individual authors, derived from the Web of Science or Scopus databases, would allow much better, cheaper and more frequent national research assessments.