758 research outputs found

    The measurement of Italian universities' research productivity by a non parametric-bibliometric methodology

    This paper presents a methodology for measuring the technical efficiency of research activities. It is based on the application of data envelopment analysis to bibliometric data on the Italian university system. For that purpose, different input values (research personnel by level and extra funding) and output values (quantity, quality and level of contribution to scientific publications) are considered. Our study aims to overcome some of the limitations of the methodologies proposed so far in the literature, in particular by surveying the scientific production of universities by author name.

    Assessing technical and cost efficiency of research activities: A case study of the Italian university system

    This paper employs data envelopment analysis (DEA) to assess both the technical and the cost efficiency of research activities in the Italian university system. Unlike both peer review and the top-down, discipline-invariant bibliographic approaches used elsewhere, a bottom-up bibliometric methodology is applied. Publications are assigned first to authors and then to one of nine scientific and technical university disciplinary areas. Inputs are specified as the numbers of full, associate and assistant professors, and outputs as the number of publications, contributions to publications and their scientific impact, as variously measured across the disciplines included. DEA is undertaken cross-sectionally using the averages of these inputs and outputs over the period 2001-2003. The results show considerable variation in the rankings of the disciplinary areas within and across universities, depending on the efficiency indicator employed.
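
    The abstract does not spell out the model, so the following is only a minimal sketch of an input-oriented CCR DEA efficiency computation of the general kind described, assuming scipy is available; the toy inputs (professors by rank) and outputs (publications, normalised impact) are illustrative assumptions, not the study's data.

        import numpy as np
        from scipy.optimize import linprog

        def dea_ccr_input_oriented(X, Y):
            """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs); returns efficiency scores in (0, 1]."""
            n, m = X.shape
            s = Y.shape[1]
            scores = np.empty(n)
            for o in range(n):
                # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
                c = np.zeros(1 + n)
                c[0] = 1.0
                # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
                A_in = np.hstack([-X[o].reshape(m, 1), X.T])
                b_in = np.zeros(m)
                # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
                A_out = np.hstack([np.zeros((s, 1)), -Y.T])
                b_out = -Y[o]
                res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                              b_ub=np.concatenate([b_in, b_out]),
                              bounds=[(0, None)] * (1 + n), method="highs")
                scores[o] = res.fun
            return scores

        # Toy data: inputs = professors by rank, outputs = publications and normalised impact.
        X = np.array([[30.0, 45.0, 60.0], [20.0, 30.0, 55.0], [50.0, 60.0, 80.0]])
        Y = np.array([[120.0, 300.0], [110.0, 310.0], [150.0, 320.0]])
        print(dea_ccr_input_oriented(X, Y))  # a score of 1.0 marks units on the efficient frontier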

    Revisiting size effects in higher education research productivity

    The potential occurrence of variable returns to size in research activity is a factor to be considered in choices about the size of research organizations, and also in the planning of national research assessment exercises, so as to avoid favoring those organizations that would benefit from such an occurrence. The aim of the current work is to address weaknesses in past inquiries concerning returns to size by applying a more accurate and robust research productivity measurement methodology. The method involves field-standardized measurements that are free of the typical distortions of aggregate measurement by discipline or organization. The analysis is conducted for 183 hard-science fields in all 77 Italian universities (2004-2008) and allows detection of potential differences by field.
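
    As a hedged sketch of how returns to size might be probed at the university-field level (not the paper's actual method or data), the snippet below aggregates hypothetical field-normalised researcher output into university-field cells and fits a simple log-log size elasticity per field.

        import numpy as np
        import pandas as pd

        # Hypothetical researcher-level records: field, university, field-normalised output.
        df = pd.DataFrame({
            "field":       ["PHY", "PHY", "PHY", "PHY", "PHY", "CHE", "CHE", "CHE", "CHE", "CHE"],
            "university":  ["U1",  "U1",  "U2",  "U2",  "U2",  "U1",  "U1",  "U2",  "U2",  "U2"],
            "norm_output": [1.2,   0.8,   2.1,   0.4,   1.0,   1.6,   0.9,   0.5,   1.1,   1.3],
        })

        # Aggregate to university-field cells: size = research staff, productivity = mean normalised output.
        cells = (df.groupby(["field", "university"])["norm_output"]
                   .agg(size="count", productivity="mean")
                   .reset_index())

        # For each field, fit log(productivity) = a + b * log(size): b > 0 suggests increasing
        # returns to size, b < 0 decreasing returns, b close to 0 constant returns.
        for field, g in cells.groupby("field"):
            slope, intercept = np.polyfit(np.log(g["size"]), np.log(g["productivity"]), 1)
            print(f"{field}: size elasticity = {slope:.2f}")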

    How do you define and measure research productivity?

    Productivity is the quintessential indicator of efficiency in any production system. It seems to have become the norm in bibliometrics to define research productivity as the number of publications per researcher, distinguishing it from impact. In this work we operationalize the economic concept of productivity for the specific context of research activity and show the limits of the commonly accepted definition. We then propose a measurable form of research productivity through the indicator "Fractional Scientific Strength" (FSS), in keeping with the microeconomic theory of production. We present the methodology for measuring FSS at various levels of analysis: individual, field, discipline, department, institution, region and nation. Finally, we compare the rankings of Italian universities under the two definitions of research productivity.
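
    The abstract does not reproduce the FSS formula, so the sketch below is only a schematic illustration of a field-normalised, fractionalised productivity measure in that spirit; the class, field names and per-year normalisation are assumptions for illustration, not the published definition of FSS.

        from dataclasses import dataclass

        @dataclass
        class Publication:
            citations: float            # citations received by the publication
            field_baseline: float       # average citations of same-year, same-field publications
            author_fraction: float      # researcher's fractional contribution (e.g. 1 / n_authors)

        def fss_like(pubs: list[Publication], years_observed: float) -> float:
            # Sum of field-normalised impact, weighted by fractional authorship, per year observed.
            total = sum((p.citations / p.field_baseline) * p.author_fraction for p in pubs)
            return total / years_observed

        pubs = [Publication(12, 6.0, 0.5), Publication(3, 6.0, 1 / 3), Publication(0, 6.0, 1.0)]
        print(fss_like(pubs, years_observed=5))  # field-normalised, fractionalised output per year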

    Variability of research performance across disciplines within universities in non-competitive higher education systems

    Many nations are adopting higher education strategies that emphasize the development of elite universities able to compete at the international level in attracting skills and resources. Elite universities pursue excellence in all their disciplines and fields of action. The impression is that this does not occur in "non-competitive" education systems, and that instead, within single universities, excellent disciplines coexist with mediocre ones. To test this, the authors measure research productivity in the hard sciences for all Italian universities over the period 2004-2008 at the level of the institution, of its individual disciplines, and of the fields within them. The results show that excellent disciplines are not concentrated in a few universities: top universities contain disciplines and fields that are often mediocre, while generally mediocre universities often include top disciplines.

    A heuristic approach to author name disambiguation in bibliometrics databases for large-scale research assessments

    National exercises for the evaluation of research activity by universities are becoming regular practice in ever more countries. These exercises have mainly been conducted through the application of peer-review methods. Bibliometrics has not been able to offer a valid large-scale alternative because of almost overwhelming difficulties in identifying the true author of each publication. We address this problem by presenting a heuristic approach to author name disambiguation in bibliometric datasets for large-scale research assessments. The application proposed concerns the Italian university system, consisting of 80 universities and a research staff of over 60,000 scientists. The key advantage of the proposed approach is its ease of implementation. The algorithms are of practical application and have considerably better scalability and expandability properties than state-of-the-art unsupervised approaches. Moreover, the performance in terms of precision and recall, which can be further improved, seems thoroughly adequate for the typical needs of large-scale bibliometric research assessments.
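
    The abstract only names the approach, so here is a hedged, toy illustration of a rule-based matching heuristic of that general kind: surname plus first initial retrieves candidates from a hypothetical national roster, and affiliation and subject-category checks prune them. None of the rules, labels or data are taken from the paper.

        from collections import defaultdict

        roster = [  # hypothetical national roster of research staff
            {"id": 1, "surname": "rossi", "initial": "m", "university": "Univ Roma", "field": "PHYSICS"},
            {"id": 2, "surname": "rossi", "initial": "m", "university": "Univ Milano", "field": "CHEMISTRY"},
        ]

        index = defaultdict(list)
        for r in roster:
            index[(r["surname"], r["initial"])].append(r)

        def disambiguate(author: str, addresses: list[str], categories: list[str]):
            """author like 'Rossi, M.'; returns matching roster ids (possibly empty or ambiguous)."""
            surname, initials = [p.strip().lower() for p in author.split(",")]
            candidates = index[(surname, initials[0])]
            # Keep candidates whose university appears in the byline addresses...
            matches = [c for c in candidates
                       if any(c["university"].lower() in a.lower() for a in addresses)]
            # ...and, if still ambiguous, whose field matches the journal's subject category.
            if len(matches) > 1:
                matches = [c for c in matches if c["field"] in categories]
            return [c["id"] for c in matches]

        print(disambiguate("Rossi, M.", ["Univ Roma, Dept Phys, Rome, Italy"], ["PHYSICS"]))  # -> [1]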

    The dispersion of research performance within and between universities as a potential indicator of the competitive intensity in higher education systems

    Higher education systems in competitive environments generally feature top universities that are able to attract top scientists, top students and public and private financing, with notable socio-economic benefits for their region. The same does not hold true for non-competitive systems. In this study we measure the dispersion of research performance within and between universities in the Italian university system, which is typically non-competitive. We also investigate the correlation between research performance and its dispersion across universities. The findings may represent a first benchmark for similar studies in other nations. Furthermore, they lead to policy indications, questioning the effectiveness of selective funding of universities based on national research assessment exercises. The field of observation is composed of all Italian universities active in the hard sciences. Research performance is evaluated using a bibliometric approach, through publications indexed in the Web of Science between 2004 and 2008.
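
    As a hedged sketch of one way to quantify the dispersion the abstract refers to, the snippet below splits the total variation of hypothetical performance scores into between-university and within-university components; both the decomposition and the toy data are illustrative assumptions, not the study's indicator.

        import pandas as pd

        scores = pd.DataFrame({  # hypothetical field-normalised performance scores per researcher
            "university":  ["U1", "U1", "U1", "U2", "U2", "U2", "U3", "U3", "U3"],
            "performance": [1.4,  1.1,  1.3,  0.6,  0.9,  0.7,  1.0,  0.2,  1.8],
        })

        grand_mean = scores["performance"].mean()
        groups = scores.groupby("university")["performance"]

        between = (groups.count() * (groups.mean() - grand_mean) ** 2).sum()
        within = groups.apply(lambda g: ((g - g.mean()) ** 2).sum()).sum()
        total = ((scores["performance"] - grand_mean) ** 2).sum()

        print(f"between-university share: {between / total:.2f}")
        print(f"within-university share:  {within / total:.2f}")  # between + within = total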

    National research assessment exercises: the effects of changing the rules of the game during the game

    National research evaluation exercises provide a comparative measure of the research performance of the nation's institutions, and as such represent a tool for stimulating research productivity, particularly if the results are used to inform selective funding by government. While one school of thought welcomes frequent changes in evaluation criteria in order to prevent the subjects evaluated from adopting opportunistic behaviors, it is evident that the "rules of the game" should above all be functional to policy objectives, and therefore be known sufficiently in advance of the evaluation period. Otherwise, the risk is that policy-makers will find themselves facing a dilemma: should they reward the universities that responded best to the criteria in effect at the outset of the observation period, or those that emerge as best according to rules announced during or after the observation period? This study verifies whether and to what extent some universities are penalized rather than rewarded for good behavior in pursuit of the objectives of the "known" rules of the game, by comparing the research performance of Italian universities for the period of the nation's next evaluation exercise (2004-2008): first as measured according to the criteria available at the outset of the period, and then according to those announced at the end of the period.
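
    To make the comparison concrete, here is a small, hedged sketch contrasting rankings of hypothetical universities under criteria known at the outset versus those announced at the end of the period, using rank shifts and a Spearman correlation; the scores and labels are invented for illustration and are not the study's results.

        import pandas as pd
        from scipy.stats import spearmanr

        perf = pd.DataFrame({
            "university":  ["U1", "U2", "U3", "U4", "U5"],
            "score_known": [0.92, 0.75, 0.81, 0.60, 0.55],  # rules known at the outset
            "score_final": [0.70, 0.88, 0.79, 0.65, 0.50],  # rules announced at the end
        })
        perf["rank_known"] = perf["score_known"].rank(ascending=False)
        perf["rank_final"] = perf["score_final"].rank(ascending=False)
        perf["rank_shift"] = perf["rank_final"] - perf["rank_known"]

        rho, p = spearmanr(perf["score_known"], perf["score_final"])
        print(perf[["university", "rank_known", "rank_final", "rank_shift"]])
        print(f"Spearman rho between the two rankings: {rho:.2f} (p = {p:.2f})")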

    Testing the trade-off between productivity and quality in research activities

    In recent years there has been an increasingly pressing need to evaluate the results of public-sector research activity, particularly to permit the efficient allocation of ever scarcer resources. Many of the studies and evaluation exercises conducted at the national and international level emphasize the quality dimension of research output while neglecting that of productivity. This work tests for a possible correlation between the quantity and the quality of scientific production, to determine whether the most productive researchers are also those who achieve qualitatively better results than their colleagues. The analysis covers the entire Italian university system and is based on the scientific production in the hard sciences of over 26,000 researchers in the period 2001 to 2005. The results show that the output of more productive researchers is superior in quality to that of less productive researchers. The relation between productivity and quality proves largely insensitive to the types of indicators or the test methods applied, and also seems to differ little among the various disciplines examined.
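
    As a hedged illustration of the kind of test described above (not the paper's actual data or indicators), the snippet below correlates a researcher-level productivity measure with an average quality measure using a Spearman rank correlation, which is robust to the skewed distributions typical of bibliometric data; the simulated relationship between the two variables is an assumption made purely for the example.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n = 500
        productivity = rng.lognormal(mean=0.0, sigma=0.8, size=n)   # publications per year (toy)
        # Toy model in which quality rises mildly with productivity, plus noise.
        quality = 1.0 + 0.3 * np.log1p(productivity) + rng.normal(0, 0.4, size=n)

        rho, p = spearmanr(productivity, quality)
        print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # positive rho: no quantity-quality trade-off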

    Evaluating research: from informed peer review to bibliometrics

    National research assessment exercises are becoming regular events in ever more countries. The present work contrasts the peer-review and bibliometric approaches to the conduct of these exercises. The comparison is made in terms of the essential parameters of any measurement system: accuracy, robustness, validity, functionality, time and cost. Empirical evidence shows that, for the natural and formal sciences, the bibliometric methodology is by far preferable to peer review. Setting up national databases of publications by individual authors, derived from the Web of Science or Scopus databases, would allow much better, cheaper and more frequent national research assessments.