
    Publishing performance in economics: Spanish rankings (1990-1999).

    This paper contributes to the growing literature that analyses Spanish publishing performance in Economics throughout the 1990s. Several bibliometric indicators are used to provide Spanish rankings (of both institutions and individual authors) based on Econlit journals. Further, lists of the ten most influential authors and articles over that period, in terms of citations, are reported. Keywords: rankings; economics; bibliometric indicators.

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application. Comment: to be published in Scientometrics, 201
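    Many of the indicators covered by such reviews are computed from the same basic ingredient, a ranked list of an author's citation counts. As an illustrative sketch (not taken from the review itself), the h-index, probably the best-known author-level indicator, can be computed like this:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each (Hirsch's definition)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # this paper still supports a larger h
        else:
            break             # ranked list is sorted, so we can stop
    return h

# Example: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
```

    Most of the 108 reviewed indicators are more elaborate than this, which is precisely the complexity-versus-ease-of-use trade-off the review examines.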

    Evaluation of research activities of universities of Ukraine and Belarus: a set of bibliometric indicators and its implementation

    Monitoring the bibliometric indicators behind university rankings is considered part of a university library's activity. To carry out a comparative assessment of the research activities of universities in Ukraine and Belarus, the authors introduced a set of bibliometric indicators. A comparative assessment of the research activities of the corresponding universities was carried out, and data on the leading universities are presented. One indicator's sensitivity to rapid changes in a university's research activity, together with the other's normalization across fields of science, gives the proposed set an advantage over the one previously used in the corresponding national rankings.
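    The abstract does not name the field-normalized indicator it uses, but a standard example of field normalization is the mean normalized citation score (MNCS), in which each paper's citations are divided by the average citations of papers in its field. This sketch is illustrative only; the reference averages would normally come from a large bibliographic database, and the values below are hypothetical:

```python
def mncs(papers, field_mean):
    """Mean Normalized Citation Score for one institution.
    papers: list of (field, citations) tuples.
    field_mean: expected citations per paper in each field,
    taken from a reference database (hypothetical values here)."""
    scores = [cites / field_mean[field] for field, cites in papers]
    return sum(scores) / len(scores)

ref = {"physics": 8.0, "history": 2.0}           # hypothetical database averages
uni = [("physics", 16), ("physics", 4), ("history", 4)]
print(mncs(uni, ref))  # (2.0 + 0.5 + 2.0) / 3 = 1.5
```

    Because each paper is scored against its own field's baseline, a score above 1.0 means "above world average" regardless of whether the field is highly or sparsely cited, which is why such normalization matters when comparing whole universities.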

    Reviewers' ratings and bibliometric indicators: hand in hand when assessing over research proposals?

    Background: The peer review system has traditionally been challenged due to its many limitations, especially for allocating funding; bibliometric indicators may well present themselves as a complement. Objective: We analyze the relationship between peers' ratings and bibliometric indicators for Spanish researchers in the 2007 National R&D Plan across 23 research fields. Methods: We analyze peers' ratings for 2333 applications. We also gathered principal investigators' research output and impact and studied the differences between accepted and rejected applications. We used the Web of Science database and focused on the 2002-2006 period. First, we analyzed the distribution of granted and rejected proposals over a given set of bibliometric indicators to test whether there are significant differences. Then, we applied a multiple logistic regression analysis to determine whether bibliometric indicators can by themselves explain the awarding of grant proposals. Results: 63.4% of the applications were funded. Bibliometric indicators for accepted proposals showed better prior performance than those for rejected ones; however, the correlation between peer review and bibliometric indicators is very heterogeneous across areas. The logistic regression analysis showed that the main bibliometric indicators explaining the awarding of research proposals are, in most cases, output (number of published articles) and the number of papers published in journals in the first quartile of the Journal Citation Reports ranking. Discussion: Bibliometric indicators predict the awarding of grant proposals at least as well as peer ratings. Social Sciences and Education are the only areas where no relation was found, although this may be due to the limitations of the Web of Science's coverage. These findings encourage the use of bibliometric indicators as a complement to peer review in most of the analyzed areas.
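    The modelling step described above, a logistic regression predicting funding from bibliometric indicators, can be sketched as follows. This is a hypothetical reconstruction on synthetic data, not the paper's actual dataset or coefficients; the two predictors (total output and Q1-journal papers) are the ones the study found most explanatory:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(min(z, 35.0), -35.0)        # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def make_applicant():
    """Synthetic applicant: output, Q1 papers, and a toy funding rule."""
    papers = random.randint(0, 30)       # total published articles
    q1 = random.randint(0, papers)       # of which in Q1 journals
    p = sigmoid(0.15 * papers + 0.2 * q1 - 3)   # illustrative rule only
    return (papers, q1), int(random.random() < p)

data = [make_applicant() for _ in range(300)]

# Plain gradient descent on the logistic loss (no external libraries).
w1 = w2 = b = 0.0
lr = 0.005
for _ in range(1000):
    g1 = g2 = gb = 0.0
    for (x1, x2), y in data:
        err = sigmoid(w1 * x1 + w2 * x2 + b) - y
        g1 += err * x1
        g2 += err * x2
        gb += err
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

acc = sum((sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == bool(y)
          for (x1, x2), y in data) / len(data)
print(f"in-sample accuracy: {acc:.2f}")
```

    In the study's real setting, an in-sample fit like this would be compared against how well the peer ratings themselves separate funded from rejected proposals.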

    Quantitative Analysis of the Italian National Scientific Qualification

    The Italian National Scientific Qualification (ASN) was introduced in 2010 as part of a major reform of the national university system. Under the new regulation, the scientific qualification for a specific role (associate or full professor) and field of study is required in order to apply for a permanent professorship. The ASN is peculiar in that it uses bibliometric indicators with associated thresholds as one of the parameters for assessing applicants. Overall, more than 59,000 applications were submitted, and the results, including the values of the quantitative indicators for each applicant, were made publicly available for a short period of time. The availability of this wealth of information provides an opportunity to draw a fairly detailed picture of a nation-wide evaluation exercise and to study the impact of the bibliometric indicators on the qualification results. In this paper we provide a first account of the Italian ASN from a quantitative point of view. We show that significant differences exist among scientific disciplines, in particular with respect to the fraction of qualified applicants, that cannot be easily explained. Furthermore, we describe some issues related to the definition and use of the bibliometric indicators and thresholds. Our analysis aims to draw attention to potential problems that should be addressed by decision-makers in future ASN rounds. Comment: ISSN 1751-157
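    A threshold rule of the kind the abstract describes can be sketched as follows. The indicator names, discipline code, and median values below are entirely illustrative, not the ASN's official parameters; the sketch only shows the mechanism of comparing an applicant's indicators against per-discipline thresholds:

```python
# Hypothetical per-discipline medians (illustrative values only).
MEDIANS = {"INF/01": {"papers": 12, "citations": 150, "h_index": 6}}

def passes_thresholds(discipline, indicators, required=2):
    """Qualify on the bibliometric step if the applicant exceeds the
    discipline median on at least `required` of the indicators."""
    med = MEDIANS[discipline]
    exceeded = sum(indicators[k] > med[k] for k in med)
    return exceeded >= required

candidate = {"papers": 20, "citations": 90, "h_index": 8}
print(passes_thresholds("INF/01", candidate))  # True: papers and h_index exceed
```

    The paper's point is precisely that the definition of such thresholds, and how they vary across disciplines, can produce hard-to-explain differences in qualification rates.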

    On the calculation of percentile-based bibliometric indicators

    A percentile-based bibliometric indicator is an indicator that values publications based on their position within the citation distribution of their field. The most straightforward percentile-based indicator is the proportion of frequently cited publications, for instance the proportion of publications that belong to the top 10% most frequently cited of their field. Recently, more complex percentile-based indicators have been proposed. A difficulty in the calculation of percentile-based indicators arises from the discrete nature of citation distributions combined with the presence of many publications with the same number of citations. We introduce an approach to calculating percentile-based indicators that deals with this difficulty more satisfactorily than earlier approaches suggested in the literature. We show in a formal mathematical framework that our approach leads to indicators that are not biased in favor of or against particular fields of science.
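    The tie problem can be made concrete with a fractional-counting sketch: when several publications are tied at the top-10% boundary, they share the remaining "top 10%" slots so that exactly 10% of the distribution is counted. This is a generic illustration of the difficulty, not necessarily the specific approach the paper proposes:

```python
def top10_share(citations, p=0.10):
    """Fractional top-p% scores per citation value.
    Publications tied at the boundary each receive a fraction of a
    slot, so the scores always sum to exactly p * len(citations)."""
    n = len(citations)
    target = p * n                         # number of top-p% "slots"
    ranked = sorted(citations, reverse=True)
    scores = {}
    i, filled = 0, 0.0
    while i < n and filled < target:
        j = i
        while j < n and ranked[j] == ranked[i]:
            j += 1                         # extend the current tie group
        group = j - i                      # size of this tie group
        take = min(group, target - filled) # slots left for this group
        scores[ranked[i]] = take / group   # fractional score per publication
        filled += take
        i = j
    return scores

pubs = [7, 7, 5, 4, 3, 3, 2, 1, 1, 0]      # citation counts of 10 publications
print(top10_share(pubs))  # {7: 0.5}: the two tied papers share the one slot
```

    Without fractional counting, either both tied papers count as top-10% (20% of the set) or neither does (0%), and which of the two outcomes occurs can differ between fields, which is the source of the field bias the paper analyzes.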
