
    Ranking of library and information science researchers: Comparison of data sources for correlating citation data, and expert judgments

    This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). Forty-two LIS experts provided judgments on a 5-point scale of the quality of research published by 101 scholars; the median rankings resulting from these judgments were then correlated with h-, g- and H-index values computed using three different sources of citation data: Web of Science (WoS), Scopus and Google Scholar (GS). The two variants of the basic h-index correlated more strongly with peer judgment than did the h-index itself; citation data from Scopus correlated more strongly with the expert judgments than data from GS, which in turn correlated more strongly than data from WoS; correlations from a carefully cleaned version of the GS data differed little from those obtained using swiftly gathered GS data; the indices from the three citation databases resulted in broadly similar rankings of the LIS academics; GS disadvantaged researchers in bibliometrics compared with the other two citation databases, while WoS disadvantaged researchers in the more technical aspects of information retrieval; and experts from the UK and other European countries rated UK academics more highly than did experts from the USA.
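
    For readers unfamiliar with the indicators being correlated, the following Python sketch shows how the h-index and g-index are conventionally computed from a scholar's citation counts, and how such values might be rank-correlated with expert ratings. The citation lists, expert scores, and the choice of Spearman's rho are illustrative assumptions, not data or code from the study.

```python
# Minimal sketch: compute h- and g-index from citation counts and
# rank-correlate them with expert scores. All data below is made up.
from scipy.stats import spearmanr

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers total at least g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical per-scholar citation lists and median expert ratings (1-5 scale).
scholars = {
    "A": [25, 18, 12, 9, 4, 1],
    "B": [40, 3, 2, 1, 0],
    "C": [10, 10, 9, 8, 8, 7, 2],
}
expert_scores = {"A": 4.0, "B": 2.5, "C": 3.5}

h_vals = [h_index(c) for c in scholars.values()]
g_vals = [g_index(c) for c in scholars.values()]
scores = [expert_scores[s] for s in scholars]
rho, p = spearmanr(h_vals, scores)
print(f"h-indices: {h_vals}, g-indices: {g_vals}")
print(f"Spearman rho (h-index vs. experts): {rho:.2f}")
```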

    Citations versus expert opinions: Citation analysis of Featured Reviews of the American Mathematical Society

    Peer review and citation metrics are two means of gauging the value of scientific research, but the lack of publicly available peer review data makes comparing the two methods difficult. Mathematics serves as a useful laboratory for these questions because, as an exact science, it presents a narrow range of reasons for citation. Virtually all published mathematics articles receive post-publication reviews by mathematicians in Mathematical Reviews (MathSciNet), so the data set comprised essentially all Web of Science mathematics publications from 1993 to 2004. For a decade, especially important articles were singled out in Mathematical Reviews as Featured Reviews. In this study, we analyze the bibliometrics of elite articles selected by peer review and by citation count. We conclude that the two notions of significance, being a Featured Review article and being highly cited, are distinct: peer review and citation counts give largely independent determinations of highly distinguished articles. We also consider whether the hiring patterns of subfields, and mathematicians' interest in subfields, reflect the subfields of Featured Review or highly cited articles. Finally, we reexamine data from two earlier studies in light of our methods and draw implications for the peer review/citation-count relationship across a diversity of disciplines.
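
    As an illustration of what "largely independent determinations" means in practice, the sketch below shows one simple way to quantify the overlap between the two elite sets: peer-selected Featured Review articles versus the most highly cited articles. The article identifiers and the choice of Jaccard similarity are hypothetical illustrations, not the paper's method.

```python
# Minimal sketch: measure overlap between two "elite" article sets.
# Article IDs are invented for illustration.
featured = {"a12", "a37", "a58", "a90", "a91"}   # peer-selected (Featured Reviews)
top_cited = {"a37", "a44", "a58", "a77", "a99"}  # citation-selected (most cited)

overlap = featured & top_cited
jaccard = len(overlap) / len(featured | top_cited)
print(f"shared articles: {sorted(overlap)}")
print(f"Jaccard similarity: {jaccard:.2f}")  # near 0: largely independent selections
```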

    Reviewers' ratings and bibliometric indicators: hand in hand when assessing over research proposals?

    Background: The peer review system has traditionally been challenged for its many limitations, especially when allocating funding; bibliometric indicators may serve as a complement. Objective: We analyze the relationship between peers' ratings and bibliometric indicators for Spanish researchers across 23 research fields in the 2007 National R&D Plan. Methods and materials: We analyze peers' ratings for 2333 applications. We also gathered principal investigators' research output and impact and studied the differences between accepted and rejected applications, using the Web of Science database and focusing on the 2002-2006 period. First, we analyzed the distribution of granted and rejected proposals against a given set of bibliometric indicators to test for significant differences. Then, we applied a multiple logistic regression analysis to determine whether bibliometric indicators can by themselves explain the awarding of grant proposals. Results: 63.4% of the applications were funded. Bibliometric indicators for accepted proposals showed better prior performance than those for rejected proposals; however, the correlation between peer review and bibliometric indicators is very heterogeneous across most areas. The logistic regression analysis showed that the bibliometric indicators that best explain the awarding of research proposals in most cases are output (number of published articles) and the number of papers in journals in the first quartile of the Journal Citation Reports ranking. Discussion: Bibliometric indicators predict the awarding of grant proposals at least as well as peer ratings. Social Sciences and Education are the only areas where no relation was found, although this may be due to the limitations of the Web of Science's coverage. These findings encourage the use of bibliometric indicators as a complement to peer review in most of the analyzed areas.
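
    The following Python sketch illustrates the kind of logistic regression the study describes: modeling the funded/rejected outcome from a principal investigator's publication output and first-quartile papers. The data, feature set, and use of scikit-learn are assumptions for illustration, not the study's actual model.

```python
# Minimal sketch: logistic regression predicting grant outcome from
# bibliometric indicators. All data is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [number of published articles, papers in JCR first-quartile journals]
X = np.array([[12, 6], [3, 0], [20, 11], [5, 1], [9, 4], [2, 0], [15, 8], [4, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = funded, 0 = rejected

model = LogisticRegression().fit(X, y)
print("coefficients (output, Q1 papers):", model.coef_[0])
print("P(funded) for a PI with 10 articles, 5 in Q1:",
      model.predict_proba([[10, 5]])[0, 1].round(2))
```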
