
    Inter-field nonlinear transformation of journal impact indicators: The case of the h-index

    Impact indices used for the joint evaluation of research items coming from different scientific fields must be comparable. Often a linear transformation, a normalization or another basic operation, is considered to be enough for providing the correct translation to a unified setting in which all the fields are adequately treated. In this paper it is shown that this is not always true. Attention is centered on the case of the h-index. It is proved that the h-index cannot be translated by means of direct normalization while preserving its genuine meaning. Building on the universality of citation distributions, it is shown that a slight variant of the h-index is necessary for this notion to produce comparable values when applied to different scientific fields. A complete example concerning a group of top scientists is given.

    The first author was supported by Ministerio de Economía, Industria y Competitividad under Research Grants CSO2015-65594-C2-1R and 2R (MINECO/FEDER, UE). The second author was supported by Ministerio de Economía, Industria y Competitividad and FEDER under Research Grant MTM2016-77054-C2-1-P.

    Ferrer Sapena, A., & Sánchez Pérez, E. A. (2019). Inter-field nonlinear transformation of journal impact indicators: The case of the h-index. Journal of Interdisciplinary Mathematics, 22(2), 177-199. doi:10.1080/09720502.2019.1616913
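
    As a minimal illustration of why a direct linear rescaling cannot work for the h-index, the following Python sketch (not taken from the paper) computes the standard index and shows that doubling every citation count need not double it:

    def h_index(citations):
        """Largest h such that at least h papers have at least h citations."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    print(h_index([9, 9, 9]))     # 3
    print(h_index([18, 18, 18]))  # still 3, not 6: h is capped by the paper count

    Because the index depends jointly on paper ranks and citation counts, any correction that acts linearly on citations alone cannot preserve its meaning across fields.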

    Methods for measuring the citations and productivity of scientists across time and discipline

    Publication statistics are ubiquitous in the ratings of scientific achievement, with citation counts and paper tallies factoring into an individual's consideration for postdoctoral positions, junior faculty positions, tenure, and even visa status for international scientists. Citation statistics are designed to quantify individual career achievement, both at the level of a single publication and over an individual's entire career. While some academic careers are defined by a few significant papers (possibly out of many), other academic careers are defined by the cumulative contribution made by the author's publications to the body of science. Several metrics have been formulated to quantify an individual's publication career, yet none of these metrics account for the dependence of citation counts and journal size on time. In this paper, we normalize publication metrics across both time and discipline in order to achieve a universal framework for analyzing and comparing scientific achievement. We study the publication careers of individual authors over the 50-year period 1958-2008 within six high-impact journals: Cell, the New England Journal of Medicine (NEJM), Nature, the Proceedings of the National Academy of Sciences (PNAS), Physical Review Letters (PRL), and Science. In comparing the achievement of authors within each journal, we uncover quantifiable statistical regularity in the probability density function (pdf) of scientific achievement across both time and discipline. The universal distribution of career success within these arenas for publication raises the possibility that a fundamental driving force underlying scientific achievement is the competitive nature of scientific advancement. Comment: 25 pages in one-column preprint format, 7 figures, 4 tables. Version II: changes made in response to referee comments. Note: change in definition of "paper shares".
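
    One way to realize the time-and-discipline normalization described above is to divide each paper's citation count by the average count of its journal-year cohort. The Python sketch below uses invented numbers and is a plausible reading of the approach, not the authors' exact procedure:

    from collections import defaultdict

    # Hypothetical records (journal, year, citations); values are invented.
    papers = [("Nature", 1990, 120), ("Nature", 1990, 30),
              ("PRL", 1990, 40), ("PRL", 1990, 10)]

    totals = defaultdict(int)
    counts = defaultdict(int)
    for journal, year, c in papers:
        totals[(journal, year)] += c
        counts[(journal, year)] += 1

    # Divide each raw count by the average for its journal-year cohort.
    for journal, year, c in papers:
        share = c / (totals[(journal, year)] / counts[(journal, year)])
        print(journal, year, share)  # both journals yield 1.6 and 0.4

    After this rescaling, the two journals' very different raw counts map onto identical normalized profiles, which is the kind of collapse the paper reports.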

    Universality of citation distributions: towards an objective measure of scientific impact

    We study the distributions of citations received by a single publication within several disciplines, spanning broad areas of science. We show that the probability that an article is cited c times has large variations between different disciplines, but all distributions are rescaled onto a universal curve when the relative indicator c_f = c/c_0 is considered, where c_0 is the average number of citations per article for the discipline. In addition, we show that the same universal behavior occurs when citation distributions of articles published in the same field, but in different years, are compared. These findings provide a strong validation of c_f as an unbiased indicator of citation performance across disciplines and years. Based on this indicator, we introduce a generalization of the h-index suitable for comparing scientists working in different fields. Comment: 7 pages, 5 figures. Accepted for publication in Proc. Natl Acad. Sci. USA.
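
    A minimal sketch of the rescaled indicator and a Hirsch-type index built on it, assuming a known field average c_0; the exact form of the paper's generalized index may differ:

    def generalized_h(citations, c0):
        """Hirsch-type index on rescaled citations c_f = c / c0.

        One plausible reading: the largest rank r such that the r-th
        paper (by decreasing c_f) has c_f >= r.
        """
        cf = sorted((c / c0 for c in citations), reverse=True)
        return sum(1 for rank, v in enumerate(cf, start=1) if v >= rank)

    # Two scientists with proportional records in fields whose average
    # citations per article differ by a factor of ten (invented data).
    print(generalized_h([120, 90, 60, 30], c0=30))  # 2
    print(generalized_h([12, 9, 6, 3], c0=3))       # 2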

    The structure of the Arts & Humanities Citation Index: A mapping on the basis of aggregated citations among 1,157 journals

    Using the Arts & Humanities Citation Index (A&HCI) 2008, we apply mapping techniques previously developed for mapping journal structures in the Science and Social Sciences Citation Indices. Citation relations among the 110,718 records were aggregated at the level of the 1,157 journals specific to the A&HCI, and we examine whether a cognitive structure can be reconstructed and visualized from these journal structures. Both cosine normalization (bottom up) and factor analysis (top down) suggest a division into approximately twelve subsets. The relations among these subsets are explored using various visualization techniques. However, we were not able to retrieve this structure using the ISI Subject Categories, including the 25 categories that are specific to the A&HCI. We discuss options for validation, for example against the categories of the Humanities Indicators of the American Academy of Arts and Sciences and the panel structure of the European Reference Index for the Humanities (ERIH), and we compare our results with the curriculum organization of the Humanities Section of the College of Letters and Sciences of UCLA as an example of institutional organization.
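
    The bottom-up step can be sketched as follows: each journal is represented by its vector of aggregated citations to all journals, and journal pairs are compared by cosine similarity. The toy matrix below is invented; the actual analysis involves 1,157 journals.

    import numpy as np

    # Hypothetical journal-by-journal citation matrix: entry (i, j) counts
    # citations from journal i to journal j (invented values).
    C = np.array([[50.0, 10.0, 2.0],
                  [12.0, 40.0, 1.0],
                  [1.0, 2.0, 30.0]])

    # Cosine-normalize each citing profile, then compare all pairs.
    unit = C / np.linalg.norm(C, axis=1, keepdims=True)
    similarity = unit @ unit.T
    print(np.round(similarity, 2))

    Clustering such a similarity matrix is what yields the journal subsets that the factor-analytic (top-down) decomposition is then compared against.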

    Scientific impact evaluation and the effect of self-citations: mitigating the bias by discounting h-index

    In this paper, we propose a measure of scientific impact that discounts self-citations and does not require any prior knowledge of their distribution among publications. This index can be applied to both researchers and journals. In particular, we show that it fills a gap left by the h-index and similar measures, which do not take the effect of self-citations into account when evaluating the impact of authors or journals. The paper provides two real-world examples: in the first, we evaluate the research impact of the most productive scholars in Computer Science (according to DBLP); in the second, we revisit the impact of the journals ranked in the 'Computer Science Applications' section of SCImago. We observe that self-citations, in many cases, affect the rankings obtained according to different measures (including the h-index and the ch-index), and show how the proposed measure mitigates this effect.
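
    A minimal sketch of the idea, assuming self-citations are simply subtracted from each paper's count before a Hirsch-type index is computed; this subtraction rule is an illustrative simplification, not the paper's exact definition of the proposed index.

    def discounted_h(papers):
        """h-index over citation counts with self-citations removed.

        `papers` is a list of (total_citations, self_citations) pairs;
        the input format and subtraction rule are assumptions for
        illustration only.
        """
        effective = sorted((max(c - s, 0) for c, s in papers), reverse=True)
        return sum(1 for rank, c in enumerate(effective, start=1) if c >= rank)

    # Invented record: heavy self-citation on the third paper lowers the
    # index from 4 (undiscounted) to 3.
    print(discounted_h([(10, 0), (8, 0), (5, 3), (4, 0)]))  # 3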

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application. Comment: to be published in Scientometrics, 2014.