448 research outputs found

    Universality of Performance Indicators based on Citation and Reference Counts

    Full text link
    We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or department. We show similar behaviour for publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that, for reasonably well cited papers, the distribution of these indicators is well fitted by a lognormal with a variance of around 1.3, consistent with the results of Radicchi, Fortunato, and Castellano (2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures.
    Comment: 15 pages, 14 figures, 11 pages of supplementary material. Submitted to Scientometrics.
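    The relative indicator of Radicchi, Fortunato, and Castellano (2008) divides a paper's citation count by the mean count for its field and year. Below is a minimal sketch of that indicator and of the log-average the abstract recommends (Python; the sample numbers are hypothetical, and the new citation-plus-reference index is not specified in the abstract, so only the citation-based indicator is shown):

```python
import numpy as np

def relative_indicator(citations, field_year_mean):
    # Radicchi-style relative indicator c_f = c / c0, where c0 is the
    # mean citation count of papers from the same field and year.
    return citations / field_year_mean

# Hypothetical citation counts for two fields with different citation norms.
bio = np.array([120.0, 45.0, 80.0, 200.0, 15.0])
maths = np.array([12.0, 4.0, 8.0, 20.0, 2.0])

cf_bio = relative_indicator(bio, bio.mean())
cf_maths = relative_indicator(maths, maths.mean())

# Averaging log(c_f) tames the long tail of the citation distribution, so
# the two fields can be compared without access to the whole citation graph.
print(np.log(cf_bio).mean(), np.log(cf_maths).mean())
```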

    On the analogy between the evolution of thermodynamic and bibliometric systems: a breakthrough or just a bubble?

    Get PDF
    This paper presents an in-depth study of an interesting analogy, recently proposed by Prathap (Scientometrics 87(3):515-524, 2011a), between the evolution of thermodynamic and bibliometric systems. The goal is to highlight some weaknesses and clarify some "dark sides" in the conceptual framework of this analogy, discussing the formal validity and practical meaning of the concepts of Energy (E), Exergy (X) and Entropy (S) in bibliometrics. Specifically, the analogy exhibits the following major criticalities: (1) the definitions of E and X are controversial, (2) the equivalence classes of E and X are questionable, (3) the parallel between the evolution of thermodynamic and bibliometric systems is forced, (4) X is a non-monotonic performance indicator, and (5) in bibliometrics the condition of "thermodynamic perfection" is questionable. The argument is supported by several analytical demonstrations and practical examples.
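    The quantities in dispute can be stated compactly. A minimal sketch (Python), assuming the definitions usually quoted from Prathap's framework: for P papers with citations c_j, total C = Σ c_j and mean impact i = C/P, energy E = Σ c_j², exergy X = iC = C²/P, and entropy S = E − X. These formulas are assumptions of the sketch, not taken from the abstract:

```python
def eee(citations):
    # Energy-Exergy-Entropy of a publication set, assuming Prathap's
    # definitions: E = sum(c_j^2), X = C^2 / P, S = E - X.
    # S >= 0 by the Cauchy-Schwarz inequality (S equals P times the
    # variance of the citation counts).
    P = len(citations)
    C = sum(citations)
    E = sum(c * c for c in citations)  # "energy"
    X = C * C / P                      # "exergy" = (C / P) * C
    return E, X, E - X                 # (E, X, entropy S)

print(eee([10, 10, 10]))  # uniform record: S = 0, "thermodynamic perfection"
print(eee([28, 1, 1]))    # same C = 30 and P = 3, but skewed: S = 486
```

    Both example records share C and P, and therefore the same exergy X, which illustrates the equivalence-class objection (2) raised in the abstract.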

    A review of the characteristics of 108 author-level bibliometric indicators

    Get PDF
    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. The paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application.
    Comment: to be published in Scientometrics, 201
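    As an example of an established indicator alongside one of its variants, here is a short sketch (Python) computing Hirsch's h-index and Egghe's g-index from a citation list; the standard definitions are assumed, and whether either figures among the 108 reviewed indicators is not stated in the abstract:

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

def g_index(citations):
    # Egghe's g-index: largest g such that the top g papers have
    # at least g^2 citations in total.
    cs = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cs, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [25, 8, 5, 3, 3, 1, 0]          # hypothetical citation record
print(h_index(papers), g_index(papers))  # -> 3 6
```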

    Bibliometric scoring of an individual’s research output in science and engineering

    Get PDF
    The relevance of various citation metrics used to parameterize the research outputs of scientists is reviewed. The rationale of judging the performance of scientists on the basis of the total number of research papers published, the total citations received by these papers, or the average citation count per paper has often been criticized. The significance of the impact factor of the journals in which the papers have appeared has also been debated. The h-index introduced by Jorge E. Hirsch in 2005 has gained some acceptance in this regard, but its value is highly dependent on the academic discipline concerned and also varies across sub-disciplines. Because citation practices exhibit wide variations among different fields, a scientist working in a particular discipline need not be disheartened by a low h-index compared to fellow scientists of a different discipline. The h-index has been successful in assessing the performance of scientists in the same field and at the same stage of their careers. By appropriately scaling the h-index for its discipline dependence, it has also enabled comparison among those working in different disciplines, serving as a simplified, robust, intelligible measure. Several metrics proposed to overcome the flaws of the h-index are briefly described.
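    The point about scaling away discipline dependence can be made concrete. A minimal sketch (Python), where the rule h / ⟨h⟩_discipline and the baseline numbers are illustrative assumptions, not a method from the paper:

```python
# Hypothetical discipline baselines: mean h-index of established
# researchers in each field (illustrative numbers only).
DISCIPLINE_MEAN_H = {"cell biology": 30.0, "mathematics": 10.0}

def scaled_h(h, discipline):
    # Rescale an h-index by its discipline's baseline so that values
    # from fields with different citation habits become comparable.
    return h / DISCIPLINE_MEAN_H[discipline]

# A mathematician with h = 12 compares favourably with a cell biologist
# with h = 24 once the discipline baselines are factored in.
print(scaled_h(24, "cell biology"))  # 0.8
print(scaled_h(12, "mathematics"))   # 1.2
```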

    A causational analysis of scholars' years of active academic careers vis-à-vis their academic productivity and academic influence

    Get PDF
    Taking the scholarly activities of 73 doctoral program mentors working at the Chinese Academy of Medical Sciences & Peking Union Medical College (CAMS & PUMC) as the sample of our investigative survey, we used statistical methods such as analysis of variance (ANOVA), factor analysis and correlation analysis to compare the different characteristics of scholarship assessment of Chinese medical scholars as exhibited in their papers published in domestic and foreign journals. Our research findings show that citations per paper and the A-index are more suitable for assessing the domestic and international scholarly attainment of highly accomplished senior Chinese medical professionals (e.g. academicians). In contrast, the m-quotient is not deemed appropriate for assessing their academic influence either at home or abroad. Upon further analysis of 6 evaluative indicators, we noticed that these indicators might be applied in two different aspects: one is Chinese scholars' academic influence at home, evaluated mainly from the perspective of the "total" and "average" amounts of both publications and citations; the other is their academic impact as embodied in documents retrieved from the Web of Science, assessed mainly from the two viewpoints of publications and citations. It is suggested that the accumulated length of time of a given scholar's active engagement in professional practice in a specific subject area be taken into consideration when assessing a researcher's performance at home and abroad.
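    The indicators named above have standard definitions in the bibliometric literature. A minimal sketch, assuming those standard forms (citations per paper; Jin's A-index as the mean citation count of the h most cited papers; the m-quotient as h divided by career length in years):

```python
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cs = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cs, start=1) if c >= rank)

def citations_per_paper(citations):
    return sum(citations) / len(citations)

def a_index(citations):
    # Jin's A-index: mean citation count of the h most cited papers.
    cs = sorted(citations, reverse=True)
    h = h_index(cs)
    return sum(cs[:h]) / h if h else 0.0

def m_quotient(citations, career_years):
    # Hirsch's m-quotient: h-index divided by years since the first paper.
    return h_index(citations) / career_years

record = [40, 22, 15, 9, 4, 1]              # hypothetical citation record
print(citations_per_paper(record))          # 15.17 (h = 4 here)
print(a_index(record))                      # 21.5, mean of the h-core
print(m_quotient(record, career_years=20))  # 0.2
```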

    Quantifying Success in Science: An Overview

    Get PDF
    Quantifying success in science plays a key role in guiding funding allocation, recruitment decisions, and rewards. Recently, significant progress has been made towards quantifying success in science, but the lack of a detailed analysis and summary of this progress remains a practical issue. The literature reports the factors influencing scholarly impact, together with evaluation methods and indices aimed at overcoming this weakness. We focus on categorizing and reviewing current developments in evaluation indices of scholarly impact, including paper impact, scholar impact, and journal impact. In addition, we summarize the issues of existing evaluation methods and indices, investigate open issues and challenges, and provide possible solutions, including patterns of collaboration impact, unified evaluation standards, implicit success-factor mining, dynamic academic network embedding, and scholarly impact inflation. This paper should help researchers obtain a broader understanding of quantifying success in science and identify potential research directions.

    Inter-field nonlinear transformation of journal impact indicators: The case of the h-index

    Full text link
    Impact indices used for the joint evaluation of research items coming from different scientific fields must be comparable. Often a linear transformation (a normalization or another basic operation) is considered enough to provide the correct translation to a unified setting in which all the fields are adequately treated. In this paper it is shown that this is not always true. Attention is centered on the case of the h-index. It is proved that the h-index cannot be translated by means of a direct normalization while preserving its genuine meaning. Building on the universality of citation distributions, it is shown that a slight variant of the h-index is necessary for this notion to produce comparable values when applied to different scientific fields. A complete example concerning a group of top scientists is shown.
    Ferrer Sapena, A.; Sánchez Pérez, E.A. (2019). Inter-field nonlinear transformation of journal impact indicators: The case of the h-index. Journal of Interdisciplinary Mathematics, 22(2), 177-199. https://doi.org/10.1080/09720502.2019.1616913
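    One way to obtain such a "slight variant" is to rescale each paper's citation count by its field-and-year mean, in the spirit of universal citation distributions, before applying the Hirsch rule. The sketch below (Python) is an illustration of that idea, not the authors' exact construction; the reference scale is an arbitrary choice:

```python
def normalized_h(papers):
    # h-index computed on field-normalized citation counts.
    # `papers` is a list of (citations, field_year_mean) pairs; each count
    # is rescaled to c / c0 and multiplied by a common reference scale so
    # that fields with different citation habits become comparable.
    REFERENCE_MEAN = 10.0  # common scale; an arbitrary illustrative choice
    rescaled = sorted((c / c0 * REFERENCE_MEAN for c, c0 in papers),
                      reverse=True)
    return sum(1 for rank, c in enumerate(rescaled, start=1) if c >= rank)

# A mathematician's record in a field with a low mean citation rate c0 = 5.
maths = [(12, 5.0), (9, 5.0), (7, 5.0), (4, 5.0), (2, 5.0)]
print(normalized_h(maths))  # rescaled counts 24, 18, 14, 8, 4 -> h = 4
```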