51,388 research outputs found

    Analysis of the Hirsch index's operational properties

    The h-index is a relatively recent bibliometric indicator for assessing the research output of scientists, based on their publications and the corresponding citations. Thanks to its easy calculation and immediate intuitive meaning, this indicator has become very popular in the scientific community, although it has also received criticism, essentially because of its "low" accuracy. The contribution of this paper is a detailed analysis of the h-index from the point of view of the indicator's operational properties. This work can help to better understand the peculiarities and limits of h and to avoid its misuse. Finally, we suggest an additional indicator (f) that complements h with information related to publication age, without compromising the original simplicity and immediacy of understanding.
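The easy calculation the abstract mentions can be made concrete with a short sketch. Following Hirsch's standard definition (the largest h such that h of an author's papers each have at least h citations), a minimal Python implementation looks like this; the sample citation counts are hypothetical:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    h = 0
    # Rank papers by citations, most-cited first, and advance h
    # while the i-th ranked paper still has at least i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical author with five papers:
print(h_index([10, 8, 5, 4, 3]))  # -> 4 (four papers with >= 4 citations)
```

The loop stops at the first rank where the citation count drops below the rank, which is exactly the "intuitive meaning" the abstract refers to.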

    The Hirsch spectrum: a novel tool for analysing scientific journals

    This paper introduces the Hirsch spectrum (h-spectrum) for analyzing the academic reputation of a scientific journal. The h-spectrum is a novel tool based on the Hirsch (h) index. It is easy to construct: for a specific journal in a specific interval of time, the h-spectrum is defined as the distribution of the h-indexes associated with the authors of the journal's articles. This tool makes it possible to define a reference profile of the typical author of a journal, to compare different journals within the same scientific field, and to provide a rough indication of the prestige/reputation of a journal in the scientific community. An h-spectrum can be associated with every journal. Ten specific journals in the Quality Engineering/Quality Management field are analyzed so as to preliminarily investigate the h-spectrum's characteristics.
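The construction described above (compute each author's h-index, then tally the distribution) can be sketched in a few lines. The input format here is an assumption for illustration: a dict mapping each author of a journal's articles to that author's citation counts over the chosen time window.

```python
from collections import Counter

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def h_spectrum(author_citations):
    """h-spectrum: the distribution of the h-indexes of a journal's
    authors. `author_citations` is a hypothetical input format:
    {author: [citation counts of that author's papers]}."""
    return Counter(h_index(cites) for cites in author_citations.values())

# Hypothetical data for three authors:
spectrum = h_spectrum({
    "author A": [3, 2, 1],   # h = 2
    "author B": [5, 5, 5],   # h = 3
    "author C": [1],         # h = 1
})
print(dict(spectrum))  # each h value (2, 3, 1) occurs once
```

The resulting histogram is what the paper treats as the journal's reference profile; comparing two journals amounts to comparing these distributions.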

    A framework for the measurement and prediction of an individual scientist's performance

    Quantitative bibliometric indicators are widely used to evaluate the performance of scientists. However, traditional indicators rely little on analysis of the processes they are intended to measure or on the practical goals of the measurement. In this study, I propose a simple framework to measure and predict an individual researcher's scientific performance that takes into account the main regularities of publication and citation processes and the requirements of practical tasks. Statistical properties of the new indicator, a scientist's personal impact rate, are illustrated by its application to a sample of Estonian researchers.

    Benchmarking citation measures among the Australian education professoriate

    Individual researchers and the organisations for which they work are interested in comparative measures of research performance for a variety of purposes. Such comparisons are facilitated by quantifiable measures that are easily obtained and offer convenience and a sense of objectivity. One popular measure is the Journal Impact Factor, based on citation rates, but it is a measure intended for journals rather than individuals. Moreover, educational research publications are not well represented in the databases most widely used for calculating citation measures, leading to doubts about the usefulness of such measures in education. Newer measures and data sources offer alternatives that provide wider representation of education research. However, research has shown that citation rates vary by discipline, and valid comparisons depend upon the availability of discipline-specific benchmarks. This study sought to provide such benchmarks for Australian educational researchers based on analysis of citation measures obtained for the Australian education professoriate.

    Scientific impact evaluation and the effect of self-citations: mitigating the bias by discounting h-index

    In this paper, we propose a measure to assess scientific impact that discounts self-citations and does not require any prior knowledge of their distribution among publications. This index can be applied to both researchers and journals. In particular, we show that it fills a gap left by the h-index and similar measures, which do not take into account the effect of self-citations when evaluating the impact of authors or journals. The paper provides two real-world examples: in the former, we evaluate the research impact of the most productive scholars in Computer Science (according to DBLP); in the latter, we revisit the impact of the journals ranked in the 'Computer Science Applications' section of SCImago. We observe how self-citations, in many cases, affect the rankings obtained according to different measures (including the h-index and ch-index), and show how the proposed measure mitigates this effect.
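The abstract does not spell out the discounting formula, so the sketch below is not the authors' exact measure. It illustrates one simple way self-citations can bias the h-index and be discounted: subtract each paper's self-citations from its citation count before ranking. All data and the input format (a list of `(total_citations, self_citations)` pairs) are hypothetical.

```python
def h_from_counts(counts):
    """Standard h-index over a list of citation counts."""
    h = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def discounted_h(papers):
    """Illustrative self-citation-discounted h-index (an assumption,
    not the paper's definition): compute h on citations received
    from others only. `papers` is a list of
    (total_citations, self_citations) pairs."""
    return h_from_counts([total - own for total, own in papers])

# Hypothetical author; the fourth paper's citations are all self-citations.
papers = [(10, 2), (8, 0), (5, 1), (4, 4)]
print(h_from_counts([t for t, _ in papers]))  # -> 4 (inflated by self-citations)
print(discounted_h(papers))                   # -> 3 after discounting
```

Comparing the two values shows the kind of ranking shift the paper reports when self-citations are removed from the calculation.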

    A review of the characteristics of 108 author-level bibliometric indicators

    An increasing demand for bibliometric assessment of individuals has led to a growth of new bibliometric indicators as well as new variants or combinations of established ones. The aim of this review is to contribute objective facts about the usefulness of bibliometric indicators of the effects of publication activity at the individual level. This paper reviews 108 indicators that can potentially be used to measure performance at the individual author level, and examines the complexity of their calculations in relation to what they are supposed to reflect and their ease of end-user application. (To be published in Scientometrics.)

    Inter-field nonlinear transformation of journal impact indicators: The case of the h-index

    Impact indices used for the joint evaluation of research items coming from different scientific fields must be comparable. Often a linear transformation (a normalization or another basic operation) is considered enough to provide the correct translation to a unified setting in which all fields are adequately treated. In this paper it is shown that this is not always true. Attention is centered on the case of the h-index. It is proved that it cannot be translated by means of direct normalization while preserving its genuine meaning. Based on the universality of citation distributions, it is shown that a slight variant of the h-index is necessary for this notion to produce comparable values when applied to different scientific fields. A complete example concerning a group of top scientists is shown. (Ferrer Sapena, A., & Sánchez Pérez, E. A. (2019). Inter-field nonlinear transformation of journal impact indicators: The case of the h-index. Journal of Interdisciplinary Mathematics, 22(2), 177-199. https://doi.org/10.1080/09720502.2019.1616913)