
    A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011

    In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old "crown" indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, CWTS data are not publicly available for testing and criticism. We therefore comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new "crown" indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
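
    As a hedged illustration of the kind of test at issue here (not the authors' exact procedure), the sketch below computes a Spearman rank correlation between peer-review ratings and CPP/FCSm values; all numbers are invented.

    from scipy.stats import spearmanr

    # Hypothetical data: peer-review quality ratings for ten research groups
    # and their CPP/FCSm citation scores. Illustrative values only.
    peer_ratings = [4, 5, 3, 4, 2, 5, 3, 4, 2, 3]
    cpp_fcsm = [1.2, 0.9, 1.5, 1.1, 0.7, 1.8, 0.8, 1.3, 1.0, 0.6]

    rho, p_value = spearmanr(peer_ratings, cpp_fcsm)
    print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
    # A non-significant p-value would mean the indicator cannot be assumed
    # to track peer judgment in this sample.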

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central-tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated.
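
    A minimal sketch of the percentile idea, under a simplified reading of the abstract rather than Leydesdorff's exact definition of I3: each paper receives its percentile rank within a reference citation distribution, and the article-level ranks are then aggregated.

    from scipy.stats import percentileofscore

    # Illustrative reference distribution of citation counts for a field,
    # and the citation counts of the papers being evaluated.
    reference = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 40]
    papers = [1, 5, 21]

    # Assign each paper its percentile rank, then aggregate at group level.
    percentiles = [percentileofscore(reference, c, kind="weak") for c in papers]
    i3_like = sum(percentiles)
    print(percentiles, i3_like)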

    On the correlation between bibliometric indicators and peer review: reply to Opthof and Leydesdorff

    Opthof and Leydesdorff (Scientometrics, 2011) reanalyze data reported by Van Raan (Scientometrics 67(3):491–502, 2006) and conclude that there is no significant correlation between average citation scores measured using the CPP/FCSm indicator on the one hand and the quality judgment of peers on the other. We point out that Opthof and Leydesdorff draw their conclusions from a very limited amount of data. We also criticize the statistical methodology they use. Using a larger amount of data and a more appropriate statistical methodology, we do find a significant correlation between the CPP/FCSm indicator and peer judgment.

    Are citations from clinical trials evidence of higher impact research? An analysis of ClinicalTrials.gov

    An important way in which medical research can translate into improved health outcomes is by motivating or influencing clinical trials that eventually lead to changes in clinical practice. Citations from clinical trial records to academic research may therefore serve as an early warning of the likely future influence of the cited articles. This paper partially assesses this hypothesis by testing whether prior articles referenced in ClinicalTrials.gov records are more highly cited than average for the publishing journal. The results from four high-profile general medical journals support the hypothesis, although there may not be a cause-and-effect relationship. Nevertheless, it is reasonable for researchers to use citations to their work from clinical trial records as partial evidence of the possible long-term impact of their research.
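
    Because citation counts are skewed, a non-parametric comparison suits the question posed above. The sketch below, with invented numbers, uses a one-sided Mann-Whitney U test to ask whether trial-referenced articles outperform other articles from the same journal; the paper's own method may differ.

    from scipy.stats import mannwhitneyu

    # Hypothetical citation counts for articles referenced in
    # ClinicalTrials.gov records versus other articles from the same journal.
    cited_in_trials = [34, 55, 21, 89, 47, 62, 30]
    journal_baseline = [12, 8, 25, 19, 5, 33, 14, 22]

    u_stat, p_value = mannwhitneyu(cited_in_trials, journal_baseline,
                                   alternative="greater")
    print(f"U = {u_stat}, one-sided p = {p_value:.4f}")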

    Are Mendeley Reader Counts Useful Impact Indicators in all Fields?

    Reader counts from the social reference-sharing site Mendeley are known to be valuable for early research evaluation. They have strong correlations with citation counts for journal articles but appear about a year earlier. There are disciplinary differences in the value of Mendeley reader counts, but systematic evidence at the level of narrow fields is needed to reveal their extent. In response, this article compares Mendeley reader counts with Scopus citation counts for journal articles from 2012 in 325 narrow Scopus fields. Despite strong positive correlations in most fields, averaging 0.671, the correlations in some fields are as weak as 0.255. Technical reasons explain most of the weaker correlations, suggesting that the underlying relationship is almost always strong. The exceptions are caused by unusually high educational or professional use, or by topics of interest within countries that avoid Mendeley. The findings suggest that, if care is taken, Mendeley reader counts can be used as early citation-impact evidence in almost all fields, and as evidence of related impact in some of the remainder. As an additional application of the results, cross-checking with Mendeley data can be used to identify indexing anomalies in citation databases.
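
    The field-level comparison described above could look like the following sketch: a per-field Spearman correlation between Mendeley reader counts and Scopus citation counts, flagging fields where the correlation is weak. Field names and counts are hypothetical.

    from scipy.stats import spearmanr

    # Hypothetical (reader counts, citation counts) per narrow field.
    fields = {
        "Oncology": ([10, 4, 25, 7, 18], [30, 12, 60, 20, 45]),
        "Nursing": ([50, 2, 1, 40, 3], [5, 6, 4, 3, 7]),
    }

    for name, (readers, citations) in fields.items():
        rho, _ = spearmanr(readers, citations)
        note = "  <- weak: check for educational/professional use" if rho < 0.3 else ""
        print(f"{name}: rho = {rho:.3f}{note}")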

    Confidence intervals for normalised citation counts: Can they delimit underlying research capability?

    Normalised citation counts are routinely used to assess the average impact of research groups or nations. There is controversy over whether confidence intervals for them are theoretically valid or practically useful. In response, this article introduces the concept of a group's underlying research capability to produce impactful research. It then investigates whether confidence intervals could delimit the underlying capability of a group in practice. From 123,120 confidence-interval comparisons for the average citation impact of the national outputs of ten countries within 36 individual large monodisciplinary journals, moderately fewer than 95% of subsequent indicator values fall within 95% confidence intervals from prior years, with the percentage declining over time. This is consistent with confidence intervals effectively delimiting the research capability of a group, although it does not prove that this is the cause of the results. The results are unaffected by whether internationally collaborative articles are included.
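
    One way to attach a confidence interval to a group's mean normalised citation score is bootstrap resampling, sketched below with invented scores; the article itself works with national outputs inside large monodisciplinary journals and may use a different interval construction.

    import random

    random.seed(0)
    # Hypothetical field-normalised citation scores for one group's papers.
    normalised_scores = [0.2, 1.5, 0.8, 3.1, 0.0, 0.9, 2.4, 0.5, 1.1, 0.7]

    def bootstrap_ci(data, n_boot=10_000, alpha=0.05):
        # Resample with replacement, collect the mean of each resample,
        # and read off the empirical (alpha/2, 1 - alpha/2) quantiles.
        means = sorted(
            sum(random.choices(data, k=len(data))) / len(data)
            for _ in range(n_boot)
        )
        return means[int(n_boot * alpha / 2)], means[int(n_boot * (1 - alpha / 2))]

    print("95% CI for the mean normalised score:", bootstrap_ci(normalised_scores))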
    • …