A Rejoinder on Energy versus Impact Indicators
Citation distributions are so skewed that using the mean or any other measure of central tendency is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. (Comment: Scientometrics, in press)
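To make the percentile logic concrete, here is a minimal Python sketch of a percentile-rank aggregation in the spirit of I3. The class edges and weights are illustrative assumptions (a common six-class percentile scheme), not the indicator's canonical definition, and the tie handling is deliberately simple.

```python
import numpy as np

def percentile_ranks(citations):
    """Percentile rank of each paper within its reference set
    (papers from the same subject category and publication year), 0-100 scale."""
    c = np.asarray(citations, dtype=float)
    # share of the reference set with citation counts at or below each paper's count
    return np.array([100.0 * np.mean(c <= x) for x in c])

def i3_like_score(citations,
                  class_edges=(50.0, 75.0, 90.0, 95.0, 99.0),
                  weights=(1, 2, 3, 4, 5, 6)):
    """Toy aggregation in the spirit of I3: weight each paper by the
    percentile rank class it falls into, then sum over a unit's papers."""
    ranks = percentile_ranks(citations)
    classes = np.digitize(ranks, class_edges)  # index 0..5 into weights
    return sum(weights[k] for k in classes)

print(i3_like_score([0, 1, 2, 3, 5, 8, 13, 40]))
```

Because the score is built from article-level percentile ranks, observed class frequencies can be compared against expected ones, which is the testing step the abstract refers to.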
The assessment of science: the relative merits of post-publication review, the impact factor, and the number of citations
The assessment of scientific publications is an integral part of the scientific process. Here we investigate three methods of assessing the merit of a scientific paper: subjective post-publication peer review, the number of citations gained by a paper, and the impact factor of the journal in which the article was published. We investigate these methods using two datasets in which subjective post-publication assessments of scientific publications have been made by experts. We find that there are moderate, but statistically significant, correlations between assessor scores, when two assessors have rated the same paper, and between assessor score and the number of citations a paper accrues. However, we show that assessor score depends strongly on the journal in which the paper is published, and that assessors tend to over-rate papers published in journals with high impact factors. If we control for this bias, we find that the correlations between assessor scores, and between assessor score and the number of citations, are weak, suggesting that scientists have little ability to judge either the intrinsic merit of a paper or its likely impact. We also show that the number of citations a paper receives is an extremely error-prone measure of scientific merit. Finally, we argue that the impact factor is likely to be a poor measure of merit, since it depends on subjective assessment. We conclude that all three measures of scientific merit considered here are poor; in particular, subjective assessments are an error-prone, biased, and expensive method by which to assess merit. We argue that the impact factor may be the most satisfactory of the methods we have considered, since it is a form of pre-publication review. However, we emphasise that it is likely to be a very error-prone measure of merit that is qualitative, not quantitative.
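The "controlling for journal bias" step can be illustrated with a small sketch: residualize both assessors' scores on (log) journal impact factor and re-correlate. The data below are simulated purely for illustration, and this generic partial-correlation approach is an assumption, not the authors' exact analysis.

```python
import numpy as np
from scipy import stats

def residualize(y, x):
    """Residuals of y after removing a linear effect of x --
    one simple way to 'control for' the journal a paper appeared in."""
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (intercept + slope * x)

# Hypothetical data: two assessors who are influenced mainly by journal prestige.
rng = np.random.default_rng(0)
log_jif = np.log(rng.lognormal(1.0, 0.5, 200))
score_a = 2.0 * log_jif + rng.normal(0.0, 1.0, 200)
score_b = 2.0 * log_jif + rng.normal(0.0, 1.0, 200)

raw_r, _ = stats.spearmanr(score_a, score_b)
ctl_r, _ = stats.spearmanr(residualize(score_a, log_jif),
                           residualize(score_b, log_jif))
print(f"raw r = {raw_r:.2f}, after controlling for JIF r = {ctl_r:.2f}")
```

In this simulation the raw correlation is driven almost entirely by the shared journal effect, so it collapses once the JIF is partialled out, which mirrors the pattern the abstract describes.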
Bibliometric data in clinical cardiology revisited. The case of 37 Dutch professors
In this paper, we assess the bibliometric parameters of 37 Dutch professors in clinical cardiology. These are the Hirsch index (h-index) based on all papers, the h-index based on first-authored papers, the number of papers, the number of citations, and the citations per paper. A top 10 for each of the five parameters was compiled. In theory, the same 10 professors might appear in each of these top 10s; alternatively, each of the 37 professors under assessment could appear one or more times. In practice, we found 22 of these 37 professors in the five top 10s. Thus, there is no golden parameter. In addition, there is too much inhomogeneity in citation characteristics, even within a relatively homogeneous group of clinical cardiologists. Therefore, citation analysis should be applied with great care in science policy. This is even more important when different fields of medicine are compared in university medical centres. It may be possible to develop better parameters in the future, but the present ones are simply not good enough. We also observed a quite remarkable explosion of publications per author, which, paradoxical as it may sound, should probably not be interpreted as an increase in the productivity of scientists, but as the effect of an increase in the number of co-authors and the strategic effect of networks.
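As a reference for the parameters compared here, a minimal Python implementation of the h-index; the definition itself is standard (the largest h such that h of an author's papers have at least h citations each):

```python
def h_index(citations):
    """Hirsch index: the largest h such that h papers have at least h citations each."""
    for h, c in enumerate(sorted(citations, reverse=True), start=1):
        if c < h:
            return h - 1
    return len(citations)

# Five papers cited 10, 8, 5, 4 and 3 times give h = 4.
print(h_index([10, 8, 5, 4, 3]))
# The first-author variant simply restricts the input to first-authored papers.
```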
The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals
In our chapter we address the statistical analysis of percentiles: How should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already widely used as a standard to evaluate an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method for bibliometrics to normalize citations of publications in terms of the subject category and the publication year and, unlike the mean-based indicators (the relative citation rates), percentiles are scarcely affected by skewed distributions of citations. The percentile of a certain publication provides information about the citation impact this publication has achieved in comparison to other similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
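A minimal sketch of the effect-size analysis the chapter recommends: Cohen's d for the percentile scores of two institutions, with a confidence interval. The sample data are invented, and the percentile bootstrap used here is just one of several interval constructions (Cumming (2012) also gives analytic ones).

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def bootstrap_ci(a, b, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for Cohen's d."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rng = np.random.default_rng(seed)
    ds = [cohens_d(rng.choice(a, size=len(a)), rng.choice(b, size=len(b)))
          for _ in range(n_boot)]
    return np.percentile(ds, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical citation-percentile scores for publications from two institutions.
inst_a = [55, 62, 71, 80, 90, 48, 66, 73]
inst_b = [40, 52, 58, 61, 70, 45, 50, 56]
print(cohens_d(inst_a, inst_b), bootstrap_ci(inst_a, inst_b))
```

Reporting the d value with its interval, rather than a bare p-value, is exactly the shift in emphasis the APA guidelines cited above call for.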
Impact factor 2013 of the Netherlands Heart Journal surpasses 2.0
The impact factor of the Netherlands Heart Journal was stable at about 1.4 between 2009 and 2012. In 2013 it broke through the 2.0 barrier for the first time.
Differences in citation frequency of clinical and basic science papers in cardiovascular research
In this article, a critical analysis is performed on differences in citation frequency between basic and clinical cardiovascular papers. It appears that the latter are cited at about 40% higher frequency. The differences between the largest numbers of citations of the most cited papers are even larger. It is also demonstrated that the groups of clinical and basic cardiovascular papers are themselves heterogeneous with respect to citation frequency. It is concluded that none of the existing citation indicators accounts for these differences. At this moment, these indicators should not be used for quality assessment of individual scientists or of scientific niches with small numbers of scientists.
Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors
Background:
Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF) alone and combined with several other characteristics.
Methods:
We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate and multiple regression analysis.
Results:
The mean number of citations per review over four years was 26.5 (SD ±29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD ±4.2). We found that 17% of the reviews accounted for 50% of the total citations, and 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215). In the regression analysis, the JIF alone predicted over half of the variation in citations. Some reviews published in the highest JIF quartile (≥5.16) nevertheless received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (≤2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations.
Conclusions:
The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well cited, and others in higher-JIF journals received relatively few citations; hence the JIF did not accurately represent the number of citations to individual systematic reviews.
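To illustrate the kind of "variance explained" figure reported here, a small sketch regressing log-transformed citation counts on JIF and reading off R². The numbers below are invented, and the log transform is an assumption made because citation distributions are skewed; it is not necessarily the transformation used in the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical data: JIF of the publishing journal and 4-year citation counts.
jif   = np.array([1.2, 2.1, 2.0, 4.3, 5.2, 8.9, 3.3, 6.1, 2.5, 7.4])
cites = np.array([4,   12,  8,   30,  26,  80,  15,  41,  9,   55])

res = stats.linregress(jif, np.log1p(cites))
print(f"R^2 = {res.rvalue ** 2:.2f}")  # share of variation in (log) citations predicted by JIF
```

An R² near 0.5 would correspond to the "over half of the variation" finding, while the skew and quartile crossovers described above explain why individual reviews can still deviate widely from the JIF-based prediction.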
