
    Bias in the journal impact factor

    The ISI journal impact factor (JIF) is based on a sample that may represent half the whole-of-life citations to some journals, but a small fraction (<10%) of the citations accruing to other journals. This disproportionate sampling means that the JIF provides a misleading indication of the true impact of journals, biased in favour of journals that have a rapid rather than a prolonged impact. Many journals exhibit a consistent pattern of citation accrual from year to year, so it may be possible to adjust the JIF to provide a more reliable indication of a journal's impact. Comment: 9 pages, 8 figures; one reference corrected.
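    For illustration, here is a minimal sketch of one such adjustment (the abstract does not specify the paper's actual method, so this is an assumption): scale the observed JIF by the inverse of the fraction of a journal's lifetime citations that falls inside the two-year JIF window, estimated from its historical citation-age profile.

```python
# Illustrative sketch (not necessarily the paper's method): rescale a journal's
# JIF by the share of lifetime citations that the two-year JIF window captures.

def whole_of_life_adjusted_jif(jif: float, citations_by_age: list[int]) -> float:
    """citations_by_age[k] = citations received k years after publication,
    summed over a cohort of the journal's older papers."""
    total = sum(citations_by_age)
    if total == 0:
        return 0.0
    # The standard JIF counts citations accruing in the first two years.
    window_share = sum(citations_by_age[:2]) / total
    return jif / window_share if window_share > 0 else float("inf")

# Example: a slow-burning journal receiving only 20% of its lifetime
# citations inside the JIF window (made-up citation-age profile).
print(whole_of_life_adjusted_jif(2.0, [30, 70, 120, 140, 90, 50]))  # -> 10.0
```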

    Rescaling citations of publications in physics

    We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers. Comment: 8 pages, 10 figures, 1 table.
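    A minimal sketch of this rescaling, assuming a table of (field, year, citations) records: the rescaled score of a paper is c/c0, where c0 is the average citation count of papers from the same field and year. (The paper additionally suggests dividing by the number of authors, which would be one extra field here.)

```python
# Sketch of rescaling citation counts by the field/year average.
from collections import defaultdict

def rescale(papers):
    """papers: list of dicts with keys 'field', 'year', 'citations'.
    Adds a 'c_rescaled' key equal to citations / field-year average."""
    totals, counts = defaultdict(float), defaultdict(int)
    for p in papers:
        key = (p["field"], p["year"])
        totals[key] += p["citations"]
        counts[key] += 1
    for p in papers:
        key = (p["field"], p["year"])
        c0 = totals[key] / counts[key]  # average citations for that field/year
        p["c_rescaled"] = p["citations"] / c0 if c0 else 0.0
    return papers

papers = [
    {"field": "cond-mat", "year": 2000, "citations": 40},
    {"field": "cond-mat", "year": 2000, "citations": 10},
    {"field": "hep-th", "year": 2005, "citations": 5},
]
for p in rescale(papers):
    print(p["field"], p["year"], p["c_rescaled"])
```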

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the authors, NSP deserves much attention for its great simplicity and immediate meaning (equivalent to those of the h-index), while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties and a comparison with those of the h-index. Particularly interesting is the examination of the success-index scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, (4) scientific journals, etc.
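    A hedged sketch of the counting step: assuming each paper i comes with a comparison threshold CT_i (the paper constructs these from field- and year-dependent citation statistics; the values below are made up), the success-index is the number of papers whose citations reach their threshold.

```python
# Sketch of the success-index counting rule: a paper is "successful" when its
# citations meet or exceed its own comparison threshold CT_i (assumed given).

def success_index(citations: list[int], thresholds: list[float]) -> int:
    """Number of papers with citations >= their comparison threshold."""
    return sum(c >= t for c, t in zip(citations, thresholds))

# Example: 3 of 5 papers clear their thresholds, so success-index = 3.
print(success_index([12, 3, 25, 0, 8], [10.0, 5.0, 20.0, 4.0, 8.0]))
```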

    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already used widely as a standard to evaluate an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method for bibliometrics to normalize citations of publications in terms of the subject category and the publication year, and, unlike mean-based indicators (the relative citation rates), percentiles are scarcely affected by skewed distributions of citations. The percentile of a certain publication provides information about the citation impact this publication has achieved in comparison to other similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
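    A sketch of such an analysis, using made-up institutional percentile data: Cohen's d for the difference between two institutions' citation percentiles, plus a normal-approximation 95% confidence interval for the mean difference.

```python
# Effect size (Cohen's d) and 95% CI for the difference in mean citation
# percentiles between two institutions (illustrative data only).
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                  / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def ci_mean_diff(a, b, z=1.96):
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    diff = mean(a) - mean(b)
    return diff - z * se, diff + z * se

inst_a = [88, 75, 91, 60, 83, 79, 95, 70]  # percentiles, institution A's papers
inst_b = [55, 62, 48, 71, 66, 59, 75, 50]  # percentiles, institution B's papers
print("Cohen's d:", round(cohens_d(inst_a, inst_b), 2))
print("95% CI for mean difference:", ci_mean_diff(inst_a, inst_b))
```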

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the hundred percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated. Comment: Scientometrics, in press.
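    A minimal sketch of a percentile-based aggregate in the spirit of I3 (the paper's exact rank-class weighting may differ): each paper receives a percentile rank within its reference set, and the ranks are summed over the portfolio.

```python
# Percentile-based aggregation sketch: percentile ranks are computed against a
# reference set and summed, instead of averaging raw (skewed) citation counts.

def percentile_rank(c: int, reference: list[int]) -> float:
    """Share of the reference set cited less than c (ties counted half)."""
    below = sum(r < c for r in reference)
    ties = sum(r == c for r in reference)
    return 100.0 * (below + 0.5 * ties) / len(reference)

reference = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]  # citations in the reference set
portfolio = [5, 21, 2]                          # an institution's papers
i3_like = sum(percentile_rank(c, reference) for c in portfolio)
print(round(i3_like, 1))  # 55.0 + 85.0 + 35.0 = 175.0
```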

    Universality of Performance Indicators based on Citation and Reference Counts

    We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers the distribution of these relative indicators is well fitted by a lognormal with a variance of around 1.3, which is consistent with the results of Radicchi, Fortunato, and Castellano (2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures. Comment: 15 pages, 14 figures, 11 pages of supplementary material. Submitted to Scientometrics.
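    A sketch of the log-averaging step, with made-up numbers: given the field/year average c0, the mean of log(c/c0) over a group's papers gives a tail-robust group score that can be compared across groups.

```python
# Mean log relative citation count: averaging log(c / c0) tames the long tail
# of the citation distribution (c0 = field/year average, assumed known).
from math import log

def mean_log_relative(citations: list[int], c0: float) -> float:
    """Mean of log(c / c0) over papers with at least one citation."""
    cited = [c for c in citations if c > 0]  # log is undefined at zero
    return sum(log(c / c0) for c in cited) / len(cited)

# Two toy departments, both normalized by the same field/year average c0 = 20.
print(mean_log_relative([3, 8, 15, 40, 120], c0=20.0))
print(mean_log_relative([1, 2, 5, 9, 400], c0=20.0))
```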

    A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011

    In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change its old "crown" indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with either of the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new "crown" indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.
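    A sketch of the correlation test at issue, with toy group-level data (the actual analysis uses Van Raan's published values): Spearman rank correlation between peer-review ratings and a citation indicator across research groups.

```python
# Spearman rank correlation between peer ratings and a citation indicator
# (toy, tie-free data; a full analysis would use a tie-aware implementation).

def spearman(x, y):
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

peer_rating = [3.5, 5.0, 4.0, 2.0, 4.5, 3.0, 2.5]       # peer-review scores
cpp_fcsm    = [0.9, 1.4, 1.1, 0.8, 1.0, 1.3, 1.2]       # citation indicator
print(round(spearman(peer_rating, cpp_fcsm), 2))         # -> 0.43
```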

    Articles by Latin American Authors in Prestigious Journals Have Fewer Citations

    Background: The journal impact factor (IF) is generally accepted to be a good measure of the relevance/quality of the articles a journal publishes. In spite of an apparently homogeneous peer-review process for a given journal, we hypothesize that author affiliation with developing Latin American (LA) countries affects the IF of a journal detrimentally. Methodology/Principal Findings: Seven prestigious international journals, one multidisciplinary and six serving specific branches of science, were examined in terms of their IF in the Web of Science. Two subsets of each journal were then selected to evaluate the influence of author affiliation on the IF: contributions (i) with authorship from four Latin American countries (Argentina, Brazil, Chile and Mexico) and (ii) with authorship from five developed countries (England, France, Germany, Japan and USA). Both subsets were further subdivided into articles with authorship from one country only and collaborative articles with authorship from other countries. Articles from the five developed countries had IFs close to the overall IF of the journals, and the influence of collaboration on this value was minor. In the case of LA articles, the effect of collaboration (virtually all with developed countries) was significant: the IFs for non-collaborative articles averaged 66% of the overall IF of the journals, whereas collaboration raised the IFs to values close to the overall IF. Conclusion/Significance: The study shows a significantly lower IF in the subsets of non-collaborative LA articles, and thus that affiliation of authors with non-developed LA countries does affect the IF of a journal detrimentally. There are no data to indicate whether the lower IFs of LA articles were due to inherent inferior quality/relevance or to a psycho-social trend towards under-citation of articles from these countries. Further study is required, since a foreseeable consequence of this trend is that it may stimulate strategies by editors to turn down articles that tend to be under-cited.
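    A sketch of the subset comparison, with illustrative numbers: the "impact factor" of a subset of articles expressed as a percentage of the journal's overall IF, mirroring the 66% figure reported above.

```python
# Subset IF as a share of the journal's overall IF (all numbers illustrative).

def subset_if_share(subset_citations: list[int], journal_if: float) -> float:
    """Mean citations of the subset, as a percentage of the overall IF."""
    subset_if = sum(subset_citations) / len(subset_citations)
    return 100.0 * subset_if / journal_if

la_solo = [4, 2, 5, 3, 1, 5]   # citations to non-collaborative LA articles
print(f"{subset_if_share(la_solo, journal_if=5.0):.0f}% of overall IF")  # 67%
```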

    Metrics to evaluate research performance in academic institutions: A critique of ERA 2010 as applied in forestry and the indirect H2 index as a possible alternative

    Excellence in Research for Australia (ERA) is an attempt by the Australian Research Council to rate Australian universities on a 5-point scale within 180 Fields of Research using metrics and peer evaluation by an evaluation committee. Some of the bibliometric data contributing to this ranking suffer statistical issues associated with skewed distributions. Other data are standardised year by year, placing undue emphasis on the most recent publications, which may not yet have reliable citation patterns. The bibliometric data offered to the evaluation committees are extensive but lack effective syntheses such as the h-index and its variants. The indirect H2 index is objective, can be computed automatically and efficiently, is resistant to manipulation, and is a good indicator of impact that could assist the ERA evaluation committees and similar evaluations internationally. Comment: 19 pages, 6 figures, 7 tables, appendices.
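    A hedged sketch of the h(2)-style counting rule on which the indirect H2 index plausibly builds: the largest h such that h papers have at least h² citations each. (The "indirect" variant applies this to second-generation citation counts, assumed here to be supplied in the same list form.)

```python
# Kosmulski-style h(2) index: largest h such that h papers each have
# at least h^2 citations (hedged reading of the indirect H2's core rule).

def h2_index(citations: list[int]) -> int:
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= (h + 1) ** 2:
        h += 1
    return h

print(h2_index([100, 50, 30, 9, 4, 1]))  # -> 3 (three papers with >= 9 cites)
```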

    Differences in citation frequency of clinical and basic science papers in cardiovascular research

    In this article, a critical analysis is performed of differences in citation frequency between basic and clinical cardiovascular papers. It appears that clinical papers are cited at about 40% higher frequency. The differences between the citation counts of the most cited papers are even larger. It is also demonstrated that the groups of clinical and basic cardiovascular papers are themselves heterogeneous with respect to citation frequency. It is concluded that none of the existing citation indicators accounts for these differences, and that at this moment these indicators should not be used for quality assessment of individual scientists or of scientific niches with small numbers of scientists.