371 research outputs found

    Impact factors of dermatological journals for 1991 – 2000

    BACKGROUND: The impact factors of scientific journals are interesting but not unproblematic. It is speculated that the number of journals in which citations can be made correlates with the impact factors in any given speciality. METHODS: Using the Journal Citation Report (JCR) for 1997, a bibliometric analysis was made to assess the correlation between the number of journals available in different fields of clinical medicine and the top impact factor. A detailed study was made of dermatological journals listed in the JCR 1991–2000 to assess the relevance of this general survey. RESULTS: Using the 1997 JCR definitions of speciality journals, a significant linear correlation was found between the number of journals in a given field and the top impact factor of that field (r_s = 0.612, p < 0.05). Studying the trend for dermatological journals from 1991 to 2000, a similar pattern was found. Significant correlations were also found between the total number of journals and the mean impact factor (r_s = 0.793, p = 0.006), between the total number of journals and the top impact factor (r_s = 0.759, p = 0.011), and between the mean and the top impact factor (r_s = 0.827, p = 0.003). CONCLUSIONS: The observations suggest that the number of journals available predicts the top impact factor; for dermatology journals, both the top and the mean impact factor are predicted. This is in good agreement with theoretical expectations, as more journals make more print-space available for more papers containing citations. It is suggested that new journals in dermatology should be encouraged, as this will most likely increase the impact factor of dermatological journals generally.
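
    A minimal sketch of the kind of correlation analysis this abstract describes, using Spearman's rank correlation as reported there. The field names and counts below are invented for illustration; they are not JCR data.

    ```python
    from scipy.stats import spearmanr

    # (number of journals in a field, top impact factor of that field) - hypothetical
    fields = {
        "dermatology": (38, 3.2),
        "rheumatology": (21, 2.9),
        "cardiology": (64, 7.1),
        "oncology": (102, 12.4),
    }

    n_journals = [n for n, _ in fields.values()]
    top_if = [top for _, top in fields.values()]

    r_s, p = spearmanr(n_journals, top_if)  # Spearman's rank correlation, as in the study
    print(f"r_s = {r_s:.3f}, p = {p:.3f}")
    ```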

    Discussing some basic critique on Journal Impact Factors: revision of earlier comments

    In this study, the validity of the argument against the length of the citation window applied in Journal Impact Factor calculations is critically re-analyzed. While previous studies argued against the relatively short citation window of 1–2 years, this study shows that the relatively short-term citation impact measured in the window underlying the Journal Impact Factor is a good predictor of the citation impact of the journals in subsequent years. Possible exceptions to this observation relate to journals with relatively low numbers of publications, and to the citation impact of publications in their year of publication. The study focuses on five Journal Subject Categories from the sciences and social sciences, and on normal articles published in these journals in the two years 2000 and 2004.
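
    For readers unfamiliar with the window under discussion, here is a minimal sketch of the standard two-year Journal Impact Factor computation; the counts are hypothetical.

    ```python
    def two_year_jif(cites_in_year: int, citable_items: int) -> float:
        """JIF for year Y: citations received in Y to items published in
        Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
        return cites_in_year / citable_items

    # hypothetical journal: 900 citations in 2004 to its 2002-2003 papers,
    # of which there were 300 citable items
    print(two_year_jif(900, 300))  # -> 3.0
    ```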

    Articles by Latin American Authors in Prestigious Journals Have Fewer Citations

    Background: The journal impact factor (IF) is generally accepted to be a good measurement of the relevance/quality of articles that a journal publishes. In spite of an apparently homogeneous peer-review process for a given journal, we hypothesize that the country affiliation of authors from developing Latin American (LA) countries affects the IF of a journal detrimentally. Methodology/Principal Findings: Seven prestigious international journals, one multidisciplinary journal and six serving specific branches of science, were examined in terms of their IF in the Web of Science. Two subsets of each journal were then selected to evaluate the influence of authors' affiliations on the IF. They comprised contributions (i) with authorship from four Latin American (LA) countries (Argentina, Brazil, Chile and Mexico) and (ii) with authorship from five developed countries (England, France, Germany, Japan and USA). Both subsets were further subdivided into two groups: articles with authorship from one country only and collaborative articles with authorship from other countries. Articles from the five developed countries had IFs close to the overall IF of the journals, and the influence of collaboration on this value was minor. In the case of LA articles, the effect of collaboration (virtually all with developed countries) was significant. The IFs for non-collaborative articles averaged 66% of the overall IF of the journals, whereas the articles in collaboration raised the IFs to values close to the overall IF. Conclusion/Significance: The study shows a significantly lower IF in the group of non-collaborative LA articles, and thus that the country affiliation of authors from non-developed LA countries does affect the IF of a journal detrimentally. There are no data to indicate whether the lower IFs of LA articles were due to inherent inferior quality/relevance or to a psycho-social trend towards under-citation of articles from these countries. However, further study is required, since there are foreseeable consequences of this trend, as it may stimulate strategies by editors to turn down articles that tend to be under-cited.
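
    A hedged sketch of the subset comparison described above, treating the "IF" of a subset as its mean citation count over the JIF window and comparing it with the journal's overall IF. All figures are invented to echo the reported ~66% ratio; they are not taken from the study.

    ```python
    def mean_citations(cites: list[int]) -> float:
        return sum(cites) / len(cites)

    overall_if = 10.0                    # hypothetical overall journal IF
    non_collab_la = [5, 7, 6, 8, 7]      # single-country LA articles (invented counts)
    collab_la = [9, 11, 10, 9, 12]       # LA articles with developed-country co-authors

    print(mean_citations(non_collab_la) / overall_if)  # ~0.66, i.e. ~66% of overall IF
    print(mean_citations(collab_la) / overall_if)      # ~1.02, close to the overall IF
    ```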

    The success-index: an alternative approach to the h-index for evaluating an individual's research output

    Among the most recent bibliometric indicators for normalizing the differences among fields of science in terms of citation behaviour, Kosmulski (J Informetr 5(3):481-485, 2011) proposed the NSP (number of successful papers) index. According to the author, NSP deserves much attention for its great simplicity and immediate meaning, equivalent to those of the h-index, while it has the disadvantage of being prone to manipulation and not very efficient in terms of statistical significance. In the first part of the paper, we introduce the success-index, aimed at reducing the NSP-index's limitations, although requiring more computing effort. Next, we present a detailed analysis of the success-index from the point of view of its operational properties, and a comparison with those of the h-index. Particularly interesting is the examination of the success-index's scale of measurement, which is much richer than that of the h-index. This makes the success-index much more versatile for different types of analysis, e.g., (cross-field) comparisons of the scientific output of (1) individual researchers, (2) researchers with different seniority, (3) research institutions of different size, and (4) scientific journals.
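
    Since the abstract leans on the mechanics of the h-index and the NSP count, a small sketch may help. The h-index part is standard; the NSP part assumes Kosmulski's criterion that a paper is "successful" when it receives more citations than the number of references it cites, which is our reading and is labeled as such in the code. All counts are invented.

    ```python
    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

    def nsp_count(citations: list[int], references: list[int]) -> int:
        """ASSUMPTION: a paper is 'successful' if it received more citations
        than the number of references it cites (our reading of Kosmulski 2011)."""
        return sum(1 for c, r in zip(citations, references) if c > r)

    cites = [25, 12, 8, 8, 4, 1, 0]     # invented citation counts, one per paper
    refs = [30, 10, 20, 5, 40, 2, 15]   # invented reference-list lengths

    print(h_index(cites))          # -> 4
    print(nsp_count(cites, refs))  # -> 2
    ```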

    A comment to the paper by Waltman et al., Scientometrics, 87, 467–481, 2011

    In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to change their old "crown" indicator in citation analysis into a new one. Waltman et al. (Scientometrics 87:467–481, 2011a) argue that this change does not affect rankings at various aggregated levels. However, CWTS data are not publicly available for testing and criticism. Therefore, we comment by using previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with the two parameters developed by CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new "crown" indicators, one can expect that the lack of correlation with the peer-review-based quality indicator applies equally to the newly developed ones.

    ResearchGate versus Google Scholar: Which finds more early citations?

    ResearchGate has launched its own citation index by extracting citations from documents uploaded to the site and reporting citation counts on article profile pages. Since authors may upload preprints to ResearchGate, it may use these to provide early impact evidence for new papers. This article assesses whether the number of citations found for recent articles is comparable to that of other citation indexes, using 2675 recently published library and information science articles. The results show that in March 2017, ResearchGate found fewer citations than Google Scholar did, but more than both Web of Science and Scopus. This held true for the dataset overall and for the six largest journals in it. ResearchGate correlated most strongly with Google Scholar citations, suggesting that ResearchGate is not predominantly tapping a fundamentally different source of data from Google Scholar. Nevertheless, preprint sharing in ResearchGate is substantial enough for authors to take seriously.

    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already widely used as a standard to evaluate an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method for bibliometrics to normalize the citations of publications in terms of the subject category and the publication year, and, unlike the mean-based indicators (the relative citation rates), percentiles are scarcely affected by skewed distributions of citations. The percentile of a certain publication provides information about the citation impact this publication has achieved in comparison to other similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
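
    A minimal sketch, with invented citation counts, of the approach the chapter recommends: convert citation counts to percentile ranks (in real analyses, within the same subject category and publication year), then report an effect size (Cohen's d) and a confidence interval rather than a bare significance test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # invented, skewed citation counts for the papers of two institutions;
    # a real analysis would rank within one subject category and year
    inst_a = rng.negative_binomial(1, 0.15, 200)
    inst_b = rng.negative_binomial(1, 0.10, 200)

    # percentile rank of every paper within the pooled reference set
    pooled = np.concatenate([inst_a, inst_b])
    pct = 100 * stats.rankdata(pooled, method="average") / len(pooled)
    pct_a, pct_b = pct[:200], pct[200:]

    # effect size (Cohen's d) and a 95% CI for the mean percentile difference
    diff = pct_b.mean() - pct_a.mean()
    pooled_sd = np.sqrt((pct_a.var(ddof=1) + pct_b.var(ddof=1)) / 2)
    d = diff / pooled_sd
    se = np.sqrt(pct_a.var(ddof=1) / len(pct_a) + pct_b.var(ddof=1) / len(pct_b))
    print(f"Cohen's d = {d:.2f}; difference = {diff:.1f} "
          f"(95% CI {diff - 1.96 * se:.1f} to {diff + 1.96 * se:.1f})")
    ```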

    A Rejoinder on Energy versus Impact Indicators

    Citation distributions are so skewed that using the mean or any other central-tendency measure is ill-advised. Unlike G. Prathap's scalar measures (Energy, Exergy, and Entropy, or EEE), the Integrated Impact Indicator (I3) is based on non-parametric statistics using the (100) percentiles of the distribution. Observed values can be tested against expected ones; impact can be qualified at the article level and then aggregated.
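
    A hedged sketch of a percentile-based indicator in the spirit of I3 as described above: each paper is weighted by its percentile in a reference distribution, and the weights are summed rather than averaged. The reference set and paper counts are invented, and percentileofscore with kind="weak" is only one of several reasonable percentile definitions.

    ```python
    import numpy as np
    from scipy import stats

    # invented citation counts of all papers in one subject category and year
    reference_set = np.array([0, 0, 1, 1, 2, 3, 5, 8, 13, 40])
    unit_papers = np.array([2, 8, 40])  # one unit's papers, also invented

    # percentile of each paper within the reference distribution
    pct = np.array([stats.percentileofscore(reference_set, c, kind="weak")
                    for c in unit_papers])
    i3 = pct.sum()  # percentile weights are summed, not averaged
    print(pct, i3)  # -> [ 50.  80. 100.] 230.0
    ```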

    Differences in citation frequency of clinical and basic science papers in cardiovascular research

    In this article, a critical analysis is performed on differences in citation frequency between basic and clinical cardiovascular papers. It appears that the latter are cited at about 40% higher frequency. The differences between the citation counts of the most cited papers are even larger. It is also demonstrated that the groups of clinical and basic cardiovascular papers are themselves heterogeneous with respect to citation frequency. It is concluded that none of the existing citation indicators accounts for these differences. At this moment, these indicators should not be used for quality assessment of individual scientists or of scientific niches with small numbers of scientists.