A systematic empirical comparison of different approaches for normalizing citation impact indicators
We address the question of how citation-based bibliometric indicators can best
be normalized to ensure fair comparisons between publications from different
scientific fields and different years. In a systematic large-scale empirical
analysis, we compare a traditional normalization approach based on a field
classification system with three source normalization approaches. We pay
special attention to the selection of the publications included in the
analysis. Publications in national scientific journals, popular scientific
magazines, and trade magazines are not included. Unlike earlier studies, we use
algorithmically constructed classification systems to evaluate the different
normalization approaches. Our analysis shows that a source normalization
approach based on the recently introduced idea of fractional citation counting
does not perform well. Two other source normalization approaches generally
outperform the classification-system-based normalization approach that we
study. Our analysis therefore offers considerable support for the use of
source-normalized bibliometric indicators.
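The fractional citation counting idea mentioned above can be sketched as follows. In one common formulation (which may differ in detail from the exact variant the study evaluates), each citation a publication receives is weighted by one divided by the number of references in the citing publication, so citations from reference-dense publications count for less:

```python
def fractional_citation_count(citing_reference_counts):
    """Fractional citation count of one publication.

    citing_reference_counts: list with, for each publication that cites
    the focal publication, the number of references that citing
    publication contains. Each citation is weighted 1 / (that number).
    """
    return sum(1.0 / n for n in citing_reference_counts if n > 0)

# A paper cited by three publications containing 10, 20, and 40 references:
score = fractional_citation_count([10, 20, 40])
# 1/10 + 1/20 + 1/40 = 0.175, versus a full count of 3
```

This is only an illustrative sketch of the weighting principle; the study's actual indicator involves additional choices (e.g. which references count toward the denominator).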
A review of the literature on citation impact indicators
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
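The field normalization discussed in the review is often implemented as a ratio of observed to expected citations: a publication's citation count divided by the mean citation count of all publications in the same field and publication year. The sketch below shows this standard construction under hypothetical data; it is one well-known normalization scheme, not necessarily the specific indicator any one study recommends:

```python
from collections import defaultdict

def normalized_citation_scores(pubs):
    """Field- and year-normalized citation scores.

    pubs: list of dicts with keys "field", "year", "citations".
    Each score is the publication's citation count divided by the mean
    citation count of publications sharing its field and year.
    """
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for p in pubs:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1
    scores = []
    for p in pubs:
        s, n = totals[(p["field"], p["year"])]
        mean = s / n
        scores.append(p["citations"] / mean if mean > 0 else 0.0)
    return scores

# Hypothetical example: two LIS papers (mean 6 citations) and one physics paper.
pubs = [
    {"field": "LIS", "year": 2012, "citations": 10},
    {"field": "LIS", "year": 2012, "citations": 2},
    {"field": "Physics", "year": 2012, "citations": 30},
]
scores = normalized_citation_scores(pubs)
# The physics paper scores exactly 1.0: it matches its field's mean.
```

A score above 1 means the publication is cited more than average for its field and year, which is what makes cross-field comparisons meaningful.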
Large-Scale Analysis of the Accuracy of the Journal Classification Systems of Web of Science and Scopus
Journal classification systems play an important role in bibliometric
analyses. The two most important bibliographic databases, Web of Science and
Scopus, each provide a journal classification system. However, no study has
systematically investigated the accuracy of these classification systems. To
examine and compare the accuracy of journal classification systems, we define
two criteria on the basis of direct citation relations between journals and
categories. We use Criterion I to select journals that have weak connections
with their assigned categories, and we use Criterion II to identify journals
that are not assigned to categories with which they have strong connections. If
a journal satisfies either of the two criteria, we conclude that its assignment
to categories may be questionable. Accordingly, we identify all journals with
questionable classifications in Web of Science and Scopus. Furthermore, we
perform a more in-depth analysis for the field of Library and Information
Science to assess whether our proposed criteria are appropriate and whether
they yield meaningful results. It turns out that, according to our
citation-based criteria, Web of Science performs significantly better than
Scopus in terms of the accuracy of its journal classification system.
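One way to read the two criteria above is as threshold tests on a journal's direct citation traffic with each category. The sketch below is a hypothetical operationalization: the share measure, the thresholds, and their values are illustrative assumptions, not the paper's actual definitions:

```python
def questionable_assignment(citation_shares, assigned, weak=0.05, strong=0.25):
    """Flag possibly questionable category assignments for one journal.

    citation_shares: dict mapping category -> fraction of the journal's
    direct citation traffic exchanged with that category (assumed measure).
    assigned: set of categories the database assigns the journal to.
    weak, strong: hypothetical thresholds for Criteria I and II.
    Returns (criterion_i, criterion_ii).
    """
    # Criterion I: the journal has only a weak connection with an
    # assigned category.
    crit_i = any(citation_shares.get(c, 0.0) < weak for c in assigned)
    # Criterion II: the journal has a strong connection with a category
    # it is not assigned to.
    crit_ii = any(share > strong
                  for c, share in citation_shares.items() if c not in assigned)
    return crit_i, crit_ii

# Hypothetical journal: assigned to "LIS" but citing mostly "Computer Science".
flags = questionable_assignment({"LIS": 0.02, "Computer Science": 0.40},
                                assigned={"LIS"})
# Both criteria fire, so the assignment would be flagged as questionable.
```

If either flag is true, the journal's classification would be marked for the kind of in-depth inspection the abstract describes.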
Brief communication: Gender differences in publication and citation counts in librarianship and information science research
An analysis is presented of the publications by, and citations to, 57 male and 48 female academics in five departments of librarianship and information science. After taking account of differences in subject and in numbers of academics, it is shown that male academics publish significantly more papers on average than female academics, but that there is no significant difference in the numbers of citations to the published papers.