Scopus's Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations
Impact factors (and similar measures such as the Scimago Journal Rankings)
suffer from two problems: (i) citation behavior varies among fields of science
and therefore leads to systematic differences, and (ii) there are no statistics
to inform us whether differences are significant. The recently introduced SNIP
indicator of Scopus tries to remedy the first of these two problems, but a
number of normalization decisions are involved, which makes it impossible to
test for significance. Using fractional counting of citations, based on the
assumption that impact is proportionate to the number of references in the
citing documents, citations can be contextualized at the paper level and
aggregated impacts of sets can be tested for their significance. It can be
shown that the weighted impact of Annals of Mathematics (0.247) is not much
lower than that of Molecular Cell (0.386), despite a five-fold difference
between their impact factors (2.793 and 13.156, respectively).
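The fractional counting idea can be made concrete with a small sketch. The
Python snippet below uses invented reference counts and a hypothetical
fractional_impact helper; it illustrates the assumption stated above (each
citation weighted by one over the number of references in the citing
document), not the paper's actual computation.

def fractional_impact(citing_reference_counts, n_papers):
    """Weighted impact: each received citation counts 1/r, where r is the
    number of references in the citing document; the sum is divided by the
    number of papers the journal published."""
    weighted = sum(1.0 / r for r in citing_reference_counts)
    return weighted / n_papers

# Invented data: a mathematics journal cited 3 times by short-reference
# papers vs. a cell-biology journal cited 12 times by reference-rich papers.
math_refs = [12, 15, 20]
bio_refs = [45, 60, 55, 50, 40, 65, 70, 52, 48, 58, 44, 62]

print(len(math_refs), fractional_impact(math_refs, n_papers=1))  # 3, ~0.20
print(len(bio_refs), fractional_impact(bio_refs, n_papers=1))    # 12, ~0.23

Under raw counts the second journal receives four times as many citations,
but the fractionally weighted impacts are much closer, which is the effect
the abstract describes.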
Some modifications to the SNIP journal impact indicator
The SNIP (source normalized impact per paper) indicator is an indicator of
the citation impact of scientific journals. The indicator, introduced by Henk
Moed in 2010, is included in Elsevier's Scopus database. The SNIP indicator
uses a source normalized approach to correct for differences in citation
practices between scientific fields. The strength of this approach is that it
does not require a field classification system in which the boundaries of
fields are explicitly defined. In this paper, a number of modifications that
will be made to the SNIP indicator are explained, and the advantages of the
resulting revised SNIP indicator are pointed out. It is argued that the
original SNIP indicator has some counterintuitive properties, and it is shown
mathematically that the revised SNIP indicator does not have these properties.
Empirically, the differences between the original SNIP indicator and the
revised one turn out to be relatively small, although some systematic
differences can be observed. Relations with other source normalized indicators
proposed in the literature are discussed as well.
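For orientation, the general source-normalized idea behind SNIP can be
sketched as a journal's raw impact per paper divided by a citation potential
derived from reference-list lengths in its citing environment. The Python
sketch below uses invented numbers and simplified helpers (snip_like,
citation_potential are illustrative names); it deliberately omits the
refinements of the revised indicator, such as the restriction to active
references and the exact averaging choices.

def raw_impact_per_paper(citations_received, papers_published):
    return citations_received / papers_published

def citation_potential(reference_counts):
    # Average reference-list length in the citing environment: longer
    # reference lists imply a higher citation potential for the field.
    return sum(reference_counts) / len(reference_counts)

def snip_like(citations_received, papers_published, reference_counts):
    rip = raw_impact_per_paper(citations_received, papers_published)
    return rip / citation_potential(reference_counts)

# Hypothetical high-citation-density field vs. low-citation-density field.
print(snip_like(400, 100, [50, 60, 45, 55]))  # rip 4.0 -> ~0.076
print(snip_like(120, 100, [12, 15, 10, 18]))  # rip 1.2 -> ~0.087

The raw impacts differ by more than a factor of three, while the
source-normalized values are comparable, which is the intended effect of
correcting for citation practices without a field classification.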
The revised SNIP indicator of Elsevier's Scopus
The modified SNIP indicator of Elsevier, as recently explained by Waltman et
al. (2013) in this journal, solves some of the problems which Leydesdorff &
Opthof (2010 and 2011) indicated in relation to the original SNIP indicator
(Moed, 2010 and 2011). The use of an arithmetic average, however, remains
unfortunate in the case of scientometric distributions because these can be
extremely skewed (Seglen, 1992 and 1997). The new indicator cannot (or can
hardly) be reproduced independently when used for evaluation purposes, and
remains in this sense opaque from the perspective of evaluated units and
scholars.
Comment: Letter to the Editor of the Journal of Informetrics (2013; in press).
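A small numerical illustration of the skewness point, with invented citation
counts: the arithmetic mean of a highly skewed distribution can sit far above
what a typical paper achieves.

from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]   # hypothetical journal

print("mean impact:  ", mean(citations))    # 13.8, driven by one outlier
print("median impact:", median(citations))  # 2.0, closer to a typical paper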
A Review of Theory and Practice in Scientometrics
Scientometrics is the study of the quantitative aspects of the process of science as a communication system. It is centrally, but not only, concerned with the analysis of citations in the academic literature. In recent years it has come to play a major role in the measurement and evaluation of research performance. In this review we consider: the historical development of scientometrics, sources of citation data, citation metrics and the "laws" of scientometrics, normalisation, journal impact factors and other journal metrics, visualising and mapping science, evaluation and policy, and future developments.
Identifying Research Fields within Business and Management: A Journal Cross-Citation Analysis
A discipline such as business and management (B&M) is very broad and has many fields within it, ranging from fairly scientific ones such as management science or economics to softer ones such as information systems. There are at least three reasons why it is important to identify these sub-fields accurately. First, to give insight into the structure of the subject area and identify perhaps unrecognised commonalities; second, for the purpose of normalizing citation data, as it is well known that citation rates vary significantly between different disciplines; and third, because journal rankings and lists tend to split their classifications into different subjects (for example, the Association of Business Schools (ABS) list, which is a standard in the UK, has 22 different fields). Unfortunately, at the moment these are created in an ad hoc manner with no underlying rigour. The purpose of this paper is to identify possible sub-fields in B&M rigorously, based on actual citation patterns. We have examined 450 journals in B&M which are included in the ISI Web of Science (WoS) and analysed the cross-citation rates between them, enabling us to generate sets of coherent and consistent sub-fields that minimise the extent to which journals appear in several categories. Implications and limitations of the analysis are discussed.
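As a rough illustration of how sub-fields might be derived from a
journal-by-journal citation matrix (not the authors' actual procedure), the
following Python sketch clusters a toy cross-citation matrix by the
similarity of the journals' citation profiles; the journal names and counts
are invented.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

journals = ["Mgmt Science A", "Mgmt Science B", "Info Systems C", "Info Systems D"]

# cross_citations[i, j] = citations from journal i to journal j (toy values)
cross_citations = np.array([
    [0, 80, 5, 3],
    [75, 0, 4, 6],
    [6, 5, 0, 90],
    [4, 7, 85, 0],
], dtype=float)

# Normalize rows so profiles are comparable, then cluster by cosine distance.
profiles = cross_citations / cross_citations.sum(axis=1, keepdims=True)
tree = linkage(pdist(profiles, metric="cosine"), method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for journal, label in zip(journals, labels):
    print(label, journal)

With these toy numbers the two management-science journals and the two
information-systems journals fall into separate clusters, mirroring the kind
of sub-field structure the paper seeks in the real cross-citation data.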
A systematic empirical comparison of different approaches for normalizing citation impact indicators
We address the question of how citation-based bibliometric indicators can best
be normalized to ensure fair comparisons between publications from different
scientific fields and different years. In a systematic large-scale empirical
analysis, we compare a traditional normalization approach based on a field
classification system with three source normalization approaches. We pay
special attention to the selection of the publications included in the
analysis. Publications in national scientific journals, popular scientific
magazines, and trade magazines are not included. Unlike earlier studies, we use
algorithmically constructed classification systems to evaluate the different
normalization approaches. Our analysis shows that a source normalization
approach based on the recently introduced idea of fractional citation counting
does not perform well. Two other source normalization approaches generally
outperform the classification-system-based normalization approach that we
study. Our analysis therefore offers considerable support for the use of
source-normalized bibliometric indicators.
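The two families of approaches compared here can be contrasted in a few
lines. The Python sketch below uses invented numbers and simplified helpers
(field_normalized_score and source_normalized_score are illustrative names)
purely to show the difference: the first needs a field classification to
supply a reference value, while the second needs only the reference lists of
the citing publications.

# Classification-system-based normalization: divide a publication's citation
# count by the mean citation count of same-field, same-year publications.
def field_normalized_score(citations, field_mean_citations):
    return citations / field_mean_citations

# Source normalization via fractional counting: weight each incoming citation
# by 1 / (number of references in the citing publication).
def source_normalized_score(citing_reference_counts):
    return sum(1.0 / r for r in citing_reference_counts)

print(field_normalized_score(citations=10, field_mean_citations=4.0))  # 2.5
print(source_normalized_score([20, 35, 50, 15, 40]))                   # ~0.19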