144,061 research outputs found
A systematic empirical comparison of different approaches for normalizing citation impact indicators
We address the question of how citation-based bibliometric indicators can best
be normalized to ensure fair comparisons between publications from different
scientific fields and different years. In a systematic large-scale empirical
analysis, we compare a traditional normalization approach based on a field
classification system with three source normalization approaches. We pay
special attention to the selection of the publications included in the
analysis. Publications in national scientific journals, popular scientific
magazines, and trade magazines are not included. Unlike earlier studies, we use
algorithmically constructed classification systems to evaluate the different
normalization approaches. Our analysis shows that a source normalization
approach based on the recently introduced idea of fractional citation counting
does not perform well. Two other source normalization approaches generally
outperform the classification-system-based normalization approach that we
study. Our analysis therefore offers considerable support for the use of
source-normalized bibliometric indicators.
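The abstract does not define fractional citation counting, but in the bibliometric literature the idea is usually described as weighting each citation by the reciprocal of the number of references in the citing publication, so citations from reference-dense fields count for less. The sketch below illustrates that reading; the function name and numbers are illustrative assumptions, not code or data from the study.

```python
# Illustrative sketch (assumption: fractional counting weights each citation
# by 1 / number of references in the citing publication, as commonly
# described in the bibliometric literature).

def fractional_citation_count(citing_reference_counts):
    """Sum each citing publication's contribution of 1 / (its reference count)."""
    return sum(1.0 / n for n in citing_reference_counts if n > 0)

# A publication cited by three papers carrying 10, 20, and 40 references
# receives 1/10 + 1/20 + 1/40 = 0.175 fractional citations:
score = fractional_citation_count([10, 20, 40])
```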
Towards a new crown indicator: Some theoretical considerations
The crown indicator is a well-known bibliometric indicator of research
performance developed by our institute. The indicator aims to normalize
citation counts for differences among fields. We critically examine the
theoretical basis of the normalization mechanism applied in the crown
indicator. We also make a comparison with an alternative normalization
mechanism. The alternative mechanism turns out to have more satisfactory
properties than the mechanism applied in the crown indicator. In particular,
the alternative mechanism has a so-called consistency property. The mechanism
applied in the crown indicator lacks this important property. As a consequence
of our findings, we are currently moving towards a new crown indicator, which
relies on the alternative normalization mechanism.
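The abstract gives no formulas, but in the surrounding crown-indicator literature the two mechanisms it contrasts are usually described as a ratio of averages (total actual citations divided by total field-expected citations) versus an average of ratios (the mean of per-publication normalized scores). The sketch below illustrates that reading with made-up numbers; it is a hedged reconstruction, not the institute's actual implementation.

```python
# Two normalization mechanisms, as commonly described in the crown-indicator
# literature (a reconstruction from that literature, not this paper's code).

def crown_indicator(citations, expected):
    """Original mechanism: ratio of averages (sum of actual / sum of expected)."""
    return sum(citations) / sum(expected)

def mean_normalized_citation_score(citations, expected):
    """Alternative mechanism: average of per-publication citation ratios."""
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

citations = [10, 2, 6]  # hypothetical citation counts of three publications
expected = [5, 4, 2]    # hypothetical field-expected citation rates

ratio_of_averages = crown_indicator(citations, expected)            # 18/11
average_of_ratios = mean_normalized_citation_score(citations, expected)  # 5.5/3
```

The two mechanisms generally give different values on the same data (here roughly 1.64 versus 1.83), which is why the choice between them matters.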
A review of the literature on citation impact indicators
Citation impact indicators nowadays play an important role in research
evaluation, and consequently these indicators have received a lot of attention
in the bibliometric and scientometric literature. This paper provides an
in-depth review of the literature on citation impact indicators. First, an
overview is given of the literature on bibliographic databases that can be used
to calculate citation impact indicators (Web of Science, Scopus, and Google
Scholar). Next, selected topics in the literature on citation impact indicators
are reviewed in detail. The first topic is the selection of publications and
citations to be included in the calculation of citation impact indicators. The
second topic is the normalization of citation impact indicators, in particular
normalization for field differences. Counting methods for dealing with
co-authored publications are the third topic, and citation impact indicators
for journals are the last topic. The paper concludes by offering some
recommendations for future research.
The effect of public funding on research output: the New Zealand Marsden Fund
The Marsden Fund is the premier funding mechanism for blue-skies research in New Zealand. In 2014, $56 million was awarded to 101 research projects chosen from among 1222 applications from researchers at universities, Crown Research Institutes and independent research organizations. This funding mechanism is similar to those in other countries, such as the European Research Council.
This research measures the effect of funding receipt from the New Zealand Marsden Fund using a unique dataset of funded and unfunded proposals that includes the evaluation scores assigned to all proposals. This allows us to control statistically for potential bias driven by the Fund's efforts to fund projects that are expected to be successful, and also to measure the efficacy of the selection process itself. We find that Marsden funding does increase the scientific output of the funded researchers, but that there is no evidence that the final selection process is able to meaningfully predict the likely success of different proposals.
Comparing teachers' assessments and national test results – evidence from Sweden
This study compares results on national tests with teachers' assessments of student performance, using Swedish data on grade 9 students (16 years old). I examine whether there are systematic differences correlated with gender and ethnic background, that is, whether the relationship between school leaving certificates and national test results differs between girls and boys or between natives and non-natives. The results show that girls are more generously rewarded in teachers' assessments compared to test results in all three subjects studied. Non-native students are more generously rewarded in teachers' assessments compared to test results in two out of three subjects studied.
Keywords: school performance; gender; race
Universality of citation distributions revisited
Radicchi, Fortunato, and Castellano [arXiv:0806.0974, PNAS 105(45), 17268]
claim that, apart from a scaling factor, all fields of science are
characterized by the same citation distribution. We present a large-scale
validation study of this universality-of-citation-distributions claim. Our
analysis shows that claiming citation distributions to be universal for all
fields of science is not warranted. Although many fields indeed seem to have
fairly similar citation distributions, there are also quite a few exceptions.
We also briefly discuss the consequences of our findings for the measurement of
scientific impact using citation-based bibliometric indicators.
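The universality claim under test rests on a simple rescaling: dividing each publication's citation count by the average citation count of its field. If the claim held exactly, the rescaled distributions of all fields would coincide. The sketch below illustrates the rescaling with made-up field data; it is an illustration of the idea, not the validation study's code.

```python
# Rescaling behind the universality claim: divide each citation count by
# the field average (illustrative data, not from the study).

def rescale(citation_counts):
    """Divide each count by the field mean, giving relative citation scores."""
    mean = sum(citation_counts) / len(citation_counts)
    return [c / mean for c in citation_counts]

biology = [40, 10, 30]  # hypothetical field with high citation density
maths = [4, 1, 3]       # hypothetical field with low citation density

# After rescaling, both fields yield (numerically) the same relative
# scores, roughly [1.5, 0.375, 1.125] -- the pattern the universality
# claim predicts, and the pattern the study finds does not hold everywhere.
rescaled_biology = rescale(biology)
rescaled_maths = rescale(maths)
```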
The relation between Eigenfactor, audience factor, and influence weight
We present a theoretical and empirical analysis of a number of bibliometric
indicators of journal performance. We focus on three indicators in particular,
namely the Eigenfactor indicator, the audience factor, and the influence weight
indicator. Our main finding is that the last two indicators can be regarded as
special cases of the first indicator. We also find that the three
indicators can be nicely characterized in terms of two properties. We refer to
these properties as the property of insensitivity to field differences and the
property of insensitivity to insignificant journals. The empirical results that
we present illustrate our theoretical findings. We also show empirically that
the differences between various indicators of journal performance are quite
substantial.
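Indicators in the Eigenfactor / influence-weight family are eigenvector-based: a journal's weight depends on the weights of the journals citing it, with each citation normalized by the citing journal's total references. The sketch below is a simplified PageRank-style illustration of that recursive idea under stated assumptions; it is not the exact published definition of the Eigenfactor, audience factor, or influence weight.

```python
# Simplified eigenvector-style journal indicator (an illustrative sketch of
# the recursive idea, not the exact Eigenfactor or influence-weight formula).

def journal_weights(cites, n_iter=200):
    """cites[i][j] = citations from journal i to journal j.

    Each citing journal's row is normalized by its total outgoing citations,
    and the weights are the stationary distribution of the resulting
    row-stochastic matrix, found by power iteration.
    """
    n = len(cites)
    norm = [[cites[i][j] / sum(cites[i]) for j in range(n)] for i in range(n)]
    w = [1.0 / n] * n
    for _ in range(n_iter):
        w = [sum(w[i] * norm[i][j] for i in range(n)) for j in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

cites = [
    [0, 5, 5],  # hypothetical citations given by journal A to A, B, C
    [3, 0, 1],  # journal B
    [2, 2, 0],  # journal C
]
weights = journal_weights(cites)  # sums to 1; higher = more influential
```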
- …