    How to make altmetrics useful in societal impact assessments: shifting from citation to interaction approaches

    The suitability of altmetrics for use in assessments of societal impact has been questioned in some recent studies. Ismael Ràfols, Nicolas Robinson-García and Thed N. van Leeuwen propose that, rather than mimicking citation-based approaches to scientific impact evaluation, assessments of societal impact should be aimed at learning rather than auditing, and focused on understanding the engagement approaches that lead to impact. When using altmetric data for societal impact assessment, greater value might be derived from adopting “interaction approaches” to analyse engagement networks among researchers and stakeholders. Experimental analyses using data from Twitter are presented here to illustrate such an approach.
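
    The “interaction approach” can be illustrated with a small sketch. Assuming tweet records with author and mentioned accounts are available (the records, account names and the networkx dependency below are our own illustration, not the authors' actual setup), an engagement network can be built in which reciprocated edges hint at two-way interaction rather than one-off broadcasting:

    import networkx as nx

    # Hypothetical tweet records: (author account, mentioned account, author type).
    tweets = [
        ("@researcher_a", "@ngo_x", "researcher"),
        ("@ngo_x", "@researcher_a", "stakeholder"),
        ("@journalist_y", "@researcher_b", "stakeholder"),
    ]

    g = nx.DiGraph()
    for author, mentioned, kind in tweets:
        g.add_node(author, kind=kind)
        # An edge records an interaction (a mention), not a citation.
        if g.has_edge(author, mentioned):
            g[author][mentioned]["weight"] += 1
        else:
            g.add_edge(author, mentioned, weight=1)

    # Reciprocated edges suggest two-way engagement rather than broadcasting.
    reciprocated = [(u, v) for u, v in g.edges if g.has_edge(v, u)]
    print(reciprocated)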

    Towards a new crown indicator: Some theoretical considerations

    The crown indicator is a well-known bibliometric indicator of research performance developed by our institute. The indicator aims to normalize citation counts for differences among fields. We critically examine the theoretical basis of the normalization mechanism applied in the crown indicator. We also make a comparison with an alternative normalization mechanism. The alternative mechanism turns out to have more satisfactory properties than the mechanism applied in the crown indicator. In particular, the alternative mechanism has a so-called consistency property. The mechanism applied in the crown indicator lacks this important property. As a consequence of our findings, we are currently moving towards a new crown indicator, which relies on the alternative normalization mechanism.
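
    The difference between the two normalization mechanisms, and the consistency property, can be made concrete with a small numerical sketch (the notation, numbers and function names here are ours, not taken from the paper). Each publication is represented as a pair (citations received, expected citations for its field). A ratio-of-sums mechanism, as used in the original crown indicator, can reverse the ranking of two groups when an identical publication is added to both; a mean-of-ratios mechanism cannot:

    def ratio_of_sums(pubs):
        # Original crown indicator style: total citations over total expected citations.
        return sum(c for c, e in pubs) / sum(e for c, e in pubs)

    def mean_of_ratios(pubs):
        # Alternative mechanism: average of per-publication normalized scores.
        return sum(c / e for c, e in pubs) / len(pubs)

    a = [(2, 10)]    # group A: 2 citations where 10 were expected
    b = [(30, 100)]  # group B: 30 citations where 100 were expected
    print(ratio_of_sums(a), ratio_of_sums(b))    # 0.2 vs 0.3: B ahead
    print(mean_of_ratios(a), mean_of_ratios(b))  # 0.2 vs 0.3: B ahead

    # Add the same highly cited publication to both groups.
    a2, b2 = a + [(20, 1)], b + [(20, 1)]
    print(ratio_of_sums(a2), ratio_of_sums(b2))    # 2.0 vs ~0.5: ranking flips
    print(mean_of_ratios(a2), mean_of_ratios(b2))  # 10.1 vs 10.15: B still ahead

    The flip under the ratio-of-sums mechanism is exactly the kind of inconsistency the consistency property rules out.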

    Some modifications to the SNIP journal impact indicator

    The SNIP (source normalized impact per paper) indicator is an indicator of the citation impact of scientific journals. The indicator, introduced by Henk Moed in 2010, is included in Elsevier's Scopus database. The SNIP indicator uses a source normalized approach to correct for differences in citation practices between scientific fields. The strength of this approach is that it does not require a field classification system in which the boundaries of fields are explicitly defined. In this paper, a number of modifications that will be made to the SNIP indicator are explained, and the advantages of the resulting revised SNIP indicator are pointed out. It is argued that the original SNIP indicator has some counterintuitive properties, and it is shown mathematically that the revised SNIP indicator does not have these properties. Empirically, the differences between the original SNIP indicator and the revised one turn out to be relatively small, although some systematic differences can be observed. Relations with other source normalized indicators proposed in the literature are discussed as well.
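
    The source normalized approach can be sketched in a few lines. The general idea behind SNIP-style indicators is that a citation from a publication with a long reference list counts for less, which compensates for differences in citation density between fields without needing a field classification system. The function below is our own illustration of this general principle, not the exact SNIP formula, which involves further normalizations (for instance, only 'active' references within the citation window are counted):

    def source_normalized_impact(cited_papers, active_refs):
        """Toy source-normalized impact per paper.

        cited_papers: list with, for each paper of the journal, the ids of
                      the publications citing it.
        active_refs:  dict mapping a citing publication id to its number of
                      active references.
        """
        weighted = 0.0
        for citers in cited_papers:
            for citer in citers:
                # Each citation is weighted by 1 / (reference list length),
                # so fields with dense citation practices are corrected for.
                weighted += 1.0 / active_refs[citer]
        return weighted / len(cited_papers)

    # Hypothetical data: a journal with two papers, three citing publications.
    citations = [["p1", "p2"], ["p3"]]
    refs = {"p1": 40, "p2": 10, "p3": 20}
    print(source_normalized_impact(citations, refs))  # 0.0875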

    Rivals for the crown: Reply to Opthof and Leydesdorff

    We reply to the criticism of Opthof and Leydesdorff [arXiv:1002.2769] on the way in which our institute applies journal and field normalizations to citation counts. We point out why we believe most of the criticism is unjustified, but we also indicate where we think Opthof and Leydesdorff raise a valid point.

    The consequences of paying to publish

    Open Access publishing has been the most prolific aspect of the transition towards open science. In this transition, national governments, national and international funding agencies, and institutional leadership have increasingly initiated policies to promote and stimulate the development of open access as the norm in scholarly publishing. However, this has not always led to the best outcomes.

    The Leiden Ranking 2011/2012: Data collection, indicators, and interpretation

    The Leiden Ranking 2011/2012 is a ranking of universities based on bibliometric indicators of publication output, citation impact, and scientific collaboration. The ranking includes 500 major universities from 41 different countries. This paper provides an extensive discussion of the Leiden Ranking 2011/2012. The ranking is compared with other global university rankings, in particular the Academic Ranking of World Universities (commonly known as the Shanghai Ranking) and the Times Higher Education World University Rankings. Also, a detailed description is offered of the data collection methodology of the Leiden Ranking 2011/2012 and of the indicators used in the ranking. Various innovations in the Leiden Ranking 2011/2012 are presented. These innovations include (1) an indicator based on counting a university's highly cited publications, (2) indicators based on fractional rather than full counting of collaborative publications, (3) the possibility of excluding non-English language publications, and (4) the use of stability intervals. Finally, some comments are made on the interpretation of the ranking, and a number of limitations of the ranking are pointed out.
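
    The fractional counting innovation mentioned under (2) can be stated compactly: a publication co-authored by k universities contributes 1/k to the count of each of them, instead of a full 1 as under full counting. A minimal sketch, with hypothetical affiliations of our own choosing:

    from collections import defaultdict

    def count_publications(pubs, fractional=True):
        """Aggregate publication counts per university.

        pubs: list of publications, each given as the set of universities
              appearing in its author affiliations.
        """
        counts = defaultdict(float)
        for universities in pubs:
            # Fractional counting divides credit among the collaborating
            # universities; full counting gives each of them full credit.
            credit = 1.0 / len(universities) if fractional else 1.0
            for u in universities:
                counts[u] += credit
        return dict(counts)

    pubs = [
        {"Leiden", "Delft"},             # two collaborating universities
        {"Leiden"},                      # single-university publication
        {"Leiden", "Delft", "Utrecht"},  # three collaborating universities
    ]
    print(count_publications(pubs))                    # Leiden ~1.83, Delft ~0.83, Utrecht ~0.33
    print(count_publications(pubs, fractional=False))  # Leiden 3.0, Delft 2.0, Utrecht 1.0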