
    Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data

    In this study, we address the question of whether, and to what extent, altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between these dimensions and the quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, whereas tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more strongly related to quality than tweets. This result, on the one hand, calls into question the use of Twitter counts for research evaluation purposes and, on the other hand, indicates the potential usefulness of Mendeley reader counts.
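    The two-step design described in this abstract (dimension reduction over the metric counts, then a regression of peer-assessed quality on the extracted dimensions) can be illustrated with a minimal Python sketch. The column names, the simulated data, and the choice of PCA with an ordinary least squares regression (rather than the specific models the authors fitted) are assumptions for illustration only, not the study's actual code.

        # Minimal sketch of the two-step analysis, with hypothetical column
        # names and simulated counts; not the authors' implementation.
        import numpy as np
        import pandas as pd
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        import statsmodels.api as sm

        # Hypothetical input: one row per paper.
        df = pd.DataFrame({
            "citations":        np.random.poisson(20, 500),
            "mendeley_readers": np.random.poisson(40, 500),
            "tweets":           np.random.poisson(5, 500),
            "f1000_score":      np.random.choice([1, 2, 3], 500),  # peer ratings
        })

        # Step 1: PCA on the standardized metrics to expose underlying dimensions.
        metrics = ["citations", "mendeley_readers", "tweets"]
        X = StandardScaler().fit_transform(df[metrics])
        pca = PCA(n_components=2)
        dims = pca.fit_transform(X)
        print(pd.DataFrame(pca.components_, columns=metrics))  # loadings per dimension

        # Step 2: regress the quality score on the extracted dimensions.
        model = sm.OLS(df["f1000_score"], sm.add_constant(dims)).fit()
        print(model.summary())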

    Applied Evaluative Informetrics: Part 1

    This manuscript is a preprint version of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017. The book presents an introduction to the field of applied evaluative informetrics and is written for interested scholars and students from all domains of science and scholarship. It sketches the field's history, recent achievements, and its potential and limits. It explains the notion of multi-dimensional research performance and discusses the pros and cons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In addition, it presents quantitative research assessment as an evaluation science, focusing on the role of extra-informetric factors in the development of indicators and on the policy context of their application. It also discusses the way forward, both for users and for developers of informetric tools.
    Comment: The posted version is a preprint (author copy) of Part 1 (General Introduction and Synopsis) of the book Applied Evaluative Informetrics, to be published by Springer in the summer of 2017.

    Enhancing the Impact of Cross-Sector Partnerships. Four Impact Loops for Channeling Partnership Studies

    This paper addresses the topic of this special symposium issue: how to enhance the impact of cross-sector partnerships. The paper takes stock of two related discussions: the discourse in cross-sector partnership research on how to assess impact, and the discourse in impact assessment research on how to deal with more complex organizations and projects. We argue that there is a growing need for, and recognition of, cross-fertilization between the two areas. Cross-sector partnerships are reaching a paradigmatic status in society, but both research and practice need more thorough evidence of their impacts and of the conditions under which these impacts can be enhanced. This paper develops a framework that should enable a constructive interchange between the two research areas, while also framing existing research into more precise categories that can lead to knowledge accumulation. We address the preconditions for such a framework and discuss how its constituent parts interact. We distinguish four different pathways, or impact loops, that refer to four distinct orders of impact. The paper concludes by applying these insights to the four papers included in this special issue.