6 research outputs found

    Does Mendeley provide evidence of the educational value of journal articles?

    This is an accepted manuscript of an article published by Wiley-Blackwell in Learned Publishing on 07/12/2016, available online: https://doi.org/10.1002/leap.1076 The accepted version of the publication may differ from the final published version.

    Research articles seem to have direct value for students in some subject areas, even though scholars may be their target audience. If this can be proven to be true, then subject areas with this type of educational impact could justify claims for enhanced funding. To seek evidence of disciplinary differences in the direct educational uptake of journal articles, while ignoring books, conference papers, and other scholarly outputs, this paper assesses the total number and proportion of student readers of academic articles in Mendeley across 12 different subjects. The results suggest that whilst few students read mathematics research articles, in other areas the number of student readers is broadly proportional to the number of research readers. Although the average number of undergraduate readers per article varies by up to 50 times between subjects, this could be explained by differing levels of uptake of Mendeley rather than by the differing educational value of disciplinary research. Overall, then, the results do not support the claim that journal articles in some areas have substantially more educational value, relative to their research value, than is average for academia.

    The pros and cons of the use of altmetrics in research assessment

    © 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: http://doi.org/10.29024/sar.10

    Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier’s Plum Analytics, or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors, and readers insights into the level of interest in recently published articles. This article summarises evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking to identify evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly) scholarly impact indicators to replace citations when gaming is not possible and early impact evidence is needed. Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that they are not misused, however.

    Do Mendeley reader counts indicate the value of arts and humanities research?

    This is an accepted manuscript of an article published by Sage in Journal of Librarianship and Information Science on 19/09/2017, available online: https://doi.org/10.1177/0961000617732381 The accepted version of the publication may differ from the final published version.

    Mendeley reader counts are a good source of early impact evidence for life and natural sciences articles because they are abundant, appear before citations, and correlate moderately or strongly with citations in the long term. Early studies found less promising results for the humanities, and this article assesses whether the situation has now changed. Using Mendeley reader counts for articles in twelve arts and humanities Scopus subcategories, the results show that Mendeley reader counts reflect Scopus citation counts in most arts and humanities fields as strongly as in other areas of scholarship. Thus, Mendeley can be used as an early citation impact indicator in the arts and humanities, although it is unclear whether reader or citation counts reflect the underlying value of arts and humanities research.

    Mendeley reader counts for US computer science conference papers and journal articles

    © 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher’s website: https://direct.mit.edu/qss/article/1/1/347/15566/Mendeley-reader-counts-for-US-computer-science

    Although bibliometrics are normally applied to journal articles when used to support research evaluations, conference papers are at least as important in fast-moving computing-related fields. It is therefore important to assess the relative advantages of citations and altmetrics for computing conference papers in order to make an informed decision about which, if any, to use. This paper compares Scopus citations with Mendeley reader counts for conference papers and journal articles that were published between 1996 and 2018 in 11 computing fields and had at least one US author. The data showed high correlations between Scopus citation counts and Mendeley reader counts in all fields and most years, but with few Mendeley readers for older conference papers and few Scopus citations for new conference papers and journal articles. The results therefore suggest that Mendeley reader counts have a substantial advantage over citation counts for recently published conference papers due to their greater speed, but are unsuitable for older conference papers.