
    Mendeley reader counts for US computer science conference papers and journal articles

    © 2020 The Authors. Published by MIT Press. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher’s website: https://direct.mit.edu/qss/article/1/1/347/15566/Mendeley-reader-counts-for-US-computer-science. Although bibliometrics are normally applied to journal articles when used to support research evaluations, conference papers are at least as important in fast-moving computing-related fields. It is therefore important to assess the relative advantages of citations and altmetrics for computing conference papers to make an informed decision about which, if any, to use. This paper compares Scopus citations with Mendeley reader counts for conference papers and journal articles that were published between 1996 and 2018 in 11 computing fields and had at least one US author. The data showed high correlations between Scopus citation counts and Mendeley reader counts in all fields and most years, but with few Mendeley readers for older conference papers and few Scopus citations for new conference papers and journal articles. The results therefore suggest that Mendeley reader counts have a substantial advantage over citation counts for recently published conference papers due to their greater speed, but are unsuitable for older conference papers.
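
    The core analysis described above is a correlation between two count variables, computed separately per field and year. A minimal sketch of that kind of check is below; Spearman correlation is the usual choice for skewed citation data. The input file and column names (papers.csv, field, year, citations, readers) are hypothetical placeholders, not the authors' actual dataset.

        # Sketch: Spearman correlation between Scopus citations and Mendeley
        # readers, computed separately for each field and publication year.
        import pandas as pd
        from scipy.stats import spearmanr

        df = pd.read_csv("papers.csv")  # hypothetical columns: field, year, citations, readers

        for (field, year), group in df.groupby(["field", "year"]):
            if len(group) < 30:  # skip small samples where the estimate is unstable
                continue
            rho, p = spearmanr(group["citations"], group["readers"])
            print(f"{field} {year}: rho={rho:.2f} (p={p:.3f}, n={len(group)})")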

    Networks of reader and country status: An analysis of Mendeley reader statistics

    The number of papers published in journals indexed by the Web of Science core collection is steadily increasing. In recent years, nearly two million new papers were published each year; somewhat more than one million of these are primary research papers (articles and reviews, the document types in which primary research is usually reported or reviewed). However, who reads these papers? More precisely, which groups of researchers from which (self-assigned) scientific disciplines and countries are reading them? Is it possible to visualize readership patterns for certain countries, scientific disciplines, or academic status groups? One popular method to answer these questions is network analysis. In this study, we analyze Mendeley readership data for a set of 1,133,224 articles and 64,960 reviews published in 2012 to generate three different kinds of networks: (1) The network based on the disciplinary affiliations of Mendeley readers contains four groups: (i) biology, (ii) social science and humanities (including relevant computer science), (iii) bio-medical sciences, and (iv) natural science and engineering. In all four groups, the category with the addition "miscellaneous" prevails. (2) The network of co-readers in terms of professional status shows that a common interest in papers is mainly shared among PhD students, Master's students, and postdocs. (3) The country network focusses on global readership patterns: a group of 53 nations is identified as core to the scientific enterprise, including Russia and China as well as two thirds of the OECD (Organisation for Economic Co-operation and Development) countries.
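
    As a rough illustration of network (2) above, the sketch below builds a co-readership graph with networkx: nodes are academic status groups and an edge is weighted by the number of papers whose readers include both groups. The tiny inline dataset is fabricated for illustration; real Mendeley readership records would replace it.

        # Sketch: build a co-readership network of academic status groups.
        from itertools import combinations
        import networkx as nx

        # For each paper, the set of status groups among its Mendeley readers
        # (fabricated example data).
        readers_per_paper = [
            {"PhD student", "Postdoc"},
            {"PhD student", "Master's student"},
            {"PhD student", "Postdoc", "Professor"},
        ]

        G = nx.Graph()
        for groups in readers_per_paper:
            for a, b in combinations(sorted(groups), 2):
                weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
                G.add_edge(a, b, weight=weight + 1)  # one more co-read paper

        for a, b, data in G.edges(data=True):
            print(f"{a} -- {b}: co-read {data['weight']} papers")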

    The pros and cons of the use of altmetrics in research assessment

    © 2020 The Authors. Published by Levy Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed on the publisher’s website: http://doi.org/10.29024/sar.10. Many indicators derived from the web have been proposed to supplement citation-based indicators in support of research assessments. These indicators, often called altmetrics, are available commercially from Altmetric.com and Elsevier’s Plum Analytics or can be collected directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they may reflect important non-academic impacts and may appear before citations when an article is published, thus providing earlier impact evidence. Their disadvantages often include susceptibility to gaming, data sparsity, and difficulties translating the evidence into specific types of impact. Despite these limitations, altmetrics have been widely adopted by publishers, apparently to give authors, editors and readers insights into the level of interest in recently published articles. This article summarises the evidence for and against extending the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can play a role in some other contexts. They can be informative when evaluating research units that rarely produce journal articles, when seeking to identify evidence of novel types of impact during institutional or other self-evaluations, and when selected by individuals or groups to support narrative-based non-academic claims. In addition, Mendeley reader counts are uniquely valuable as early (mainly) scholarly impact indicators to replace citations when gaming is not possible and early impact evidence is needed. Organisations using alternative indicators need to recruit or develop in-house expertise to ensure that they are not misused, however.

    Do Mendeley reader counts reflect the scholarly impact of conference papers? An investigation of computer science and engineering

    This is an accepted manuscript of an article published by Springer in Scientometrics on 13/04/2017, available online: https://doi.org/10.1007/s11192-017-2367-1. The accepted version of the publication may differ from the final published version. Counts of Mendeley readers may give useful evidence about the impact of published research. Although previous studies have found significant positive correlations between counts of Mendeley readers and citation counts for journal articles, it is not known if this is equally true for conference papers. To fill this gap, Mendeley readership data and Scopus citation counts were extracted for both journal articles and conference papers published in 2011 in four fields for which conferences are important: Computer Science Applications; Computer Software; Building & Construction Engineering; and Industrial & Manufacturing Engineering. Mendeley readership counts correlated moderately with citation counts for both journal articles and conference papers in Computer Science Applications and Computer Software. The correlations were much lower between Mendeley readers and citation counts for conference papers than for journal articles in Building & Construction Engineering and Industrial & Manufacturing Engineering. Hence, there seem to be disciplinary differences in the usefulness of Mendeley readership counts as impact indicators for conference papers, even between fields for which conferences are important.
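
    The claim above is that the reader-citation correlation is much lower for conference papers than for journal articles in two of the fields. One textbook way to test whether such a gap is statistically real is Fisher's z comparison of two independent correlations, sketched below; the correlation values and sample sizes are invented for illustration, not taken from the paper.

        # Sketch: Fisher z test for the difference between two independent
        # correlations (e.g. journal articles vs conference papers).
        import math

        def fisher_z_diff(r1, n1, r2, n2):
            """z statistic for H0: the two correlations are equal."""
            z1, z2 = math.atanh(r1), math.atanh(r2)
            se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
            return (z1 - z2) / se

        # Invented example: articles r=0.5 (n=800) vs conference papers r=0.2 (n=400).
        z = fisher_z_diff(0.5, 800, 0.2, 400)
        print(f"z = {z:.2f}")  # |z| > 1.96 indicates a difference at the 5% level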

    Mendeley Readership Count: An Investigation of Sambalpur University Publications from 1971-2018

    Mendeley provides readership statistics for publications, and these statistics can be used to evaluate the research performance of individuals and institutions. The primary purpose of this paper is to investigate the Mendeley readership counts of Sambalpur University's publications from 1971 to 2018. Bibliographic data for 1971 to 2018 were exported from Scopus using the affiliation search tab, yielding a total of 1,553 records. The exported data were converted into a text file and processed with the Webometric Analyst software, which retrieved the Mendeley readership data from the Mendeley website. A total of 1,399 records existed in the Mendeley database, of which 173 had no readers; the remaining 1,226 publications were analyzed. The readership statistics of Sambalpur University show no impressive growth. The study also found that the yearly growth of Mendeley readership was not stable and fluctuated over time. There was a positive correlation of 0.3303 between Scopus citations and Mendeley readership of the published papers. The breakdown of Mendeley readership by country found that most readers are from India, followed by the United States.
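
    The workflow above boils down to matching Scopus records to Mendeley readership records and summarising the result. A minimal pandas sketch of that matching step (joining on DOI) is below; the file and column names are hypothetical stand-ins, not the actual Webometric Analyst output format.

        # Sketch: join a Scopus export to Mendeley readership data by DOI and
        # report coverage, zero-reader records, and the citation-reader correlation.
        import pandas as pd

        scopus = pd.read_csv("scopus_export.csv")       # hypothetical columns: doi, year, citations
        mendeley = pd.read_csv("mendeley_readers.csv")  # hypothetical columns: doi, readers

        merged = scopus.merge(mendeley, on="doi", how="left")
        print("records exported from Scopus:", len(scopus))
        print("records matched in Mendeley: ", int(merged["readers"].notna().sum()))
        print("matched records with no readers:", int((merged["readers"] == 0).sum()))

        analyzed = merged[merged["readers"] > 0]
        print("citation-reader correlation:",
              round(analyzed["citations"].corr(analyzed["readers"]), 4))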

    Microsoft Academic: A multidisciplinary comparison of citation counts with Scopus and Mendeley for 29 journals

    Microsoft Academic is a free citation index that permits large-scale data collection. This combination makes it useful for scientometric research. Previous studies have found that its citation counts tend to be slightly larger than those of Scopus but smaller than Google Scholar, with disciplinary variations. This study reports the largest and most systematic analysis so far, covering 172,752 articles in 29 large journals chosen from different specialisms. Comparing Scopus citation counts, Microsoft Academic citation counts and Mendeley reader counts for articles published 2007-2017, Microsoft Academic found slightly more (6%) citations than Scopus overall, and especially for the current year (51% more). It found fewer citations than Mendeley readers overall (59% as many), and only 7% as many for the current year. Differences between journals were probably due to field preprint-sharing cultures or journal policies rather than broad disciplinary differences.
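
    A rough sketch of the ratio comparison reported above: given per-article counts from each source, the overall and current-year ratios fall out of grouped sums. The input file and column names are hypothetical placeholders, not the study's data.

        # Sketch: compare total Microsoft Academic citations to Scopus citations
        # and Mendeley readers, overall and for the most recent year.
        import pandas as pd

        df = pd.read_csv("counts.csv")  # hypothetical columns: year, scopus, microsoft_academic, mendeley

        def report(frame, label):
            ma = frame["microsoft_academic"].sum()
            print(f"{label}: MA/Scopus = {ma / frame['scopus'].sum():.2f}, "
                  f"MA/Mendeley = {ma / frame['mendeley'].sum():.2f}")

        report(df, "all years")
        report(df[df["year"] == df["year"].max()], "current year")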

    Do citations and readership identify seminal publications?

    This work presents a new approach for analysing the ability of existing research metrics to identify research that has strongly influenced future developments. More specifically, we focus on the ability of citation counts and Mendeley reader counts to distinguish between publications regarded as seminal and publications regarded as literature reviews by field experts. The main motivation behind our research is to gain a better understanding of whether and how well the existing research metrics relate to research quality. For this experiment we have created a new dataset, which we call TrueImpactDataset, and which contains two types of publications: seminal papers and literature reviews. Using the dataset, we conduct a set of experiments to study how citation and reader counts perform in distinguishing these publication types, following the intuition that causing a change in a field signifies research quality. Our research shows that citation counts work better than a random baseline (by a margin of 10%) in distinguishing important seminal research papers from literature reviews, while Mendeley reader counts do not work better than the baseline.
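
    A minimal sketch of the experimental framing described above: treat citation counts (or reader counts) as a score for separating seminal papers from literature reviews and compare against chance. ROC AUC, where 0.5 is the random baseline, is one standard way to do this; the tiny dataset below is fabricated for illustration and is not the TrueImpactDataset.

        # Sketch: can a single count separate seminal papers (1) from
        # literature reviews (0) better than the 0.5 chance-level AUC?
        from sklearn.metrics import roc_auc_score

        labels = [1, 1, 1, 1, 0, 0, 0, 0]            # 1 = seminal, 0 = review
        citations = [120, 85, 300, 50, 40, 15, 60, 25]
        readers = [30, 12, 20, 25, 28, 8, 35, 18]

        print("citation AUC:", roc_auc_score(labels, citations))  # 0.94: beats chance
        print("reader AUC:  ", roc_auc_score(labels, readers))    # 0.50: chance level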