10 research outputs found

    Tweets vs. Mendeley readers: How do these two social media metrics differ?

    A set of 1.4 million biomedical papers was analyzed with regard to how often articles are mentioned on Twitter or saved by users on Mendeley. While Twitter is a microblogging platform used by a general audience to distribute information, Mendeley is a reference manager targeted at an academic user group for organizing scholarly literature. Both platforms are used as sources for so-called altmetrics to measure a new kind of research impact. Based on this large set of PubMed papers, the analysis shows to what extent the two metrics differ and how they compare to traditional citation impact metrics.

    On the quest for currencies of science: Field "exchange rates" for citations and Mendeley readership

    Purpose: The introduction of “altmetrics” as new tools to analyze scientific impact within the reward system of science has challenged the hegemony of citations as the predominant source for measuring scientific impact. Mendeley readership has been identified as one of the most important altmetric sources, with several features that are similar to citations. The purpose of this paper is to perform an in-depth analysis of the differences and similarities between the distributions of Mendeley readership and citations across fields.

    Design/methodology/approach: The authors analyze two issues, using in each case a common analytical framework for both metrics: the shape of the distributions of readership and citations, and the field normalization problem generated by differences in citation and readership practices across fields. For the first issue the authors use the characteristic scores and scales method, and for the second the measurement framework introduced in Crespo et al. (2013).

    Findings: There are three main results. First, the citation and Mendeley readership distributions exhibit a strikingly similar degree of skewness in all fields. Second, the results on “exchange rates (ERs)” for Mendeley readership empirically support the possibility of comparing readership counts across fields, as well as the field normalization of readership distributions using ERs as normalization factors. Third, field normalization using field mean readership as the normalization factor leads to comparably good results.

    Originality/value: These findings open up challenging new questions, particularly regarding the possibility of obtaining conflicting results from field-normalized citation and Mendeley readership indicators; this suggests the need to better determine the role of the two metrics in capturing scientific recognition.
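    To make the field normalization idea above concrete, the following Python sketch divides each paper's Mendeley reader count by its field's mean readership, the "field mean readership as normalization factor" approach the abstract mentions. The field names and reader counts are invented for illustration; they are not data from the paper.

        from collections import defaultdict

        # (field, Mendeley reader count) for a few hypothetical papers.
        papers = [
            ("biology", 40), ("biology", 10), ("biology", 4),
            ("mathematics", 6), ("mathematics", 2), ("mathematics", 1),
        ]

        # Mean readership per field serves as the normalization factor.
        totals = defaultdict(int)
        counts = defaultdict(int)
        for field, readers in papers:
            totals[field] += readers
            counts[field] += 1
        field_mean = {f: totals[f] / counts[f] for f in totals}

        # A normalized score of 1.0 puts a paper at its field's average,
        # making scores comparable across fields whose readership practices
        # differ in scale.
        for field, readers in papers:
            print(field, readers, round(readers / field_mean[field], 2))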

    Identifying the Invisible Impact of Scholarly Publications: A Multi-Disciplinary Analysis Using Altmetrics

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.

    The field of ‘altmetrics’ is concerned with alternative metrics for the impact of research publications using social web data. Empirical studies are needed, however, to assess the validity of altmetrics from different perspectives. This thesis partly fills this gap by exploring the suitability and reliability of two altmetrics resources: Mendeley, a social reference manager website, and Faculty of 1000 (F1000), a post-publication peer review platform. It explores the correlations between the new metrics and citations at the level of articles for several disciplines and investigates the contexts in which the new metrics can be useful for research evaluation across different fields. Low and medium correlations were found between Mendeley readership counts and citations for Social Sciences, Humanities, Medicine, Physics, Chemistry and Engineering articles from the Web of Science (WoS), suggesting that Mendeley data may reflect different aspects of research impact. A comparison between information flows based on Mendeley bookmarking data and cross-disciplinary citation analysis for social sciences and humanities disciplines revealed substantial similarities and some differences. This suggests that Mendeley readership data could help identify knowledge transfer between scientific disciplines, especially for people who read but do not author articles, as well as provide evidence of impact at an earlier stage than is possible with citation counts. The majority of Mendeley readers for Clinical Medicine, Engineering and Technology, Social Science, Physics and Chemistry papers were PhD students and postdocs. The highest correlations between citations and Mendeley readership counts were for the types of Mendeley users that often authored academic papers, suggesting that academics bookmark papers in Mendeley for reasons related to scientific publishing. To identify the extent to which Mendeley bookmarking counts reflect readership, and to establish the motivations for bookmarking scientific papers in Mendeley, a large-scale survey was conducted; it found that 83% of Mendeley users read more than half of the papers in their personal libraries. The main reasons for bookmarking papers were citing in future publications, using in professional activities, citing in a thesis, and using in teaching and assignments. Thus, Mendeley bookmarking counts can potentially indicate the readership impact of research papers that have educational value for non-author users inside academia, or the impact of research papers on practice for readers outside academia.

    This thesis also examines the relationship between article types (i.e., “New Finding”, “Confirmation”, “Clinical Trial”, “Technical Advance”, “Changes to Clinical Practice”, “Review”, “Refutation”, “Novel Drug Target”), citation counts and F1000 Article Factors (FFa). In seven out of nine cases, there were no significant differences between article types in terms of rankings based on citation counts and FFa scores. Nevertheless, citation counts and FFa scores were significantly different for articles tagged “New Finding” or “Changes to Clinical Practice”. This means that F1000 could be used in research evaluation exercises when the importance of practical findings needs to be recognised. Furthermore, since the majority of the studied articles were reviewed in their year of publication, F1000 could also be useful for quick evaluations.

    Can web indicators be used to estimate the citation impact of conference papers in engineering?

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.

    Although citation counts are widely used to support research evaluation, they can only reflect academic impacts, whereas research can also be useful outside academia. There is therefore a need for alternative indicators and for empirical studies that evaluate them. Whilst many previous studies have investigated alternative indicators for journal articles and books, this thesis explores the importance and suitability of four web indicators for conference papers: readership counts from the online reference manager Mendeley and citation counts from Google Patents, Wikipedia and Google Books. To help evaluate these indicators for conference papers, correlations with Scopus citations were calculated for each alternative indicator and compared with the corresponding correlations between alternative indicators and citation counts for journal articles. Four subject areas that value conferences were chosen for the analysis: Computer Science Applications; Software Engineering; Building & Construction Engineering; and Industrial & Manufacturing Engineering.

    There were moderate correlations between Mendeley readership counts and Scopus citation counts for both journal articles and conference papers in Computer Science Applications and Software Engineering. For conference papers in Building & Construction Engineering and Industrial & Manufacturing Engineering, the correlations between Mendeley readers and citation counts were much lower than for journal articles. Thus, in fields where conferences are important, Mendeley readership counts are reasonable impact indicators for conference papers, although they are better impact indicators for journal articles.

    Google Patents citations had low positive correlations with citation counts for both conference papers and journal articles in Software Engineering and Computer Science Applications, and negative correlations for both in Industrial & Manufacturing Engineering. Conference papers in Building & Construction Engineering attracted no Google Patents citations at all. This suggests that there are disciplinary differences but little overall value for Google Patents citations as impact indicators in engineering fields valuing conferences.

    Wikipedia citations had correlations with Scopus citations that were statistically significantly positive only in Computer Science Applications, whereas the correlations were not statistically significantly different from zero in Building & Construction Engineering, Industrial & Manufacturing Engineering and Software Engineering. Conference papers were less likely to be cited in Wikipedia than journal articles in all fields, although the difference was minor in Software Engineering. Thus, Wikipedia citations seem to have little value in engineering fields valuing conferences.

    Google Books citations had statistically significant positive correlations with Scopus citations for conference papers in all fields except Building & Construction Engineering, where the correlations were not statistically significantly different from zero. Google Books citations seemed to be more valuable impact indicators in Computer Science Applications and Software Engineering, where the correlations were moderate, than in Industrial & Manufacturing Engineering, where the correlations were low. This means that Google Books citations are valuable indicators for conference papers in engineering fields valuing conferences.

    Although evidence from correlation tests alone is insufficient to judge the value of alternative indicators, the results suggest that Mendeley readers and Google Books citations may be useful for both journal articles and conference papers in engineering fields that value conferences, but not Wikipedia citations or Google Patents citations.
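    As an illustration of the kind of correlation test these findings rest on, the Python sketch below compares an alternative indicator against Scopus citation counts for a small invented sample. Spearman's rank correlation is an assumption here; the abstract does not name the coefficient used, although rank correlation is common for skewed count data such as citations.

        from scipy.stats import spearmanr

        # Invented counts for ten hypothetical papers; not data from the thesis.
        mendeley_readers = [12, 0, 5, 33, 8, 2, 19, 0, 7, 41]
        scopus_citations = [9, 1, 4, 25, 6, 0, 14, 2, 5, 30]

        # Rank correlation between the alternative indicator and citations.
        rho, p_value = spearmanr(mendeley_readers, scopus_citations)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

        # A rho near zero with a large p-value would mirror the "not
        # statistically significantly different from zero" results reported
        # for Wikipedia citations in several of the engineering fields.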

    Theories of Informetrics and Scholarly Communication

    Scientometrics has become an essential element in the practice and evaluation of science and research, including both the evaluation of individuals and national assessment exercises. Yet researchers and practitioners in this field have lacked clear theories to guide their work. As early as 1981, then doctoral student Blaise Cronin published "The need for a theory of citing", a call to arms for the fledgling scientometric community to produce foundational theories upon which the work of the field could be based. More than three decades later, the time has come to reach out to the field again and ask how it has responded to this call. This book compiles the foundational theories that guide informetrics and scholarly communication research. It is a much needed compilation by leading scholars in the field that gathers together the theories that guide our understanding of authorship, citing, and impact.
