9 research outputs found

    Do Mendeley reader counts indicate the value of arts and humanities research?

    This is an accepted manuscript of an article published by Sage in the Journal of Librarianship and Information Science on 19/09/2017, available online: https://doi.org/10.1177/0961000617732381 The accepted version of the publication may differ from the final published version.
    Mendeley reader counts are a good source of early impact evidence for life and natural sciences articles because they are abundant, appear before citations, and correlate moderately or strongly with citations in the long term. Early studies found less promising results for the humanities, and this article assesses whether the situation has now changed. Using Mendeley reader counts for articles in twelve arts and humanities Scopus subcategories, the results show that Mendeley reader counts reflect Scopus citation counts in most arts and humanities fields as strongly as in other areas of scholarship. Thus, Mendeley can be used as an early citation impact indicator in the arts and humanities, although it is unclear whether either reader or citation counts reflect the underlying value of arts and humanities research.

    Study on Scientific outputs of Scholars in the Field of Digital Libraries Using Altmetrics Indicators

    The current study aims to measure the relationship between Altmetric scores, obtained from the views and dissemination of digital library resources recorded in the Dimensions database, and the number of citations received in the Scopus database. A further part of the research examines the power of Altmetric scores to predict the number of Scopus citations. The research is applied in purpose and descriptive-survey in type, carried out with scientometric methods and an altmetric approach. The statistical population of the study comprises all articles in the field of digital libraries (24183 records) indexed in the Scopus citation database during 1960-2020. The Dimensions database was used to evaluate the Altmetric scores these articles obtained on social networks. Due to limited access to the required data in the Scopus database, the 2000 most highly cited articles in the field of digital libraries in Scopus were studied through the Dimensions database. The data collection tools are the Scopus citation database and the Dimensions database, with the required data collected through Scopus. The indicators drawn from the Dimensions database serve as the independent variables of the research; the dependent variable is the number of citations to articles in the Scopus database. Correlation tests and multiple regression between the studied indices are used to examine the relationships between the variables. The software used is Excel and SPSS version 23. The results show that mentions from Patent, Facebook, Wikipedia, and Twitter sources have the highest correlations with the number of citations in the Dimensions database, while Blog, Google User, and Q&A mentions show no significant relationship with the number of citations received in Dimensions. Patent, Wikipedia, and Twitter mentions have the highest correlations with the number of Scopus citations; here, Blog, Google User, Pulse Source, and Q&A mentions do not correlate significantly with the number of citations received. Among the sources studied, Mendeley has the highest correlation with citation counts. Other results indicate that the publication and viewing of documents on social networks cannot predict the number of citations in the Dimensions and Scopus databases. https://dorl.net/dor/20.1001.1.20088302.2022.20.4.10.
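The rank-correlation analysis described in this abstract can be sketched in a few lines of Python. The data below is hypothetical and purely illustrative (it is not drawn from the study), and the helper functions are a minimal stdlib-only implementation of Spearman's rho, the rank correlation commonly used in such altmetric studies:

```python
# Illustrative sketch: Spearman rank correlation between hypothetical
# per-article citation counts and social-media mention counts.
# Not the study's actual code or data.

def rankdata(values):
    """Assign 1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical counts for ten articles, for illustration only
scopus_citations = [12, 3, 45, 7, 0, 22, 9, 31, 5, 16]
twitter_mentions = [8, 1, 30, 2, 0, 15, 4, 25, 3, 10]

print(f"Spearman rho = {spearman(scopus_citations, twitter_mentions):.3f}")
```

In practice a study like this would run such a test per source (Twitter, Wikipedia, Mendeley, ...) against the citation counts, typically via SPSS or scipy.stats.spearmanr rather than a hand-rolled implementation.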

    Examining the Association between Citations and Altmetric Indicators of LIS Articles indexed in Dimensions Database

    Social media attention to scholarly articles has become a novel measure for assessing the broader impact of research, complementing traditional citation metrics. This article examined the correlation between citations and major altmetric indicators for 1951 LIS articles published in 2020. Altmetric Explorer was used to collect the data, and the analysis was done using Excel and SPSS. The results showed that LIS articles were well engaged on social media platforms, gaining more societal attention than scientific attention in terms of citations. Mendeley (69.40%) and Twitter (28.72%) accounted for the largest shares of attention to LIS articles, and Pinterest (0.001%) and F1000 (0.001%) for the smallest. Users from the USA were the major Twitterati for the LIS articles, with average Tweeters of -0.58 across the globe. Users from the UK were the top mentioners of the articles on Facebook (2.7%), while the USA led on news and mainstream media (55.6%). Except for peer review (r = -0.05), all other altmetric indicators were positively associated with Dimensions citations. The study's findings allow authors to analyse the societal impact of their scholarship through altmetric indicators and to use them as a supplement to citation metrics for measuring the immediate impact of LIS scientific outputs.

    An Assessment of Impact Metrics’ Potential as Research Indicators Based on Their Perception, Usage, and Dependencies from External Science Communication

    The demand for practicable methods for quantitatively assessing the relevance of scientific products has risen considerably over the past decades. As a consequence, research and commercial providers of scholarly data have developed a wide variety of impact indicators, ranging from citation-based metrics to so-called altmetrics. This highly heterogeneous family of indicators is based on the principle of measuring interactions with scientific publications that are observable online, and covers, for instance, mentions of publications in social and journalistic media, in literature management software, or in policy documents. The theoretical validity of the various metrics as impact indicators is constantly debated, as questions regarding what the different metrics actually measure or express remain unanswered in many respects. This thesis makes two central contributions towards answering these questions. Its first part systematically assesses the status quo of how various metrics are perceived and used by researchers. This assessment serves to determine the significance of metrics in daily academic routines, as well as to identify relevant perceived problems concerning their usage. In later sections of the thesis, the challenges identified this way are contrasted with concrete measures that can be taken during the development of future research metrics and their infrastructure to effectively address common criticisms of current metrics and their use. Proceeding from the first part's user studies, the second part examines the relationship between research metrics and external science communication. In this way it addresses a wide research gap with considerable potential implications for metrics' validity as quality indicators: the question of the degree to which these metrics are merely the result of the promotion that the respective research publications receive.

    Can web indicators be used to estimate the citation impact of conference papers in engineering?

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
    Although citation counts are widely used to support research evaluation, they can only reflect academic impacts, whereas research can also be useful outside academia. There is therefore a need for alternative indicators and for empirical studies to evaluate them. Whilst many previous studies have investigated alternative indicators for journal articles and books, this thesis explores the importance and suitability of four web indicators for conference papers: readership counts from the online reference manager Mendeley and citation counts from Google Patents, Wikipedia and Google Books. To evaluate these indicators for conference papers, correlations with Scopus citations were calculated for each alternative indicator and compared with the corresponding correlations between alternative indicators and citation counts for journal articles. Four subject areas that value conferences were chosen for the analysis: Computer Science Applications; Computer Software Engineering; Building & Construction Engineering; and Industrial & Manufacturing Engineering. There were moderate correlations between Mendeley readership counts and Scopus citation counts for both journal articles and conference papers in Computer Science Applications and Computer Software. For conference papers in Building & Construction Engineering and Industrial & Manufacturing Engineering, the correlations between Mendeley readers and citation counts were much lower than for journal articles. Thus, in fields where conferences are important, Mendeley readership counts are reasonable impact indicators for conference papers, although they are better impact indicators for journal articles. Google Patent citations had low positive correlations with citation counts for both conference papers and journal articles in Software Engineering and Computer Science Applications, and negative correlations for both in Industrial & Manufacturing Engineering; conference papers in Building & Construction Engineering attracted no Google Patent citations. This suggests that there are disciplinary differences but little overall value for Google Patent citations as impact indicators in engineering fields valuing conferences. Wikipedia citations had statistically significantly positive correlations with Scopus citations only in Computer Science Applications, whereas the correlations were not statistically significantly different from zero in Building & Construction Engineering, Industrial & Manufacturing Engineering and Software Engineering. Conference papers were less likely to be cited in Wikipedia than journal articles in all fields, although the difference was minor in Software Engineering. Thus, Wikipedia citations seem to have little value in engineering fields valuing conferences. Google Books citations had significant positive correlations with Scopus-indexed citations for conference papers in all fields except Building & Construction Engineering, where the correlations were not statistically significantly different from zero. Google Books citations seemed to be more valuable impact indicators in Computer Science Applications and Software Engineering, where the correlations were moderate, than in Industrial & Manufacturing Engineering, where the correlations were low. This means that Google Books citations are valuable indicators for conference papers in engineering fields valuing conferences. Although evidence from correlation tests alone is insufficient to judge the value of alternative indicators, the results suggest that Mendeley readers and Google Books citations may be useful for both journal articles and conference papers in engineering fields that value conferences, but not Wikipedia citations or Google Patent citations.
    Tetfund, Nigeria

    The Intellectual Organisation of History

    A tradition of scholarship discusses the characteristics of different areas of knowledge, in particular since modern academia compartmentalised them into disciplines. The academic approach is often put to question: are there two or more cultures? Is ever-increasing specialisation the only way to cope with information abundance, or are holistic approaches helpful too? What is happening with the digital turn? While these questions are well studied for the sciences, our understanding of how the humanities might differ in their own respect is far less advanced. In particular, modern academia might foster specific patterns of specialisation in the humanities. Eventually, the recent rise in the application of digital methods to research, known as the digital humanities, might be introducing structural adaptations through the development of shared research technologies and the advent of organisational practices such as the laboratory. It therefore seems timely and urgent to map the intellectual organisation of the humanities. This investigation depends on a few traits, such as the level of codification, the degree of agreement among scholars, and the level of coordination of their efforts. These characteristics can be studied by measuring their influence on the outcomes of scientific communication. In particular, this thesis focuses on history as a discipline, using bibliometric methods. In order to explore history in its complexity, an approach to creating collaborative citation indexes in the humanities is proposed, resulting in a new dataset comprising monographs, journal articles and citations to primary sources. Historians' publications were found to organise thematically and chronologically, sharing a limited set of core sources across small communities. Core sources act in two ways with respect to the intellectual organisation: locally, by adding connectivity within communities, or globally, as weak ties across communities.
    Over recent decades, fragmentation has been on the rise in the intellectual networks of historians, and a comparison across a variety of specialisms from the human, natural and mathematical sciences revealed the fragility of such networks across the axes of citation and textual similarity. Humanists organise into more, smaller and more scattered topical communities than scientists. A characterisation of history is eventually proposed: historians produce new historiographical knowledge with a focus on either evidence or interpretation. The former aims at providing the community with an agreed-upon factual resource; interpretive work is instead mainly focused on creating novel perspectives. A second axis refers to two modes of exploring new ideas: in-breadth, where novelty comes from adding new, previously unknown pieces to the mosaic, or in-depth, where novelty comes from improving on previous results. While all combinations are possible, historians tend to focus on in-breadth interpretations, with the immediate consequence that growth accentuates intellectual fragmentation in the absence of further consolidating factors such as theory or technologies. Research on evidence might have a different impact by potentially scaling up in the digital space, and in so doing influence the modes of interpretation in turn. This process is not dissimilar to the gradual rise in importance of research technologies and collaborative competition in the mathematical and natural sciences. This is perhaps the promise of the digital humanities.

    Quantitative methods in research evaluation: citation indicators, altmetrics, and artificial intelligence

    This book critically analyses the value of citation data, altmetrics, and artificial intelligence for supporting the research evaluation of articles, scholars, departments, universities, countries, and funders. It introduces and discusses indicators that can support research evaluation, analysing their strengths and weaknesses as well as the generic strengths and weaknesses of using indicators for research assessment. The book includes evidence of the comparative value of citations and altmetrics in all broad academic fields, primarily through comparisons against article-level human expert judgements from the UK Research Excellence Framework 2021. It also discusses the potential applications of traditional artificial intelligence and large language models for research evaluation, with large-scale evidence for the former. The book concludes that citation data can be informative and helpful in some research fields for some research evaluation purposes, but that indicators are never accurate enough to be described as research quality measures. It also argues that AI may be helpful in limited circumstances for some types of research evaluation.