Tracking citations and altmetrics for research data: Challenges and opportunities
Methods for determining research quality have long been debated, but with little lasting agreement on standards, leading to the emergence of alternative metrics. Altmetrics are a useful supplement to traditional citation metrics, reflecting a variety of measurement points that give different perspectives on how a dataset is used and by whom. A positive development is the integration of a number of research datasets into the ISI Data Citation Index, making datasets searchable and linking them to published articles. Yet access to data resources, and the tracking of the resulting altmetrics, depends on specific qualities of the datasets and the systems in which they are archived. Though research on altmetrics use is growing, the lack of standardization across datasets and system architectures undermines its generalizability. Without some standards, stakeholders' adoption of altmetrics will be limited.
Do altmetrics correlate with citations? Extensive comparison of altmetric indicators with citations from a multidisciplinary perspective
An extensive analysis of the presence of different altmetric indicators
provided by Altmetric.com across scientific fields is presented, particularly
focusing on their relationship with citations. Our results confirm that the
presence and density of social media altmetric counts are still very low
among scientific publications: only 15%-24% of publications present some
altmetric activity, concentrated among the most recent publications,
although their presence is increasing over time.
Publications from the social sciences, humanities and the medical and life
sciences show the highest presence of altmetrics, indicating their potential
value and interest for these fields. The analysis of the relationships
between altmetrics and citations confirms previous claims of positive but
relatively weak correlations, supporting the idea that altmetrics do not
reflect the same concept of impact as citations. Moreover, altmetric counts
do not always filter highly cited publications better than journal citation
scores: altmetric scores (particularly mentions in blogs) can identify
highly cited publications with higher precision than journal citation
scores (JCS), but with lower recall. The value of altmetrics as a
complementary tool for citation analysis is highlighted, although more
research is needed to disentangle the potential meaning and value of
altmetric indicators for research evaluation.
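The precision/recall comparison described above can be made concrete with a
small sketch. The following Python fragment evaluates a simple "any blog
mention" filter for flagging highly cited publications; the synthetic data
and the top-10% cutoff for "highly cited" are illustrative assumptions, not
the study's actual setup.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical per-paper counts standing in for real bibliometric data.
    citations = rng.negative_binomial(2, 0.1, size=1000)  # skewed, citation-like
    blog_mentions = rng.poisson(np.log1p(citations))      # loosely tied to citations

    # "Highly cited" = top 10% by citations (an assumed cutoff).
    highly_cited = citations >= np.quantile(citations, 0.90)

    # Flag papers with at least one blog mention as predicted highly cited.
    flagged = blog_mentions > 0

    true_pos = np.sum(flagged & highly_cited)
    precision = true_pos / flagged.sum()    # share of flagged papers that are highly cited
    recall = true_pos / highly_cited.sum()  # share of highly cited papers that get flagged

    print(f"precision={precision:.2f} recall={recall:.2f}")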
Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data
In this study, we address the question of whether, and to what extent,
altmetrics are related to the scientific quality of papers (as
measured by peer assessments). Only a few studies have previously investigated
the relationship between altmetrics and assessments by peers. In the first
step, we analyse the underlying dimensions of measurement for traditional
metrics (citation counts) and altmetrics - by using principal component
analysis (PCA) and factor analysis (FA). In the second step, we test the
relationship between the dimensions and quality of papers (as measured by the
post-publication peer-review system of F1000Prime assessments) - using
regression analysis. The results of the PCA and FA show that altmetrics
operate along different dimensions: Mendeley counts are related to citation
counts, whereas tweets form a separate dimension. The results of the
regression analysis indicate that citation-based metrics and readership
counts are significantly more strongly related to quality than tweets. On
the one hand, this result calls into question the use of Twitter counts for
research evaluation purposes; on the other hand, it points to the potential
usefulness of Mendeley reader counts.
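To make the two-step design concrete, the following is a minimal sketch in
Python of the same kind of analysis: a PCA over per-paper metric counts,
followed by a regression of a quality score on the extracted components.
The data frame, column names, and quality score are hypothetical
placeholders, not the study's F1000Prime data.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical per-paper metric counts (log1p-transformed to tame skew).
    df = pd.DataFrame({
        "citations": rng.negative_binomial(2, 0.1, n),
        "mendeley":  rng.negative_binomial(3, 0.1, n),
        "tweets":    rng.poisson(1.0, n),
    })
    X = np.log1p(df.values)

    # Step 1: extract the underlying dimensions of the metrics.
    pca = PCA(n_components=2)
    components = pca.fit_transform(X)
    print("loadings:\n", pd.DataFrame(pca.components_, columns=df.columns))

    # Step 2: regress a (hypothetical) peer-assessed quality score on the
    # extracted components, standing in for F1000Prime assessments.
    quality = 0.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0.0, 1.0, n)
    reg = LinearRegression().fit(components, quality)
    print("component coefficients:", reg.coef_)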
The pros and cons of the use of altmetrics in research assessment
Many indicators derived from the web have been proposed to supplement citation-based
indicators in support of research assessments. These indicators, often called altmetrics, are
available commercially from Altmetric.com and Elsevier’s Plum Analytics or can be collected
directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they
may reflect important non-academic impacts and may appear before citations when an
article is published, thus providing earlier impact evidence. Their disadvantages often
include susceptibility to gaming, data sparsity, and difficulties translating the evidence into
specific types of impact. Despite these limitations, altmetrics have been widely adopted by
publishers, apparently to give authors, editors and readers insights into the level of interest
in recently published articles. This article summarises evidence for and against extending
the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can
play a role in some other contexts. They can be informative when evaluating research units
that rarely produce journal articles, when seeking to identify evidence of novel types of
impact during institutional or other self-evaluations, and when selected by individuals or
groups to support narrative-based non-academic claims. In addition, Mendeley reader
counts are uniquely valuable as early (mainly) scholarly impact indicators to replace
citations when gaming is not possible and early impact evidence is needed. Organisations
using alternative indicators need to recruit or develop in-house expertise to
ensure that they are not misused, however.
Genesis of Altmetrics or Article-level Metrics for Measuring Efficacy of Scholarly Communications: Current Perspectives
Article-level metrics (ALMs), or altmetrics, have become a new trendsetter
in recent times for measuring the impact of scientific publications and
their social outreach to intended audiences. Popular social networks such
as Facebook, Twitter, and LinkedIn and social bookmarking services such as
Mendeley and CiteULike are nowadays widely used for communicating research
to larger transnational audiences. In 2012, the San Francisco Declaration
on Research Assessment was signed by scientific and research communities
across the world. This declaration gives preference to ALMs, or altmetrics,
over the traditional but flawed journal impact factor (JIF)-based
assessment of career scientists. The JIF does not consider impact or
influence beyond citation counts, and those counts are reflected only
through Thomson Reuters' Web of Science database. Furthermore, the JIF is
an indicator of the journal as a whole, not of an individual published
paper. Thus, altmetrics have now become an alternative metric for the
performance assessment of individual scientists and their scholarly
publications. This paper provides a glimpse of the genesis of altmetrics in
measuring the efficacy of scholarly communications and highlights available
altmetric tools, and the social platforms linked to them, which are widely
used in deriving altmetric scores of scholarly publications. The paper
argues that institutions and policy makers should pay more attention to
altmetrics-based indicators for evaluation purposes, but cautions that
proper safeguards and validations are needed before their adoption.
Studying the Relationship between Citations and Altmetrics of Top Chemistry Researchers' Articles
The main objective of the present research is to examine the relationship between the number of citations and the level of altmetrics, testing the validity of these new metrics, at least in terms of their alignment with the established citation index. The research population consists of articles by the top chemistry authors profiled in the Scopus citation database in 2010; the sample comprises the articles of the 20 top authors. The research is applied in purpose and descriptive-correlational in its method of data collection. Data extraction was performed using the Webometric Analyst software, and citation data were collected from Scopus. SPSS software was used to analyze the data.
The research findings show that the articles in question have little presence on social networks. In terms of presence and distribution, Mendeley, CiteULike, Twitter, Facebook, blogs, Google Plus, and news sources had the largest numbers of articles and altmetric counts, respectively. The results also show that Mendeley and Twitter have the strongest relationship with citations. Articles with at least one altmetric have a higher citation average (25.14%) than those with no altmetric (7.58%). Regarding the relationship with citations, the Spearman correlation test showed a strong correlation between the number of Mendeley readers, news mentions, and citations; a weak correlation between Twitter, CiteULike, and citations; and no meaningful relationship between Facebook posts, blog posts, Google Plus, and citations.
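A rank-correlation test of this kind is straightforward to reproduce. Below
is a minimal sketch using scipy's spearmanr on hypothetical per-article
counts; the data here are assumptions, whereas the study's actual data came
from Scopus and Webometric Analyst.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n = 400

    # Hypothetical per-article counts standing in for the collected data.
    citations = rng.negative_binomial(2, 0.1, n)
    metrics = {
        "mendeley_readers": rng.poisson(np.log1p(citations) * 3),  # built to correlate
        "tweets":           rng.poisson(0.5, n),                   # built to be noise
    }

    # Spearman rank correlation of each altmetric with citation counts.
    for name, counts in metrics.items():
        rho, p = spearmanr(counts, citations)
        print(f"{name}: rho={rho:.2f}, p={p:.3g}")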