145 research outputs found
Increasing our understanding of altmetrics: identifying factors that are driving both citation and altmetric counts
This study examines a range of factors associated with a paper's eventual citation and altmetric counts. The factors, including research collaboration, institutional impact, journal impact, journal open-access status, and field type, are modelled in association with citation counts, Twitter posts, Facebook posts and Mendeley readers. The results show that the factors driving increased citations differ from those driving increased altmetric events, and the altmetric events also differ from one another on a few factors. The findings can contribute to the continued development of theoretical models and methods for capturing, interpreting, and understanding altmetric events, and can also help research policy makers identify the important factors driving altmetric events.
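The abstract does not name the estimator, but count outcomes such as citations and Mendeley readers are commonly compared with separate count regressions. A minimal sketch of how such a comparison could be set up in Python with statsmodels; the input file and column names are hypothetical, not the study's actual variables:

```python
# Sketch: compare which factors drive citations vs. Mendeley readers.
# The data file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")  # hypothetical input file

factors = "n_authors + journal_impact + institution_impact + is_open_access"

# One negative binomial regression per outcome; overdispersed counts are
# typical of both citations and readership data.
cites = smf.negativebinomial(f"citations ~ {factors}", data=papers).fit()
saves = smf.negativebinomial(f"mendeley_readers ~ {factors}", data=papers).fit()

# Coefficients that differ in sign or size between the two fits point to
# different drivers for citations and for altmetric events.
print(cites.params)
print(saves.params)
```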
Co-saved, co-tweeted, and co-cited networks
This is an accepted manuscript of an article published by Wiley-Blackwell in Journal of the Association for Information Science and Technology on 14/05/2018, available online: https://doi.org/10.1002/asi.24028
The accepted version of the publication may differ from the final published version.

Counts of tweets and Mendeley user libraries have been proposed as altmetric alternatives to citation counts for the impact assessment of articles. Although both have been investigated to discover whether they correlate with article citations, it is not known whether users tend to tweet or save (in Mendeley) the same kinds of articles that they cite. In response, this article compares pairs of articles that are tweeted, saved to a Mendeley library, or cited by the same user, although possibly a different user for each source. The study analyzes 1,131,318 articles published in 2012, applying minimum thresholds of 10 tweets, 100 Mendeley saves, and 10 citations. The results show surprisingly small overall overlaps between the three phenomena. The importance of journals on Twitter and the presence of many bots at different levels of activity suggest that the site has little value for impact altmetrics. The moderate differences between patterns of saving and citation suggest that Mendeley can be used for some types of impact assessment, but sensitivity to the underlying differences is needed.
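A sketch of the core co-usage idea: two articles form a pair when the same user tweets, saves, or cites both, and the resulting pair sets can then be compared for overlap. The tiny input dicts and the Jaccard measure below are illustrative stand-ins, not necessarily the paper's exact method:

```python
# Illustrative overlap between co-tweeted, co-saved, and co-cited article
# pairs; the input dicts are invented stand-ins for real usage data.
from itertools import combinations

def co_pairs(user_to_articles):
    """All unordered article pairs that share at least one user."""
    pairs = set()
    for articles in user_to_articles.values():
        pairs.update(combinations(sorted(articles), 2))
    return pairs

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

tweeters = {"u1": {"doi:A", "doi:B"}, "u2": {"doi:B", "doi:C"}}
mendeley_users = {"m1": {"doi:A", "doi:B", "doi:C"}}
citers = {"c1": {"doi:A", "doi:C"}}

co_tweeted, co_saved, co_cited = map(co_pairs, (tweeters, mendeley_users, citers))
print("tweeted vs cited overlap:", jaccard(co_tweeted, co_cited))
print("saved vs cited overlap:  ", jaccard(co_saved, co_cited))
```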
Measuring Social Media Activity of Scientific Literature: An Exhaustive Comparison of Scopus and Novel Altmetrics Big Data
This paper measures the social media activity of 15 broad scientific disciplines indexed in the Scopus database using Altmetric.com data. First, the presence of Altmetric.com data in the Scopus database is investigated, overall and across disciplines. Second, the correlation between bibliometric and altmetric indices is examined using Spearman correlation. Third, a zero-truncated negative binomial model is used to determine the association of various factors with increasing or decreasing citations. Lastly, the effectiveness of altmetric indices in identifying publications with high citation impact is evaluated using the Area Under the Curve (AUC), an application of the receiver operating characteristic. Results indicate a rapid increase in the presence of Altmetric.com data in Scopus, from 10.19% in 2011 to 20.46% in 2015. The zero-truncated negative binomial model measures the extent to which different bibliometric and altmetric factors contribute to citation counts. Blog count appears to be the most important factor, increasing the number of citations by 38.6% in Health Professions and Nursing, followed by Twitter count, which increases citations by 8% in Physics and Astronomy. Interestingly, both blog count and Twitter count are associated with increased citations across all fields. While the correlation between bibliometric and altmetric indices was weakly positive, the results show that altmetric indices can be good discriminators of highly cited publications, with an encouraging AUC of 0.725 when total altmetric count is used to identify highly cited publications. Overall, the findings suggest that altmetrics can help distinguish highly cited publications.

Comment: 34 pages, 3 figures, 15 tables
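A minimal sketch of the two headline analyses, Spearman correlation and AUC, using simulated counts in place of the real Scopus/Altmetric.com data (scipy and scikit-learn; all numbers are illustrative):

```python
# Sketch: Spearman correlation between citation and altmetric counts, and
# AUC for discriminating highly cited papers. Data are simulated stand-ins.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
citations = rng.negative_binomial(2, 0.1, size=1000)      # stand-in counts
altmetric_total = citations + rng.poisson(5, size=1000)   # loosely related

rho, p = spearmanr(citations, altmetric_total)
print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")

# "Highly cited" = top 10% by citations; score papers by altmetric count.
highly_cited = (citations >= np.quantile(citations, 0.9)).astype(int)
print("AUC =", roc_auc_score(highly_cited, altmetric_total))
```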
Knowledge Mobilization Strategies: A Qualitative Study
The following content has been withdrawn at the request of the authors on 19 May 2021: Didegah, A., & Didegah, F. (2020). Knowledge Mobilization Strategies: A Qualitative Study. Informaatiotutkimus, 39(2–3), 50–53. https://doi.org/10.23978/inf.9906
The effect of collaborators on institutions’ scientific impact
The effect of collaborators on institutions' scientific impact was examined for 81 institutions with different degrees of impact and collaboration. Collaborators, including both core and peripheral collaborators, not only cite each other more than non-collaborators do, but also cite each other faster, even when self-citations are ignored. Although high-impact institutions and more collaborative institutions receive more citations from their collaborators, the number of these citations seems to increase only up to a certain point: there is, for example, only a slight difference between top and middle collaborative institutions, and only a small fraction of collaborators fail to cite back the papers of these two groups of institutions. The benefit of collaboration varies with the type of collaborators, institutions, papers and citers, and with the publication year of the cited documents. For example, the effect of collaboration decreases as an institution's level of impact increases. Hence, collaborating more does not directly imply obtaining higher impact.
Which Type of Research is Cited More Often in Wikipedia? A Case Study of PubMed Research
This study examines the characteristics of medical articles cited in Wikipedia and compares them with a sample of medical articles not cited on the platform. The aim is to determine why some articles are selected as reliable sources for Wikipedia and others are not. The characteristics studied are document type, the article's open-access status, topic, F1000 class and F1000 count, tweet count, and news count. The findings show a similar document-type profile for the cited and uncited sets, with articles, reviews and editorial materials the most visible in both. While the articles cover a broad range of topics, the top three topics are the same in both sets. The results also reveal that Wikipedia favors OA articles, although a large number of cited articles are non-OA. Finally, significant although weak correlations are found between Wikipedia citation counts and F1000, tweet and news counts: F1000 and tweet counts correlate negatively with Wikipedia citation counts, while news counts show a positive correlation, although it is the weakest of the three.
Factors Associating with the Future Citation Impact of Published Articles: A Statistical Modelling Approach
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.

This study investigates a range of metrics available when an article is published to see which associate with its eventual citation count. The purposes are to contribute to developing a citation model and to inform policymakers about which predictor variables associate with citations in different fields of science. Despite the complex nature of reasons for citation, some attributes of a paper's authors, journal, references, abstract, field, country and institutional affiliations, and funding source are known to associate with its citation impact. This thesis investigates some common factors previously assessed and some new factors: journal author internationality; journal citing-author internationality; cited journal author internationality; cited journal citing-author internationality; impact of the author(s), publishing journal, affiliated institution, and affiliated country; length of paper, abstract and title; number of references; size of the field; number of authors, institutions and countries; abstract readability; and research funding. A sample of articles and proceedings papers in the 22 Essential Science Indicators subject fields from the Web of Science constitutes the research data set. Using negative binomial hurdle models, this study simultaneously assesses the above factors using large-scale data. The study found very similar behaviour across subject categories and broad areas in terms of the factors associating with more citations. Journal and reference factors are the most effective determinants of future citation counts in most subject domains. Individual and international teamwork give a citation advantage in the majority of subject areas, but inter-institutional teamwork seems not to contribute to citation impact.
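A hurdle model treats whether a paper is cited at all and how often it is cited, given at least one citation, as two separate processes. A simplified two-part sketch follows; the column names are hypothetical, and a full negative binomial hurdle model would fit a zero-truncated negative binomial to the positive counts rather than the ordinary one used here:

```python
# Sketch of a two-part "hurdle" analysis: a logit for clearing the zero
# hurdle, then a count model on the cited subset. Column names are
# hypothetical; the truncation adjustment is omitted for brevity.
import pandas as pd
import statsmodels.formula.api as smf

papers = pd.read_csv("papers.csv")  # hypothetical data file
factors = "journal_impact + n_references + n_authors + abstract_readability"

papers["cited"] = (papers["citations"] > 0).astype(int)

# Part 1: does the paper receive any citations at all?
hurdle = smf.logit(f"cited ~ {factors}", data=papers).fit()

# Part 2: how many citations, among papers cited at least once?
cited = papers[papers["citations"] > 0]
counts = smf.negativebinomial(f"citations ~ {factors}", data=cited).fit()

print(hurdle.summary())
print(counts.summary())
```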
The Accuracy of Confidence Intervals for Field Normalised Indicators
This is an accepted manuscript of an article published by Elsevier in Journal of Informetrics on 07/04/2017, available online: https://doi.org/10.1016/j.joi.2017.03.004
The accepted version of the publication may differ from the final published version.

When comparing the average citation impact of research groups, universities and countries, field normalisation reduces the influence of discipline and time. Confidence intervals for these indicators can help with attempts to infer whether differences between sets of publications are due to chance factors. Although both bootstrapping and formulae have been proposed for these, their accuracy is unknown. In response, this article uses simulated data to systematically compare the accuracy of confidence limits in the simplest possible case: a single field and year. The results suggest that the MNLCS (Mean Normalised Log-transformed Citation Score) confidence interval formula is conservative for large groups but almost always safe, whereas bootstrap MNLCS confidence intervals tend to be accurate but can be unsafe for smaller world or group sample sizes. In contrast, bootstrap MNCS (Mean Normalised Citation Score) confidence intervals can be very unsafe, although their accuracy increases with sample size.
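As a sketch of the bootstrap side of this comparison, the following computes a percentile-bootstrap confidence interval for the MNLCS of a simulated group against a simulated field/year "world" set. The data are invented, and resampling both sets, as done here, is only one of several possible bootstrap designs:

```python
# Percentile-bootstrap CI for the MNLCS of a group vs. a field/year world
# set; all citation counts below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(1)
world = rng.negative_binomial(1, 0.05, size=5000)  # field/year citation counts
group = rng.negative_binomial(1, 0.04, size=200)   # one group's citation counts

def mnlcs(group_c, world_c):
    # Mean Normalised Log-transformed Citation Score: the group mean of
    # ln(1 + citations) divided by the world mean of the same quantity.
    return np.log1p(group_c).mean() / np.log1p(world_c).mean()

boots = [
    mnlcs(rng.choice(group, size=group.size, replace=True),
          rng.choice(world, size=world.size, replace=True))
    for _ in range(2000)
]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"MNLCS = {mnlcs(group, world):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```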
