Wikipedia and Westminster: Quality and Dynamics of Wikipedia Pages about UK Politicians
Wikipedia is a major source of information providing a large variety of
content online, trusted by readers from around the world. Readers go to
Wikipedia to get reliable information about different subjects, one of the most
popular being living people, and especially politicians. While a lot is known
about the general usage and information consumption on Wikipedia, less is known
about the life-cycle and quality of Wikipedia articles in the context of
politics. The aim of this study is to quantify and qualify content production
and consumption for articles about politicians, with a specific focus on UK
Members of Parliament (MPs). First, we analyze spatio-temporal patterns of
readers' and editors' engagement with MPs' Wikipedia pages, finding huge peaks
of attention during election times, related to signs of engagement on other
social media (e.g. Twitter). Second, we quantify editors' polarisation and find
that most editors specialize in a specific party and choose specific news
outlets as references. Finally, we observe that the average citation quality is
fairly high, with statements in 'Early life and career' sections missing citations most
often (18%).

Comment: A preprint of the accepted publication at the 31st ACM Conference on
Hypertext and Social Media (HT'20).
What increases (social) media attention: Research impact, author prominence or title attractiveness?
Do only major scientific breakthroughs hit the news and social media, or does
a 'catchy' title help to attract public attention? How strong is the connection
between the importance of a scientific paper and the (social) media attention
it receives? In this study we investigate these questions by analysing the
relationship between the observed attention and certain characteristics of
scientific papers from two major multidisciplinary journals: Nature
Communications (NC) and Proceedings of the National Academy of Sciences (PNAS).
We describe papers by features based on the linguistic properties of their
titles and centrality measures of their authors in their co-authorship network.
We identify linguistic features and collaboration patterns that might be
indicators for future attention, and are characteristic to different journals,
research disciplines, and media sources.

Comment: Paper presented at the 23rd International Conference on Science and
Technology Indicators (STI 2018) in Leiden, The Netherlands.
The pros and cons of the use of altmetrics in research assessment
© 2020 The Authors. Published by Levi Library Press. This is an open access article available under a Creative Commons licence. The published version can be accessed at the following link on the publisher's website: http://doi.org/10.29024/sar.10

Many indicators derived from the web have been proposed to supplement citation-based
indicators in support of research assessments. These indicators, often called altmetrics, are
available commercially from Altmetric.com and Elsevier’s Plum Analytics or can be collected
directly. These organisations can also deliver altmetrics to support institutional self-evaluations. The potential advantages of altmetrics for research evaluation are that they
may reflect important non-academic impacts and may appear before citations when an
article is published, thus providing earlier impact evidence. Their disadvantages often
include susceptibility to gaming, data sparsity, and difficulties translating the evidence into
specific types of impact. Despite these limitations, altmetrics have been widely adopted by
publishers, apparently to give authors, editors and readers insights into the level of interest
in recently published articles. This article summarises evidence for and against extending
the adoption of altmetrics to research evaluations. It argues that whilst systematically gathered altmetrics are inappropriate for important formal research evaluations, they can
play a role in some other contexts. They can be informative when evaluating research units
that rarely produce journal articles, when seeking to identify evidence of novel types of
impact during institutional or other self-evaluations, and when selected by individuals or
groups to support narrative-based non-academic claims. In addition, Mendeley reader
counts are uniquely valuable as early (mainly) scholarly impact indicators to replace
citations when gaming is not possible and early impact evidence is needed. Organisations
using alternative indicators need to recruit or develop in-house expertise to ensure that they
are not misused, however.
Case study: embedding 'A vision of Britain through time' as a resource for academic research and learning
As part of the 'JISC e-Content and Digitisation Programmes: Impact and Embedding of Digitised Resources', this case study explores the impacts of the A Vision of Britain Through Time website (http://www.visionofbritain.org.uk/) on academic research and learning. It is complemented by 'Impact Report on A Vision of Britain through Time 2004-10: Investigating the current use and impact of a popular digital resource for local history research'.