Building automated vandalism detection tools for Wikidata
Wikidata, like Wikipedia, is a knowledge base that anyone can edit. This open
collaboration model is powerful in that it reduces barriers to participation
and allows a large number of people to contribute. However, it exposes the
knowledge base to the risk of vandalism and low-quality contributions. In this
work, we build on past work detecting vandalism in Wikipedia to detect
vandalism in Wikidata. This work is novel in that identifying damaging changes
in a structured knowledge-base requires substantially different feature
engineering work than in a text-based wiki like Wikipedia. We also discuss the
utility of these classifiers for reducing the overall workload of vandalism
patrollers in Wikidata. We describe a machine classification strategy that is
able to catch 89% of vandalism while reducing patrollers' workload by 98%, by
drawing lightly from contextual features of an edit and heavily from the
characteristics of the user making the edit.
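A minimal sketch of how such a feature-based edit classifier might be put together is given below; the feature names, data split, and gradient-boosted learner are illustrative assumptions, not the authors' actual Wikidata feature set or model.

```python
# Illustrative sketch only: a feature-based edit classifier in the spirit of the
# approach above. Feature names and data are hypothetical placeholders.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Each edit is described mostly by user features, plus a few contextual ones.
FEATURES = [
    "user_is_anonymous",      # user characteristics (weighted heavily)
    "user_edit_count",
    "user_account_age_days",
    "edit_touches_label",     # contextual features of the edit (weighted lightly)
    "edit_changed_sitelink",
]

def train_vandalism_classifier(X, y):
    """Fit a classifier on edit feature vectors; y is 1 for vandalism, 0 for good."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = GradientBoostingClassifier()
    clf.fit(X_train, y_train)
    # Patrollers would review only edits scoring above a chosen threshold,
    # trading recall of vandalism against reviewing workload.
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```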
A Corpus of Sentence-level Revisions in Academic Writing: A Step towards Understanding Statement Strength in Communication
The strength with which a statement is made can have a significant impact on
the audience. For example, international relations can be strained by how the
media in one country describes an event in another; and papers can be rejected
because they overstate or understate their findings. It is thus important to
understand the effects of statement strength. A first step is to be able to
distinguish between strong and weak statements. However, even this problem is
understudied, partly due to a lack of data. Since strength is inherently
relative, revisions of texts that make claims are a natural source of data on
strength differences. In this paper, we introduce a corpus of sentence-level
revisions from academic writing. We also describe insights gained from our
annotation efforts for this task.
Comment: 6 pages, to appear in Proceedings of ACL 2014 (short paper).
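As an illustration of the general idea of mining revisions for strength differences, the following sketch pairs similar sentences from an original and a revised draft; the difflib-based alignment and similarity threshold are assumptions for demonstration, not the authors' corpus-construction pipeline.

```python
# Illustrative sketch: pairing sentences from an original and a revised draft so
# that strength differences can be studied. Not the authors' actual pipeline.
import difflib

def sentence_revision_pairs(original_sentences, revised_sentences, cutoff=0.6):
    """Yield (original, revised) pairs whose wording is similar but not identical."""
    for orig in original_sentences:
        match = difflib.get_close_matches(orig, revised_sentences, n=1, cutoff=cutoff)
        if match and match[0] != orig:
            yield orig, match[0]

pairs = list(sentence_revision_pairs(
    ["Our results prove the hypothesis."],
    ["Our results suggest the hypothesis may hold."],
))
print(pairs)  # one pair whose revision weakens the original claim
```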
A matter of words: NLP for quality evaluation of Wikipedia medical articles
Automatic quality evaluation of Web information is a task with many fields of
application and of great relevance, especially in critical domains like the
medical one. We start from the intuition that the quality of the content of
medical Web documents is affected by domain-specific features: the use of a
specific vocabulary (Domain Informativeness), the adoption of specific codes
(like those used in the infoboxes of Wikipedia articles), and the type of
document (e.g., historical or technical ones). In this paper, we propose to
leverage such domain features to improve the evaluation of Wikipedia medical
articles. In particular, we evaluate the articles with an "actionable" model,
whose features are related to the content of the articles, so that the model
can also directly suggest strategies for improving the quality of a given
article. We rely on Natural Language Processing (NLP) and dictionary-based
techniques to extract the biomedical concepts in a text. We prove the
effectiveness of our approach by classifying the medical articles of the
Wikipedia Medicine Portal, which had previously been manually labeled by the
Wiki Project team. The results of our experiments confirm that, by considering
domain-oriented features, it is possible to obtain noticeable improvements over
existing solutions, mainly for those articles that other approaches classify
less accurately. Besides being interesting in their own right, the results call
for further research on domain-specific features suitable for Web data quality
assessment.
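The sketch below illustrates one plausible domain-oriented feature in the spirit of Domain Informativeness: the share of an article's tokens that appear in a biomedical dictionary. The tiny dictionary and tokenizer are placeholders, not the lexicons or NLP pipeline used in the paper.

```python
# Illustrative sketch of a single domain-oriented feature: the fraction of an
# article's tokens found in a biomedical dictionary. The dictionary below is a
# placeholder, not the lexicon used in the paper.
import re

BIOMEDICAL_TERMS = {"diagnosis", "etiology", "pathology", "prognosis", "lesion"}

def domain_informativeness(text: str) -> float:
    """Fraction of word tokens that belong to the domain vocabulary."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    in_domain = sum(1 for t in tokens if t in BIOMEDICAL_TERMS)
    return in_domain / len(tokens)

print(domain_informativeness("The etiology and prognosis of the lesion are unclear."))
```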
Neural Based Statement Classification for Biased Language
Biased language commonly occurs around topics of a controversial nature,
stirring disagreement between the parties involved in a discussion. This is
because the understanding and use of phrases, and the stances they express, are
cohesive within a particular group; such cohesiveness, however, does not hold
across groups.
In collaborative environments or environments where impartial language is
desired (e.g. Wikipedia, news media), statements and the language therein
should represent the involved parties equally and be neutrally phrased. Biased
language is introduced through inflammatory words or phrases, or through
statements that may be incorrect or one-sided, thus violating such consensus.
In this work, we focus on the specific case of phrasing bias, which may be
introduced through specific inflammatory words or phrases in a statement. For
this purpose, we propose an approach that relies on recurrent neural networks
to capture the inter-dependencies between the words that introduce bias in a
phrase.
We perform a thorough experimental evaluation, where we show the advantages
of a neural-based approach over competitors that rely on word lexicons and
other hand-crafted features in detecting biased language. We are able to
distinguish biased statements with a precision of P=0.92, thus significantly
outperforming baseline models with an improvement of over 30%. Finally, we
release the largest corpus of statements annotated for biased language.
Comment: The Twelfth ACM International Conference on Web Search and Data Mining, February 11--15, 2019, Melbourne, VIC, Australia.
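The following sketch shows what a recurrent statement classifier of this kind could look like; the GRU architecture, vocabulary size, and other hyperparameters are illustrative assumptions rather than the paper's exact model.

```python
# Illustrative sketch of a recurrent classifier for statement-level bias.
# Vocabulary size, embedding size, and hyperparameters are placeholders,
# not the paper's configuration.
import torch
import torch.nn as nn

class BiasRNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)  # probability that a statement is biased

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embedded = self.embed(token_ids)
        _, last_hidden = self.rnn(embedded)               # (1, batch, hidden_dim)
        return torch.sigmoid(self.out(last_hidden[-1]))   # (batch, 1)

model = BiasRNN()
dummy_batch = torch.randint(1, 20000, (4, 12))  # 4 statements, 12 tokens each
print(model(dummy_batch).shape)  # torch.Size([4, 1])
```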
Calculating and Presenting Trust in Collaborative Content
Collaborative functionality is increasingly prevalent in Internet applications. Such functionality permits individuals to add -- and sometimes modify -- web content, often with minimal barriers to entry. Ideally, large bodies of knowledge can be amassed and shared in this manner. However, such software also provides a medium for biased individuals, spammers, and nefarious persons to operate. By computing trust/reputation for participating agents and/or the content they generate, one can identify quality contributions.
In this work, we survey the state-of-the-art for calculating trust in collaborative content. In particular, we examine four proposals from literature based on: (1) content persistence, (2) natural-language processing, (3) metadata properties, and (4) incoming link quantity. Though each technique can be applied broadly, Wikipedia provides a focal point for discussion. Finally, having critiqued how trust values are calculated, we analyze how the presentation of these values can benefit end-users and application security.
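As a concrete illustration of the content-persistence idea, the sketch below scores an author's contribution by how much of it survives later revisions; this word-survival measure is a simplification, not the exact metric of any system covered in the survey.

```python
# Illustrative sketch of content persistence: text that survives later revisions
# lends reputation to its author. A simplified word-survival measure, not the
# exact metric of any surveyed system.
def persistence_score(contributed_words, later_revisions):
    """Average fraction of an author's words still present in each later revision."""
    contributed = set(contributed_words)
    if not contributed or not later_revisions:
        return 0.0
    survival = [
        len(contributed & set(revision)) / len(contributed)
        for revision in later_revisions
    ]
    return sum(survival) / len(survival)

edit = "the moon landing occurred in 1969".split()
later = ["the moon landing occurred in 1969 and was televised".split(),
         "the first moon landing took place in 1969".split()]
print(persistence_score(edit, later))  # high score: most words persisted
```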
The Evolution of Wikipedia's Norm Network
Social norms have traditionally been difficult to quantify. In any particular
society, their sheer number and complex interdependencies often limit a
system-level analysis. One exception is that of the network of norms that
sustain the online Wikipedia community. We study the fifteen-year evolution of
this network using the interconnected set of pages that establish, describe,
and interpret the community's norms. Despite Wikipedia's reputation for
ad hoc governance, we find that its normative evolution is highly
conservative. The earliest users create norms that both dominate the network
and persist over time. These core norms govern both content and interpersonal
interactions using abstract principles such as neutrality, verifiability, and
"assume good faith". As the network grows, norm neighborhoods decouple
topologically from each other, while increasing in semantic coherence. Taken
together, these results suggest that the evolution of Wikipedia's norm network
is akin to bureaucratic systems that predate the information age.
Comment: 22 pages, 9 figures. Matches published version. Data available at http://bit.ly/wiki_nor
Building linguistic corpora from Wikipedia articles and discussions
Wikipedia is a valuable resource, useful as a linguistic corpus or a dataset for many kinds of research. We built corpora from Wikipedia articles and talk pages in the I5 format, a TEI customisation used in the German Reference Corpus (Deutsches Referenzkorpus - DeReKo). Our approach is a two-stage conversion combining parsing with the Sweble parser and transformation with XSLT stylesheets. The conversion approach is able to successfully generate rich and valid corpora regardless of language. We also introduce a method to segment user contributions in talk pages into postings.
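A minimal sketch of the second, XSLT-based stage of such a pipeline is given below, using lxml; the file names and stylesheet are hypothetical, and the real conversion operates on Sweble parser output with the project's own I5/TEI stylesheets.

```python
# Illustrative sketch of the XSLT stage of such a pipeline, using lxml.
# File names and the stylesheet are placeholders; the actual conversion applies
# the project's I5/TEI stylesheets to Sweble parser output.
from lxml import etree

def transform_to_i5(parsed_article_xml: str, stylesheet_path: str) -> bytes:
    """Apply an XSLT stylesheet to an XML rendering of a parsed wiki page."""
    xslt = etree.XSLT(etree.parse(stylesheet_path))
    source = etree.fromstring(parsed_article_xml)
    result = xslt(source)
    return etree.tostring(result, pretty_print=True, encoding="utf-8")

# Hypothetical usage:
# i5_doc = transform_to_i5(open("article.xml").read(), "wiki_to_i5.xsl")
```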