Online Misinformation: Challenges and Future Directions
Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions that have affected decision-making processes in many domains, including the economy, health, the environment, and elections. Misinformation and its generation, propagation, impact, and management are being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it affects many aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussions on the future design and development of algorithms, methodologies, and applications.
Overview of the CLEF–2022 CheckThat! Lab on Fighting the COVID-19 Infodemic and Fake News Detection
We describe the fifth edition of the CheckThat! lab, part of the 2022 Conference and Labs of the Evaluation Forum (CLEF). The lab evaluates technology supporting tasks related to factuality in multiple languages: Arabic, Bulgarian, Dutch, English, German, Spanish, and Turkish. Task 1 asks to identify relevant claims in tweets in terms of check-worthiness, verifiability, harmfulness, and attention-worthiness. Task 2 asks to detect previously fact-checked claims that could be relevant for fact-checking a new claim; it targets both tweets and political debates/speeches. Task 3 asks to predict the veracity of the main claim in a news article. CheckThat! was the most popular lab at CLEF-2022 in terms of team registrations: 137 teams. More than one-third (37%) of them actually participated: 18, 7, and 26 teams submitted 210, 37, and 126 official runs for Tasks 1, 2, and 3, respectively.
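To make the Task 1 check-worthiness setting concrete, the sketch below trains a deliberately simple TF-IDF and logistic-regression classifier to flag check-worthy tweets. It is a hypothetical illustration of the task framing only, not any participant's system; the toy tweets and the binary label convention are invented.

```python
# Minimal check-worthiness baseline in the spirit of CheckThat! 2022 Task 1.
# The inline tweets and labels are invented toy data, not the lab's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "The new vaccine causes more deaths than the disease itself.",
    "Good morning everyone, have a great day!",
    "Unemployment fell by 3% last quarter, according to the ministry.",
    "Can't wait for the weekend hiking trip.",
]
labels = [1, 0, 1, 0]  # 1 = check-worthy, 0 = not check-worthy

# Word n-grams plus a linear classifier: a common low-resource baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["Officials claim the election results were altered overnight."]))
```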
Automated Fact-Checking for Assisting Human Fact-Checkers
The reporting and analysis of current events around the globe has expanded from professional, editor-led journalism all the way to citizen journalism. Politicians and other key players enjoy direct access to their audiences through social media, bypassing the filters of official cables or traditional media. However, the many advantages of free speech and direct communication are dimmed by the misuse of the media to spread inaccurate or misleading claims. These phenomena have led to the modern incarnation of the fact-checker: a professional whose main aim is to examine claims using available evidence and to assess their veracity. As in other text forensics tasks, the sheer amount of information available makes the work of the fact-checker harder. With this in mind, and starting from the perspective of the professional fact-checker, we survey the available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor. These include identifying claims worth fact-checking; detecting relevant previously fact-checked claims; retrieving relevant evidence to fact-check a claim; and actually verifying a claim. In each case, we pay attention to the challenges for future work and the potential impact on real-world fact-checking.
Comment: fact-checking, fact-checkers, check-worthiness, detecting previously fact-checked claims, evidence retrieval
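The four steps listed above can be read as a simple pipeline. The following skeleton is only a structural sketch under that reading: the function names, type signatures, and placeholder heuristics are invented for illustration and do not correspond to any specific system from the survey.

```python
# Structural sketch of the surveyed fact-checking pipeline. Every function body is a
# deliberately naive placeholder; real systems replace each stage with learned models.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    label: str            # e.g. "supported", "refuted", "not enough evidence"
    evidence: list[str]

def detect_checkworthy_claims(text: str) -> list[str]:
    # Placeholder: treat any sentence containing a digit as a factual claim.
    return [s.strip() for s in text.split(".") if any(ch.isdigit() for ch in s)]

def find_previously_factchecked(claim: str, archive: dict[str, str]) -> str | None:
    # Placeholder: exact-match lookup in an archive mapping claims to article URLs.
    return archive.get(claim)

def retrieve_evidence(claim: str, corpus: list[str]) -> list[str]:
    # Placeholder: keep documents sharing at least two words with the claim.
    words = set(claim.lower().split())
    return [doc for doc in corpus if len(words & set(doc.lower().split())) >= 2]

def verify(claim: str, evidence: list[str]) -> Verdict:
    # Placeholder: defer to a human reviewer whenever any evidence is found.
    label = "not enough evidence" if not evidence else "needs manual review"
    return Verdict(claim, label, evidence)

def fact_check(text: str, archive: dict[str, str], corpus: list[str]) -> list[Verdict]:
    verdicts = []
    for claim in detect_checkworthy_claims(text):
        if find_previously_factchecked(claim, archive) is None:
            verdicts.append(verify(claim, retrieve_evidence(claim, corpus)))
    return verdicts
```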
From Chaos to Clarity: Claim Normalization to Empower Fact-Checking
With the rise of social media, users are exposed to many misleading claims. However, the pervasive noise inherent in these posts presents a challenge in identifying the precise and prominent claims that require verification. Extracting the important claims from such posts is arduous and time-consuming, yet it is an underexplored problem. Here, we aim to bridge this gap. We introduce a novel task, Claim Normalization (aka ClaimNorm), which aims to decompose complex and noisy social media posts into more straightforward and understandable forms, termed normalized claims. We propose CACN, a pioneering approach that leverages chain-of-thought prompting and claim check-worthiness estimation, mimicking human reasoning processes, to comprehend intricate claims. Moreover, we capitalize on the in-context learning capabilities of large language models to provide guidance and to improve claim normalization. To evaluate the effectiveness of our proposed model, we meticulously compile CLAN, a comprehensive real-world dataset comprising more than 6k social media posts alongside their respective normalized claims. Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures. Finally, our rigorous error analysis validates CACN's capabilities and highlights its pitfalls.
Comment: Accepted at Findings of EMNLP 2023
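To illustrate the in-context-learning part of this idea, here is a minimal, hypothetical sketch: a few-shot prompt built from pairs of noisy posts and normalized claims, passed to an arbitrary LLM. The demonstration pairs are invented and the `generate` callable is a stand-in for whatever model API is available; this is not CACN itself, which additionally uses chain-of-thought reasoning and check-worthiness estimation.

```python
# Sketch of claim normalization via few-shot in-context learning. The demonstrations
# below are invented examples, and `generate` is a placeholder for any LLM call.
DEMONSTRATIONS = [
    ("BREAKING!!! they're saying 5g towers r what's REALLY making ppl sick, wake up",
     "5G towers cause illness in people."),
    ("so my uncle swears the city water has been mixed w/ industrial waste since 2019 smh",
     "The city's water supply has been contaminated with industrial waste since 2019."),
]

def build_prompt(post: str) -> str:
    parts = ["Rewrite each social media post as a single clear, verifiable claim.\n"]
    for noisy, normalized in DEMONSTRATIONS:
        parts.append(f"Post: {noisy}\nNormalized claim: {normalized}\n")
    parts.append(f"Post: {post}\nNormalized claim:")
    return "\n".join(parts)

def normalize_claim(post: str, generate) -> str:
    # `generate` is any callable mapping a prompt string to generated text.
    return generate(build_prompt(post)).strip()
```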
Human-centered NLP Fact-checking: Co-Designing with Fact-checkers using Matchmaking for AI
A key challenge in professional fact-checking is its limited scalability relative to the magnitude of false information. While many Natural Language Processing (NLP) tools have been proposed to enhance fact-checking efficiency and scalability, both academic research and fact-checking organizations report limited adoption of such tooling due to insufficient alignment with fact-checker practices, values, and needs. To address this gap, we investigate a co-design method, Matchmaking for AI, which brings fact-checkers, designers, and NLP researchers together to collaboratively discover which fact-checker needs should be addressed by technology, and how. Our co-design sessions with 22 professional fact-checkers yielded a set of 11 novel design ideas. These assist in information searching, processing, and writing tasks for efficient and personalized fact-checking; help fact-checkers proactively prepare for future misinformation; monitor their potential biases; and support collaboration within their organizations. Our work offers implications for human-centered fact-checking research and practice and for AI co-design research.
CrowdChecked: Detecting Previously Fact-Checked Claims in Social Media
While there has been substantial progress in developing systems to automate fact-checking, they still lack credibility in the eyes of users. Thus, an interesting approach has emerged: to perform automatic fact-checking by verifying whether an input claim has been previously fact-checked by professional fact-checkers, and to return an article that explains their decision. This is a sensible approach, as people trust manual fact-checking and as many claims are repeated multiple times. Yet, a major issue when building such systems is the small number of known pairs of tweets and verifying articles available for training. Here, we aim to bridge this gap by making use of crowd fact-checking, i.e., mining claims in social media for which users have responded with a link to a fact-checking article. In particular, we mine a large-scale collection of 330,000 tweets, each paired with a corresponding fact-checking article. We further propose an end-to-end framework to learn from this noisy data based on modified self-adaptive training, in a distant supervision scenario. Our experiments on the CLEF'21 CheckThat! test set show improvements over the state of the art by two points absolute. Our code and datasets are available at https://github.com/mhardalov/crowdchecked-claims
Comment: Accepted to AACL-IJCNLP 2022 (Main Conference)
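For context, the core retrieval step behind such systems can be sketched as ranking fact-checking articles by embedding similarity to an input tweet. The snippet below shows only that generic step with an off-the-shelf sentence encoder; the model name and toy examples are assumptions, and it does not reproduce the paper's distant-supervision setup or modified self-adaptive training.

```python
# Generic "detect previously fact-checked claims" retrieval step: embed the tweet and the
# candidate fact-checking articles, then rank by cosine similarity. The encoder choice
# and the toy examples are illustrative assumptions, not the paper's trained model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

articles = [
    "Fact-check: No, drinking hot water does not kill the coronavirus.",
    "Fact-check: The photo of a shark swimming on a flooded highway is digitally altered.",
]
tweet = "saw a pic of a shark swimming down the interstate after the storm, insane!!"

article_emb = model.encode(articles, convert_to_tensor=True)
tweet_emb = model.encode(tweet, convert_to_tensor=True)

scores = util.cos_sim(tweet_emb, article_emb)[0]  # similarity of the tweet to each article
best = int(scores.argmax())
print(articles[best], float(scores[best]))
```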