Joint RNN Model for Argument Component Boundary Detection
Argument Component Boundary Detection (ACBD) is an important sub-task in
argumentation mining; it aims at identifying the word sequences that constitute
argument components, and is usually considered as the first sub-task in the
argumentation mining pipeline. Existing ACBD methods heavily depend on
task-specific knowledge, and require considerable human efforts on
feature-engineering. To tackle these problems, in this work, we formulate ACBD
as a sequence labeling problem and propose a variety of Recurrent Neural
Network (RNN) based methods, which do not use domain specific or handcrafted
features beyond the relative position of the sentence in the document. In
particular, we propose a novel joint RNN model that can predict whether
sentences are argumentative or not, and use the predicted results to more
precisely detect the argument component boundaries. We evaluate our techniques
on two corpora from two different genres; results suggest that our joint RNN
model obtains state-of-the-art performance on both datasets.
Comment: 6 pages, 3 figures, submitted to IEEE SMC 201
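The sequence-labeling formulation above can be illustrated with a minimal BIO-style decoder. The tag names (`B-ARG`, `I-ARG`, `O`) and the example are illustrative assumptions, not the paper's exact tag set or RNN tagger; the sketch only shows how per-token labels, such as a joint RNN would predict, turn into argument-component boundaries.

```python
# Sketch: recover argument-component spans from per-token BIO labels
# (hypothetical tag names; the paper's tag set may differ).

def decode_bio(tokens, labels):
    """Collect (start, end) token spans labeled as argument components."""
    spans, start = [], None
    for i, label in enumerate(labels):
        if label == "B-ARG":            # a new component begins
            if start is not None:
                spans.append((start, i))
            start = i
        elif label == "I-ARG":          # component continues
            if start is None:           # tolerate a stray I- without a B-
                start = i
        else:                           # "O": outside any component
            if start is not None:
                spans.append((start, i))
                start = None
    if start is not None:
        spans.append((start, len(labels)))
    return spans

tokens = ("I think school uniforms reduce bullying "
          "because they hide income differences").split()
labels = ["O", "O", "B-ARG", "I-ARG", "I-ARG", "I-ARG",
          "O", "B-ARG", "I-ARG", "I-ARG", "I-ARG"]
spans = decode_bio(tokens, labels)           # [(2, 6), (7, 11)]
components = [" ".join(tokens[s:e]) for s, e in spans]
```

The joint model in the paper additionally predicts sentence-level argumentativeness; in this view, that prediction simply gates which sentences are passed to the token-level decoder.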
MythQA: Query-Based Large-Scale Check-Worthy Claim Detection through Multi-Answer Open-Domain Question Answering
Check-worthy claim detection aims at identifying plausible misinformation and
providing it to downstream fact-checking systems or human experts to check.
This is a crucial
step toward accelerating the fact-checking process. Many efforts have been put
into how to identify check-worthy claims from a small scale of pre-collected
claims, but how to efficiently detect check-worthy claims directly from a
large-scale information source, such as Twitter, remains underexplored. To fill
this gap, we introduce MythQA, a new multi-answer open-domain question
answering (QA) task that involves contradictory stance mining for query-based
large-scale check-worthy claim detection. The idea behind this is that
contradictory claims are a strong indicator of misinformation that merits
scrutiny by the appropriate authorities. To study this task, we construct
TweetMythQA, an evaluation dataset containing 522 factoid multi-answer
questions based on controversial topics. Each question is annotated with
multiple answers. Moreover, we collect relevant tweets for each distinct
answer, then classify them into three categories: "Supporting", "Refuting", and
"Neutral". In total, we annotated 5.3K tweets. Contradictory evidence is
collected for all answers in the dataset. Finally, we present a baseline system
for MythQA and evaluate existing NLP models for each system component using the
TweetMythQA dataset. We provide initial benchmarks and identify key challenges
for future models to improve upon. Code and data are available at:
https://github.com/TonyBY/Myth-QA
Comment: Accepted by SIGIR 202
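The core signal the abstract describes, contradictory stances as an indicator of a check-worthy claim, can be sketched over the three annotation categories in TweetMythQA. The sample records and field names below are hypothetical, not drawn from the released dataset.

```python
from collections import defaultdict

# Hypothetical mini-sample in the spirit of TweetMythQA: each tweet is
# annotated with the answer it addresses and one of the three stance labels.
annotated = [
    {"answer": "answer A", "stance": "Supporting"},
    {"answer": "answer A", "stance": "Refuting"},
    {"answer": "answer A", "stance": "Neutral"},
    {"answer": "answer B", "stance": "Supporting"},
]

def checkworthy_answers(tweets):
    """An answer backed by both supporting and refuting tweets carries
    contradictory stances -- the signal for a check-worthy claim."""
    stances = defaultdict(set)
    for t in tweets:
        stances[t["answer"]].add(t["stance"])
    return [a for a, s in stances.items() if {"Supporting", "Refuting"} <= s]

flagged = checkworthy_answers(annotated)     # ['answer A']
```

Answer B, with only supporting evidence, is not flagged; only answers with genuinely contradictory evidence would be forwarded for fact-checking.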
Exploring the Potential of Large Language Models in Computational Argumentation
Computational argumentation has become an essential tool in various fields,
including artificial intelligence, law, and public policy. It is an emerging
research field in natural language processing (NLP) that attracts increasing
attention. Research on computational argumentation mainly involves two types of
tasks: argument mining and argument generation. As large language models (LLMs)
have demonstrated strong abilities in understanding context and generating
natural language, it is worthwhile to evaluate the performance of LLMs on
various computational argumentation tasks. This work aims to embark on an
assessment of LLMs, such as ChatGPT, Flan models and LLaMA2 models, under
zero-shot and few-shot settings within the realm of computational
argumentation. We organize existing tasks into 6 main classes and standardise
the format of 14 open-sourced datasets. In addition, we present a new benchmark
dataset on counter speech generation that aims to holistically evaluate the
end-to-end performance of LLMs on argument mining and argument generation.
Extensive experiments show that LLMs exhibit commendable performance across
most of these datasets, demonstrating their capabilities in the field of
argumentation. We also highlight the limitations in evaluating computational
argumentation and provide suggestions for future research directions in this
field.
Argumentation Mining in User-Generated Web Discourse
The goal of argumentation mining, an evolving research field in computational
linguistics, is to design methods capable of analyzing people's argumentation.
In this article, we go beyond the state of the art in several ways. (i) We deal
with actual Web data and take up the challenges given by the variety of
registers, multiple domains, and unrestricted noisy user-generated Web
discourse. (ii) We bridge the gap between normative argumentation theories and
argumentation phenomena encountered in actual data by adapting an argumentation
model tested in an extensive annotation study. (iii) We create a new gold
standard corpus (90k tokens in 340 documents) and experiment with several
machine learning methods to identify argument components. We offer the data,
source codes, and annotation guidelines to the community under free licenses.
Our findings show that argumentation mining in user-generated Web discourse is
a feasible but challenging task.
Comment: Cite as: Habernal, I. & Gurevych, I. (2017). Argumentation Mining in
User-Generated Web Discourse. Computational Linguistics 43(1), pp. 125-17
Analyzing Implicit Reasoning in Argumentation: An Overnight Approach
Tohoku University doctoral (Information Sciences) thesis
Argument mining: A machine learning perspective
Argument mining has recently become a hot topic, attracting interest from several diverse research communities, ranging from artificial intelligence to computational linguistics, natural language processing, and the social and philosophical sciences. In this paper, we attempt to describe the problems and challenges of argument mining from a machine learning angle. In particular, we argue that machine learning techniques have so far been under-exploited, and that a more thorough standardization of the problem, also with regard to the underlying argument model, could provide a crucial element for developing better systems.
Credibility analysis of textual claims with explainable evidence
Despite being a vast resource of valuable information, the Web has been polluted by the spread of false claims. Increasing hoaxes, fake news, and misleading information on the Web have given rise to many fact-checking websites that manually assess these doubtful claims. However, the rapid speed and large scale of misinformation spread have become the bottleneck for manual verification. This calls for credibility assessment tools that can automate this verification process. Prior works in this domain make strong assumptions about the structure of the claims and the communities where they are made. Most importantly, black-box techniques proposed in prior works lack the ability to explain why a certain statement is deemed credible or not. To address these limitations, this dissertation proposes a general framework for automated credibility assessment that does not make any assumption about the structure or origin of the claims. Specifically, we propose a feature-based model, which automatically retrieves relevant articles about the given claim and assesses its credibility by capturing the mutual interaction between the language style of the relevant articles, their stance towards the claim, and the trustworthiness of the underlying web sources. We further enhance our credibility assessment approach and propose a neural-network-based model. Unlike the feature-based model, this model does not rely on feature engineering and external lexicons. Both our models make their assessments interpretable by extracting explainable evidence from judiciously selected web sources.
We utilize our models and develop a Web interface, CredEye, which enables users to automatically assess the credibility of a textual claim and inspect the assessment by browsing through judiciously and automatically selected evidence snippets. In addition, we study the problem of stance classification and propose a neural-network-based model for predicting the stance of diverse user perspectives regarding controversial claims. Given a controversial claim and a user comment, our stance classification model predicts whether the user comment supports or opposes the claim.
Corpus Wide Argument Mining -- a Working Solution
One of the main tasks in argument mining is the retrieval of argumentative
content pertaining to a given topic. Most previous work addressed this task by
retrieving a relatively small number of relevant documents as the initial
source for such content. This line of research yielded moderate success, which
is of limited use in a real-world system. Furthermore, for such a system to
yield a comprehensive set of relevant arguments, over a wide range of topics,
it requires leveraging a large and diverse corpus in an appropriate manner.
Here we present a first end-to-end, high-precision, corpus-wide argument mining
system. This is made possible by combining sentence-level queries over an
appropriate indexing of a very large corpus of newspaper articles, with an
iterative annotation scheme. This scheme addresses the inherent label bias in
the data and pinpoints the regions of the sample space whose manual labeling is
required to obtain high precision among top-ranked candidates.
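The sentence-level queries over an indexed corpus that the abstract describes can be illustrated with a toy inverted index. This is not the authors' system; the corpus, tokenization, and AND-query semantics below are illustrative assumptions about what such a retrieval layer looks like.

```python
# Illustrative sketch: sentence-level retrieval over an inverted index,
# the kind of query the abstract runs against a large newspaper corpus.

corpus = [
    "School uniforms should be mandatory because they reduce peer pressure.",
    "The city council met on Tuesday to discuss the new budget.",
    "Uniforms are costly, which burdens low-income families.",
]

def build_index(sentences):
    """Map each token to the set of sentence ids containing it."""
    index = {}
    for sid, sent in enumerate(sentences):
        for tok in set(sent.lower().rstrip(".").replace(",", "").split()):
            index.setdefault(tok, set()).add(sid)
    return index

def query(index, terms):
    """Return ids of sentences containing all query terms (AND semantics)."""
    sets = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*sets)) if sets else []

idx = build_index(corpus)
hits = query(idx, ["uniforms", "because"])   # [0]
```

In a real system the query would combine a topic term with argumentative markers, and the retrieved candidates would then be ranked and filtered by the iteratively trained classifier to keep precision high.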