A demonstration of multi-party dialogue using virtual coaches: The first Council of Coaches demonstrator
We demonstrate here the first Technical Demonstrator for Council of Coaches, a project that aims to deploy a platform for virtual health coaching and will incorporate computational models of argument and dialogue
Computational Models of Argument Structure and Argument Quality for Understanding Misinformation
With the continuing spread of misinformation and disinformation online, it is of increasing importance to develop combating mechanisms at scale in the form of automated systems that can find checkworthy information, detect fallacious argumentation in online content, retrieve relevant evidence from authoritative sources, and analyze the veracity of claims given the retrieved evidence. The robustness and applicability of these systems depend on the availability of annotated resources to train machine learning models in a supervised fashion, as well as machine learning models that capture patterns beyond domain-specific lexical clues or genre-specific stylistic insights. In this thesis, we investigate the role of models for argument structure and argument quality in improving tasks relevant to fact-checking and furthering our understanding of misinformation and disinformation. We contribute to argumentation mining, misinformation detection, and fact-checking by releasing multiple annotated datasets, developing unified models across datasets and task formulations, and analyzing the vulnerabilities of such models in adversarial settings.
We start by studying the role of argument structure in two downstream tasks related to fact-checking. As it is essential to differentiate factual knowledge from opinionated text, we develop a model for detecting the type of news article (factual or opinionated) using highly transferable argumentation-based features. We also show the potential of argumentation features to predict the checkworthiness of information in news articles, and we provide the first multi-layer annotated corpus for argumentation and fact-checking.
We then study qualitative aspects of arguments through models for fallacy recognition. To understand the reasoning behind checkworthiness and the relation of argumentative fallacies to fake content, we develop an annotation scheme for fallacies in fact-checked content and investigate avenues for automating the detection of such fallacies, considering both single- and multi-dataset training. Using instruction-based prompting, we introduce a unified model for recognizing twenty-eight fallacies across five fallacy datasets. We also use this model to explain the checkworthiness of statements in two domains.
Next, we present our models for end-to-end fact-checking of statements, which include finding the relevant evidence document and sentence from a collection of documents and then predicting the veracity of the given statements using the retrieved evidence. We also analyze the robustness of end-to-end fact extraction and verification by generating adversarial statements and identifying areas for improvement for models under adversarial attacks. Finally, we show that evidence-based verification is essential for fine-grained claim verification by modeling the human-provided justifications alongside the gold veracity labels
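The end-to-end pipeline shape described in this abstract (retrieve the most relevant evidence sentence, then predict a veracity label from the claim and that evidence) can be sketched as follows. This is a minimal illustration only: the token-overlap retrieval and the threshold-based verdict are stand-ins for the trained retrieval and verification models the thesis actually develops.

```python
# Hypothetical sketch of a retrieve-then-verify fact-checking pipeline.
# Both stages below are placeholders: real systems use learned retrievers
# and classifiers, not token overlap and a fixed threshold.

def retrieve_evidence(claim: str, sentences: list) -> str:
    """Pick the candidate sentence sharing the most tokens with the claim."""
    claim_tokens = set(claim.lower().split())
    return max(sentences,
               key=lambda s: len(claim_tokens & set(s.lower().split())))

def verify(claim: str, evidence: str) -> str:
    """Placeholder verdict: count shared tokens between claim and evidence.
    A real verifier would be a trained (claim, evidence) classifier."""
    overlap = set(claim.lower().split()) & set(evidence.lower().split())
    return "SUPPORTED" if len(overlap) >= 3 else "NOT ENOUGH INFO"

claim = "the eiffel tower is in paris"
corpus = [
    "bananas are yellow",
    "the eiffel tower is located in paris france",
]
evidence = retrieve_evidence(claim, corpus)
verdict = verify(claim, evidence)
```

The two-stage decomposition mirrors the abstract's description: evidence selection errors propagate into verification, which is why the thesis evaluates the pipeline end to end and probes it with adversarial statements.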
A Multi-Aspect Evaluation Framework for Comments on the Social Web
Users' reviews, comments and votes on the Social Web form the modern version of word-of-mouth communication, which has a huge impact on people's habits and businesses. Nonetheless, there have been only a few attempts to formally model and analyze them using computational models of argument, and these represent a first significant step in bringing the two fields closer. In this paper, we take their integration further by formalizing standard features of the Social Web, such as commentary and social voting, and by proposing methods for evaluating the quality and acceptance of comments
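One way such an evaluation could combine social voting with argumentative structure is sketched below. The particular aggregation (a Laplace-smoothed vote score, discounted by the strongest attacking reply, in the style of gradual argumentation semantics) is an invented illustration, not the paper's actual framework.

```python
# Hypothetical sketch: scoring a comment's acceptance from votes and replies.
# The formulas are illustrative assumptions, not the paper's model.

def vote_score(upvotes: int, downvotes: int) -> float:
    """Base quality score in (0, 1) from social voting, Laplace-smoothed
    so that comments with few votes stay near 0.5."""
    return (upvotes + 1) / (upvotes + downvotes + 2)

def acceptance(base: float, attacker_scores: list) -> float:
    """Discount a comment's base score by its strongest attacking reply,
    a common pattern in gradual argumentation semantics."""
    if not attacker_scores:
        return base
    return base * (1 - max(attacker_scores))

base = vote_score(8, 2)              # 0.75
final = acceptance(base, [0.5, 0.2]) # strongest attacker halves the score
```

Separating the vote-derived score from the reply-structure discount reflects the paper's theme: commentary and social voting are distinct Social Web features, and both should feed the evaluation.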
Argument Strength in Probabilistic Argumentation Using Confirmation Theory
It is common for people to remark that a particular argument is a strong (or weak) argument. Having a handle on the relative strengths of arguments can help in deciding which arguments to consider, and which to present to others in a discussion. In computational models of argument, there is a need for a deeper understanding of argument strength. Our approach in this paper is to draw on confirmation theory for quantifying argument strength, and to harness this in a framework based on probabilistic argumentation. We show how we can calculate strength based on the structure of the argument involving defeasible rules. The insights appear transferable to a variety of other structured argumentation systems
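A confirmation-theoretic notion of strength can be sketched in a few lines. The measure used here is the standard "difference" measure d(h, e) = P(h|e) - P(h); it is one common choice from confirmation theory and is not necessarily the measure this paper adopts, nor does this sketch include the paper's defeasible-rule machinery.

```python
# Hypothetical sketch: argument strength via a confirmation measure.
# Assumes the "difference" measure d(h, e) = P(h|e) - P(h); the paper's
# chosen measure and probabilistic framework may differ.

def difference_confirmation(p_h_given_e: float, p_h: float) -> float:
    """d(h, e) = P(h|e) - P(h). Positive: e confirms h; negative: e
    disconfirms h; zero: e is irrelevant to h."""
    return p_h_given_e - p_h

def argument_strength(p_claim_given_premises: float, p_claim: float) -> float:
    """Treat an argument's strength as the degree to which its premises
    confirm its claim, via the difference measure."""
    return difference_confirmation(p_claim_given_premises, p_claim)

# Premises that sharply raise the claim's probability yield a strong argument;
# premises that barely move it yield a weak one.
strong = argument_strength(0.9, 0.5)
weak = argument_strength(0.55, 0.5)
```

The appeal of confirmation measures here is that strength becomes a function of how much the premises shift belief in the claim, rather than of the claim's posterior probability alone.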
Online Handbook of Argumentation for AI: Volume 4
This volume contains revised versions of the papers selected for the fourth volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction have been proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open-access, curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub to keep track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI
Strategic Sequences of Arguments for Persuasion Using Decision Trees
Persuasion is an activity that involves one party (the persuader) trying to induce another party (the persuadee) to believe or do something. For this, it can be advantageous for the persuader to have a model of the persuadee. Recently, some proposals in the field of computational models of argument have been made for probabilistic models of what the persuadee knows about, or believes. However, these developments have not systematically harnessed established notions in decision theory for maximizing the outcome of a dialogue. To address this, we present a general framework for representing persuasion dialogues as decision trees, and for using decision rules to select moves. Furthermore, we provide empirical results showing how some well-known decision rules perform, and we make observations about their general behaviour in the context of dialogues where there is uncertainty about the accuracy of the user model
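The core idea of selecting a persuader's move with a decision rule can be sketched as follows. The two rules shown (expected value and maximin) are well-known decision-theoretic rules of the kind the abstract refers to, but the tree structure, moves, and probabilities below are invented for illustration and are not the paper's framework.

```python
# Hypothetical sketch: choosing a persuasion move with a decision rule.
# The candidate moves and (probability, utility) outcomes are invented;
# probabilities would come from an uncertain user model in practice.

from dataclasses import dataclass, field

@dataclass
class Move:
    """A persuader move with possible persuadee reactions."""
    name: str
    outcomes: list = field(default_factory=list)  # (probability, utility) pairs

def expected_value(move: Move) -> float:
    """Expected-value rule: weight each outcome's utility by the user
    model's probability of that reaction."""
    return sum(p * u for p, u in move.outcomes)

def maximin(move: Move) -> float:
    """Pessimistic rule: judge a move by its worst-case utility, useful
    when the user model's probabilities may be inaccurate."""
    return min(u for _, u in move.outcomes)

def best_move(moves, rule):
    return max(moves, key=rule).name

candidates = [
    Move("cite_health_benefits", [(0.7, 1.0), (0.3, -0.5)]),  # risky, high reward
    Move("appeal_to_cost",       [(0.5, 0.8), (0.5, 0.2)]),   # safe, modest reward
]
```

On this example the two rules disagree: expected value prefers the risky move (0.55 vs 0.50), while maximin prefers the safe one, which is exactly the kind of behavioural difference that matters when the user model may be inaccurate.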
Toward Artificial Argumentation
The field of computational models of argument is emerging as an important aspect of artificial intelligence research. The reason for this is the recognition that if we are to develop robust intelligent systems, then it is imperative that they can handle incomplete and inconsistent information in a way that somehow emulates the way humans tackle such a complex task. And one of the key ways that humans do this is to use argumentation: either internally, by evaluating arguments and counterarguments, or externally, for instance by entering into a discussion or debate where arguments are exchanged. As we report in this review, recent developments in the field are leading to technology for artificial argumentation in the legal, medical, and e-government domains, and interesting tools for argument mining, for debating technologies, and for argumentation solvers are emerging