4 research outputs found

    Stratified Labelings for Abstract Argumentation

    We introduce stratified labelings as a novel semantic approach to abstract argumentation frameworks. Compared to standard labelings, stratified labelings provide a more fine-grained assessment of the controversiality of arguments, using ranks instead of the usual labels in, out, and undecided. We relate the framework of stratified labelings to conditional logic and, in particular, to the System Z ranking functions.
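    The rank-based idea can be illustrated with a toy sketch. This is not the paper's formal definition of stratified labelings, only an assumed simplification: unattacked arguments get rank 0, and an argument's rank is one more than the highest rank among its attackers, giving a finer-grained picture than plain in/out/undecided labels.

    ```python
    # Toy illustration (NOT the paper's formal semantics): rank arguments
    # by propagating from unattacked arguments through the attack relation.

    def rank_arguments(arguments, attacks):
        """arguments: iterable of names; attacks: set of (attacker, target) pairs."""
        attackers = {a: {x for x, y in attacks if y == a} for a in arguments}
        ranks = {a: 0 for a in arguments if not attackers[a]}  # unattacked: rank 0
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if a in ranks:
                    continue
                if attackers[a] <= ranks.keys():  # all attackers already ranked
                    ranks[a] = 1 + max(ranks[b] for b in attackers[a])
                    changed = True
        return ranks  # arguments on attack cycles stay unranked in this toy version

    print(rank_arguments(["a", "b", "c"], {("a", "b"), ("b", "c")}))
    # → {'a': 0, 'b': 1, 'c': 2}
    ```

    In the chain a → b → c, the ranks 0, 1, 2 record *how far* each argument is from an undisputed one, which is the kind of graded controversiality assessment the abstract describes.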

    Developments in abstract and assumption-based argumentation and their application in logic programming

    Logic Programming (LP) and Argumentation are two paradigms for knowledge representation and reasoning under incomplete information. Even though the two paradigms share common features, they constitute mostly separate areas of research. In this thesis, we present novel developments in Argumentation, in particular in Assumption-Based Argumentation (ABA) and Abstract Argumentation (AA), and show how they can 1) extend the understanding of the relationship between the two paradigms and 2) provide solutions to problematic reasoning outcomes in LP. More precisely, we introduce assumption labellings as a novel way to express the semantics of ABA and prove a more straightforward relationship with LP semantics than found in previous work. Building upon these correspondence results, we apply methods for argument construction and conflict detection from ABA, and for conflict resolution from AA, to construct justifications of unexpected or unexplained LP solutions under the answer set semantics. We furthermore characterise reasons for the non-existence of stable semantics in AA and apply these findings to characterise different scenarios in which the computation of meaningful solutions in LP under the answer set semantics fails.
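    The non-existence of stable semantics mentioned above can be demonstrated with a small brute-force sketch. This is an illustrative check, not the thesis's actual characterisation: a stable extension is a conflict-free set of arguments that attacks every argument outside it, and an odd attack cycle admits no such set.

    ```python
    # Illustrative brute-force enumeration (not the thesis's method): a stable
    # extension is a conflict-free set attacking every argument outside it.
    from itertools import combinations

    def stable_extensions(args, attacks):
        exts = []
        for r in range(len(args) + 1):
            for s in map(set, combinations(args, r)):
                conflict_free = not any((x, y) in attacks for x in s for y in s)
                attacks_rest = all(any((x, o) in attacks for x in s)
                                   for o in set(args) - s)
                if conflict_free and attacks_rest:
                    exts.append(s)
        return exts

    # Odd attack cycle a -> b -> c -> a: no stable extension exists.
    print(stable_extensions(["a", "b", "c"], {("a", "b"), ("b", "c"), ("c", "a")}))
    # → []
    ```

    The empty result for the three-cycle mirrors the kind of scenario in which answer set computation in LP fails to return a solution, which is the correspondence the thesis exploits.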

    Representation and learning schemes for argument stance mining.

    Argumentation is a key part of human interaction. Used introspectively, it searches for the truth by laying down arguments for and against positions. As a mediation tool, it can be used to search for compromise between multiple human agents. For this purpose, theories of argumentation have been in development since the Ancient Greeks, in order to formalise the process and thereby remove human imprecision from it. From this practice the process of argument mining has emerged. As human interaction has moved from the small scale of one-to-one (or few-to-few) debates to large-scale discussions where tens of thousands of participants can express their opinion in real time, the importance of argument mining has grown while its feasibility in a manual annotation setting has diminished, relying mainly on human-defined heuristics to process the data. This underlines the importance of a new generation of computational tools that can automate this process on a larger scale. In this thesis we study argument stance detection, one of the steps involved in the argument mining workflow. We demonstrate how we can use data of varying reliability in order to mine argument stance in social media data. We investigate a spectrum of techniques: completely unsupervised classification of stance using a sentiment lexicon, automated computation of a regularised stance lexicon, automated computation of a lexicon with modifiers, and the use of a lexicon with modifiers as a temporal feature model for more complex classification algorithms. We find that the addition of contextual information enhances unsupervised stance classification, within reason, and that multi-strategy algorithms that combine multiple heuristics by ordering them from the precise to the general tend to outperform other approaches by a large margin.
    Focusing then on building a stance lexicon, we find that optimising such lexicons using an empirical risk minimisation framework allows us to regularise them to a higher degree than competing probabilistic techniques, which helps us learn better lexicons from noisy data. We also conclude that adding local context (neighbouring words) information during the learning phase of the lexicons tends to produce more accurate results at the cost of robustness, since part of the weight is redistributed from the words with a class valence to the contextual words. Finally, when investigating the use of lexicons to build feature models for traditional machine learning techniques, simple lexicons (without context) seem to perform overall as well as more complex ones, and better than purely semantic representations. We also find that word-level feature models tend to outperform sentence- and instance-level representations, but that they do not benefit as much from being augmented by lexicon knowledge. This research programme was carried out in collaboration with the University of Glasgow, Department of Computer Science.
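    The simplest technique in the spectrum above, unsupervised lexicon-based stance scoring with a negation modifier, can be sketched as follows. The lexicon entries and the one-word negation window are illustrative assumptions for this sketch, not the thesis's actual lexicon or its learned weights.

    ```python
    # Hedged sketch of unsupervised lexicon-based stance scoring: sum word
    # polarities from a small sentiment lexicon, with a negator flipping the
    # sign of the next polar word. Lexicon values here are made up for
    # illustration.

    LEXICON = {"good": 1.0, "great": 1.5, "bad": -1.0, "awful": -1.5}
    NEGATORS = {"not", "never", "no"}

    def stance_score(tokens):
        score, flip = 0.0, 1.0
        for tok in tokens:
            if tok in NEGATORS:
                flip = -1.0              # modifier: negate the next polar word
            elif tok in LEXICON:
                score += flip * LEXICON[tok]
                flip = 1.0
        return score                     # >0: favour, <0: against, 0: unknown

    print(stance_score("this is not a good idea".split()))
    # → -1.0  ("not" flips the polarity of "good")
    ```

    Extending the flip into learned per-word modifier weights, and regularising those weights via empirical risk minimisation, points toward the more elaborate lexicon-learning schemes the abstract summarises.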