7 research outputs found

    Argument harvesting using chatbots

    Much research in computational argumentation assumes that arguments can be obtained in some way. Yet, to improve and apply models of argument, we need methods for acquiring them. Current approaches include argument mining from text, hand coding of arguments by researchers, or generating arguments from knowledge bases. In this paper, we propose a new approach, which we call argument harvesting, that uses a chatbot to enter into a dialogue with a participant in order to obtain arguments and counterarguments from him or her. Because it is automated, the chatbot can be used repeatedly in many dialogues, and thereby it can generate a large corpus. We describe the architecture of the chatbot, provide methods for clustering arguments by their similarity and value, and present an evaluation of our approach in a case study concerning the attitudes of women to participation in sport.
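
    The clustering step mentioned above can be pictured with a short sketch. The abstract does not say which similarity measure or clustering algorithm the paper uses, so the snippet below assumes TF-IDF vectors and k-means from scikit-learn purely for illustration, with invented example arguments.

    # Illustrative sketch only: cluster harvested arguments by textual similarity.
    # TF-IDF + k-means are assumptions; the paper's actual similarity and value
    # measures are not specified in this abstract.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    arguments = [
        "Sport keeps you fit and healthy.",
        "Regular exercise improves mental wellbeing.",
        "I have no time for sport after work.",
        "Gym memberships are too expensive.",
    ]

    vectors = TfidfVectorizer().fit_transform(arguments)   # one row per argument
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

    for label, text in zip(kmeans.labels_, arguments):
        print(label, text)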

    Impact of Argument Type and Concerns in Argumentation with a Chatbot

    Conversational agents, also known as chatbots, are versatile tools that have the potential to be used in dialogical argumentation. They could be deployed in tasks such as persuasion for behaviour change (e.g. persuading people to eat more fruit, to take regular exercise, etc.). However, to achieve this, there is a need to develop methods for acquiring appropriate arguments and counterarguments that reflect both sides of the discussion. For instance, to persuade someone to do regular exercise, the chatbot needs to know the counterarguments that the user might have for not doing exercise. To address this need, we present methods for acquiring arguments and counterarguments and, importantly, meta-level information that can be useful for deciding when arguments can be used during an argumentation dialogue. We evaluate these methods in studies with participants and show how harnessing these methods in a chatbot can make it more persuasive.

    Computational Persuasion using Chatbots based on Crowdsourced Argument Graphs & Concerns

    As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. Conversational agents, also known as chatbots, are versatile tools that have the potential to be used as agents in dialogical argumentation systems where the chatbot acts as the persuader and the human agent as the persuadee, thereby offering a cost-effective and scalable alternative to in-person consultations. To allow the user to type his or her argument as free-text input (as opposed to selecting arguments from a menu), the chatbot needs to be able to (1) “understand” the concern that the user is raising in his or her argument and (2) give an appropriate counterargument that addresses that concern. In this thesis I describe how to (1) acquire arguments for the construction of the chatbot’s knowledge base with the help of crowdsourcing, (2) automatically identify the concerns that arguments address, and (3) construct the chatbot’s knowledge base in the form of an argument graph that can be used during persuasive dialogues with users. I evaluated my methods in four case studies that covered several domains (physical activity, meat consumption, UK university fees and COVID-19 vaccination). In each case study I implemented a chatbot that engaged in argumentative dialogues with participants and measured the participants’ change of stance before and after engaging in a chat with the bot. In all four case studies the chatbot showed statistically significant success in persuading people either to consider changing their behaviour or to change their stance.
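
    A minimal sketch of the kind of knowledge base described above follows: a mapping from a user argument to the counterarguments that attack it, with each counterargument tagged by the concern it addresses. All argument names, texts and concern labels here are invented for illustration; the thesis's crowdsourced graphs and concern-identification method are not reproduced.

    # Minimal sketch (argument names, texts and concern labels are invented):
    # each user argument maps to the counterarguments that attack it, and each
    # counterargument is tagged with the concern it addresses.

    attacks = {
        "no_time":    ["short_workouts"],
        "too_tiring": ["energy_boost"],
    }

    concern_of = {
        "short_workouts": "time",
        "energy_boost":   "health",
    }

    texts = {
        "short_workouts": "Even ten-minute workouts give most of the benefit.",
        "energy_boost":   "Regular exercise actually raises your energy levels.",
    }

    def reply(user_argument: str, user_concern: str) -> str:
        """Pick a counterargument that attacks the user's argument and,
        if possible, addresses the concern identified in their free text."""
        candidates = attacks.get(user_argument, [])
        for c in candidates:
            if concern_of.get(c) == user_concern:
                return texts[c]
        return texts[candidates[0]] if candidates else "Tell me more."

    print(reply("no_time", "time"))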

    Strategic argumentation dialogues for persuasion: Framework and experiments based on modelling the beliefs and concerns of the persuadee

    Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is 'good' in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on Monte Carlo Tree Search, which allows optimization in real time. We provide empirical results of a study with human participants that compares an automated persuasion system based on this technology with a baseline system that does not take the beliefs and concerns into account in its strategy.

    Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based on Modelling the Beliefs and Concerns of the Persuadee

    Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on Monte Carlo Tree Search, which allows optimization in real time. We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy. Comment: the Data Appendix, containing the arguments, argument graphs, assignment of concerns to arguments, preferences over concerns, and assignment of beliefs to arguments, is available at the link http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/unistudydata.zip and the code is available at https://github.com/ComputationalPersuasion/MCC
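
    The move-selection idea underlying both versions of this work can be illustrated with a heavily simplified sketch. The papers use full Monte Carlo Tree Search over a model of the persuadee's beliefs and concerns; below, a flat Monte Carlo estimate with invented belief values and concern weights stands in, purely to show how belief and concern information can feed the choice of the chatbot's next argument.

    # Heavily simplified sketch of the move-selection idea. The papers use full
    # Monte Carlo Tree Search over a persuadee model of beliefs and concerns;
    # here a flat Monte Carlo estimate stands in, with made-up numbers.
    import random

    # Candidate arguments, with an assumed persuadee belief (0..1) and the
    # concern each one addresses (all values invented for illustration).
    candidates = {
        "health_benefits": {"belief": 0.7, "concern": "health"},
        "saves_money":     {"belief": 0.4, "concern": "cost"},
        "social_aspect":   {"belief": 0.6, "concern": "enjoyment"},
    }
    persuadee_concern_weight = {"health": 0.8, "cost": 0.3, "enjoyment": 0.5}

    def simulate(move: str) -> float:
        """One noisy rollout: payoff grows with belief in the argument and with
        how much the persuadee cares about the concern it addresses."""
        arg = candidates[move]
        weight = persuadee_concern_weight[arg["concern"]]
        return arg["belief"] * weight + random.gauss(0, 0.05)

    def best_move(n_rollouts: int = 1000) -> str:
        scores = {m: sum(simulate(m) for _ in range(n_rollouts)) / n_rollouts
                  for m in candidates}
        return max(scores, key=scores.get)

    print(best_move())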

    Delegated updates in epistemic graphs for opponent modelling

    In an epistemic graph, belief in arguments is represented by probability distributions. Furthermore, the influence that belief in arguments can have on the belief in other arguments is represented by constraints on the probability distributions. Different agents may choose different constraints to describe their reasoning, thus making epistemic graphs extremely flexible tools. A key application for epistemic graphs is modelling participants in persuasion dialogues, with the aim of modelling the change in beliefs as each move in the dialogue is made. This requires mechanisms for updating the model throughout the dialogue. In this paper, we introduce the class of delegated update methods, which harness existing, simpler update methods in order to produce more realistic outputs. In particular, we focus on hypothesized updates, which capture an agent's reluctance or susceptibility to belief updates caused by factors such as time of day, fatigue, dialogue length, and more. We provide a comprehensive range of options for modelling different kinds of agents and we explore a range of properties for categorising these options.
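
    The idea of wrapping a simpler base update can be illustrated with a toy sketch. Epistemic graphs proper work with constrained probability distributions over arguments; here a single point probability per argument and a single "reluctance" damping coefficient are invented simplifications, used only to show how a delegated update can temper the output of an existing update method.

    # Toy illustration of a delegated / hypothesized update. The damping factor
    # standing in for "reluctance" is an invented simplification; epistemic
    # graphs proper work with constrained probability distributions, not single
    # point values per argument.

    def base_update(current: float, observed: float) -> float:
        """Simple base method: move belief all the way to the observed value."""
        return observed

    def hypothesized_update(current: float, observed: float,
                            reluctance: float) -> float:
        """Delegate to the base method, then keep only part of the change,
        depending on how reluctant the agent is (0 = fully open, 1 = immovable)."""
        target = base_update(current, observed)
        return current + (1.0 - reluctance) * (target - current)

    belief_in_argument = 0.3   # prior belief in "exercise is worth the effort"
    after_dialogue_move = 0.9  # what the base update would suggest

    for reluctance in (0.0, 0.5, 0.9):
        print(reluctance, round(hypothesized_update(belief_in_argument,
                                                    after_dialogue_move,
                                                    reluctance), 3))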