    Formalizing value-guided argumentation for ethical systems design

    The persuasiveness of an argument depends on the values promoted and demoted by the position defended. This idea, inspired by Perelman’s work on argumentation, has become a prominent theme in artificial intelligence research on argumentation since the work by Hafner and Berman on teleological reasoning in the law, and was further developed by Bench-Capon in his value-based argumentation frameworks. One theme in the study of value-guided argumentation is the comparison of values. Formal models involving value comparison typically use either qualitative or quantitative primitives. In this paper, techniques connecting qualitative and quantitative primitives recently developed for evidential argumentation are applied to value-guided argumentation. By developing the theoretical understanding of intelligent systems guided by embedded values, the paper is a step towards ethical systems design, much needed in these days of ever more pervasive AI techniques.
    Keywords: Argumentation; Ethical systems; Teleological reasoning; Value
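
    As a rough, illustrative sketch of the value-based argumentation idea described in this abstract (not the paper's own formalism): in Bench-Capon-style VAFs an attack only succeeds as a defeat for a given audience if the attacked argument does not promote a value that audience strictly prefers to the attacker's value. The arguments, values, and audience ordering below are invented for illustration.

        # Toy value-based argumentation framework (VAF); all data is hypothetical.
        value_of = {"A": "life", "B": "property", "C": "liberty"}
        attacks = {("A", "B"), ("B", "C"), ("C", "A")}   # (attacker, target)
        audience = ["life", "liberty", "property"]       # earlier = more important

        def prefers(v1, v2):
            # True if the audience ranks value v1 strictly above value v2.
            return audience.index(v1) < audience.index(v2)

        # An attack succeeds as a defeat unless the attacked argument promotes
        # a value the audience prefers to the attacker's value.
        defeats = {(a, b) for (a, b) in attacks
                   if not prefers(value_of[b], value_of[a])}

        def grounded(arguments, defeats):
            # Grounded extension of the induced framework: iterate "defend"
            # from the empty set until a fixpoint is reached.
            accepted = set()
            while True:
                defended = {a for a in arguments
                            if all(any((d, attacker) in defeats for d in accepted)
                                   for (attacker, target) in defeats if target == a)}
                if defended == accepted:
                    return accepted
                accepted = defended

        print(grounded(set(value_of), defeats))  # -> {'A', 'C'} for this audience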

    Values in legal argumentation: specifics of their use

    The article analyses the category of "values" in legal discourse. Through the lens of the theory of legal argumentation, it considers whether a judge may appeal to societal values when making decisions, and examines how value-based arguments are constructed and used.

    Towards a framework for computational persuasion with applications in behaviour change

    Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing is heavily dependent on persuasion. But many other activities involve persuasion, such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers not to text while driving, or an online safety expert persuading users of social media sites not to reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies for APSs. A promising application area for computational persuasion is behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing the behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
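
    The following is a hypothetical skeleton of the dialogue loop an automated persuasion system might run, included only to make the description above concrete; the move-selection rule, user model, and stopping condition are assumptions, not the framework proposed in the paper.

        import random

        # Hypothetical modelled belief the persuadee has in each candidate argument.
        user_model = {"drinking less improves sleep": 0.8,
                      "drinking less saves money": 0.6,
                      "peers drink less than you think": 0.4}

        def select_move(remaining):
            # Pick the argument the modelled persuadee is most likely to believe.
            return max(remaining, key=lambda a: user_model[a])

        def aps_dialogue(max_turns=3):
            posed = []
            for _ in range(max_turns):
                remaining = [a for a in user_model if a not in posed]
                if not remaining:
                    break
                move = select_move(remaining)
                posed.append(move)
                # Stand-in for the persuadee's reply: accept with probability
                # equal to the modelled belief in the argument just posed.
                if random.random() < user_model[move]:
                    return True, posed
            return False, posed

        persuaded, transcript = aps_dialogue()
        print(persuaded, transcript)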

    Confronting value-based argumentation frameworks with people’s assessment of argument strength

    We report a series of experiments carried out to confront the underlying intuitions of value-based argumentation frameworks (VAFs) with the intuitions of ordinary people. Our goal was twofold. On the one hand, we intended to test VAF as a descriptive theory of human argument evaluations. On the other, we aimed to gain new insights from empirical data that could serve to improve VAF as a normative model. The experiments showed that people's acceptance of arguments deviates from VAF's semantics and is instead correlated with the importance given to the promoted values, independently of the perceptions of argument interactions through attacks and defeats. Furthermore, arguments were often perceived as promoting more than one value with different relative strengths. Individuals' analyses of scenarios were also affected by external factors such as biases and arguments not explicit in the framework. Finally, we confirmed that objective acceptance, that is, the acceptance of arguments under any order of the values, was not a frequent behavior. Instead, participants tended to accept only the arguments that promoted the values they subscribe to.
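
    To make the notion of objective acceptance mentioned above concrete, here is a toy sketch that checks acceptance under every possible ordering of the values; for brevity it uses a crude acceptance test (an argument counts as accepted if no attack on it succeeds as a defeat) rather than a full argumentation semantics, and all arguments, values, and attacks are invented.

        from itertools import permutations

        value_of = {"A": "life", "B": "property", "C": "liberty"}
        attacks = {("A", "B"), ("B", "C"), ("C", "A")}   # (attacker, target)

        def accepted_under(ordering):
            rank = {v: i for i, v in enumerate(ordering)}
            # An attack defeats its target unless the target promotes a
            # strictly more important value than the attacker.
            defeats = {(a, b) for (a, b) in attacks
                       if rank[value_of[b]] >= rank[value_of[a]]}
            return {a for a in value_of
                    if not any(target == a for (_, target) in defeats)}

        objectively_accepted = set(value_of)
        for ordering in permutations(set(value_of.values())):
            objectively_accepted &= accepted_under(ordering)

        # Arguments accepted under every value ordering (often empty, which
        # echoes the paper's observation that objective acceptance is rare).
        print(objectively_accepted)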

    Building Jiminy Cricket: An Architecture for Moral Agreements Among Stakeholders

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end-users. We address the challenge of how the moral values and views of all stakeholders can be integrated and reflected in the moral behaviour of the autonomous system. We propose an artificial moral agent architecture that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. We show how our architecture can be used not only for ethical practical reasoning and collaborative decision-making, but also for the explanation of such moral behaviour.

    Modelling Value-Oriented Legal Reasoning in LogiKEy

    The logico-pluralist LogiKEy knowledge engineering methodology and framework is applied to the modelling of a theory of legal balancing, in which legal knowledge (cases and laws) is encoded by utilising context-dependent value preferences. The theory obtained is then used to formalise, automatically evaluate, and reconstruct illustrative property law cases (involving the appropriation of wild animals) within the Isabelle/HOL proof assistant system, illustrating how LogiKEy can harness interactive and automated theorem-proving technology to provide a testbed for the development and formal verification of legal domain-specific languages and theories. Modelling value-oriented legal reasoning in that framework, we establish novel bridges between the latest research in knowledge representation and reasoning in non-classical logics, automated theorem proving, and applications in legal reasoning.
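
    As a loose illustration of deciding a case from context-dependent value preferences, here is a small sketch in the spirit of the description above; it is not the paper's LogiKEy/Isabelle/HOL encoding, and the parties, values, and contexts are invented for a wild-animal appropriation scenario.

        # Values promoted by finding for each party (hypothetical labels).
        values_for = {
            "plaintiff": {"certainty_of_possession"},
            "defendant": {"economic_efficiency", "reward_of_labour"},
        }

        # Context-dependent value preference: in each context, earlier values
        # outrank later ones.
        preference_by_context = {
            "stability_context": ["certainty_of_possession", "reward_of_labour",
                                  "economic_efficiency"],
            "utility_context":   ["economic_efficiency", "reward_of_labour",
                                  "certainty_of_possession"],
        }

        def decide(context):
            ranking = preference_by_context[context]

            def best_rank(party):
                # Rank of the most important value the party's position promotes
                # (lower index = more important in this context).
                return min(ranking.index(v) for v in values_for[party])

            # The party whose position promotes the highest-ranked value prevails
            # (a crude stand-in for legal balancing).
            return min(values_for, key=best_rank)

        print(decide("stability_context"))  # -> plaintiff
        print(decide("utility_context"))    # -> defendant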

    Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based on Modelling the Beliefs and Concerns of the Persuadee

    Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on Monte Carlo Tree Search, which allows optimization in real time. We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy.
    Comment: The Data Appendix containing the arguments, argument graphs, assignment of concerns to arguments, preferences over concerns, and assignment of beliefs to arguments is available at http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/unistudydata.zip; the code is available at https://github.com/ComputationalPersuasion/MCC
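
    The paper's strategy uses Monte Carlo Tree Search over persuasion dialogues; the sketch below is only a much-simplified flat Monte Carlo move selection, included to convey the idea of choosing the next argument by simulating dialogue continuations against a user model. The arguments, belief values, and discounting rule are invented.

        import random

        # Hypothetical modelled probability that the persuadee believes each argument.
        belief = {"health_argument": 0.7, "cost_argument": 0.5, "social_argument": 0.3}

        def simulate(first_move, remaining, trials=2000):
            # Estimate the success rate of dialogues that open with `first_move`
            # by playing out random orderings of the remaining arguments.
            wins = 0
            for _ in range(trials):
                order = [first_move] + random.sample(remaining, len(remaining))
                for depth, move in enumerate(order):
                    # The persuadee is persuaded by the first argument they accept;
                    # later moves are discounted to reflect waning attention.
                    if random.random() < belief[move] * (0.8 ** depth):
                        wins += 1
                        break
            return wins / trials

        def best_opening_move(candidates):
            scores = {m: simulate(m, [c for c in candidates if c != m])
                      for m in candidates}
            return max(scores, key=scores.get), scores

        move, scores = best_opening_move(list(belief))
        print(move, scores)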

    Implementations in Machine Ethics: A Survey

    Increasingly complex and autonomous systems require machine ethics to maximize the benefits and minimize the risks to society arising from the new technology. It is challenging to decide which type of ethical theory to employ and how to implement it effectively. This survey provides a threefold contribution. First, it introduces a trimorphic taxonomy to analyze machine ethics implementations with respect to their object (ethical theories) as well as their nontechnical and technical aspects. Second, an exhaustive selection and description of relevant works is presented. Third, the new taxonomy is applied to the selected works to identify dominant research patterns and lessons for the field, and to suggest future directions for research.