
    Assessing relevance

    This paper advances an approach to relevance grounded in patterns of material inference called argumentation schemes, which can account for the reconstruction and evaluation of relevance relations. To account for relevance in different types of dialogical contexts, including those pursuing non-cognitive goals, and to measure the scalar strength of relevance, communicative acts are conceived as dialogue moves whose coherence with previous moves or with the context is represented as the conclusion of steps of material inference. Such inferences are described using argumentation schemes and are evaluated by considering 1) their defeasibility and 2) the acceptability of the implicit premises on which they are based. The assessment of both the relevance of an utterance and its strength depends on the evaluation of three interrelated factors: 1) the number of inferential steps required; 2) the types of argumentation schemes involved; and 3) the implicit premises required.
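
    The abstract gives no numeric procedure, so the following Python sketch is a purely hypothetical illustration of how the three listed factors (number of inferential steps, the defeasibility of the schemes involved, and the acceptability of their implicit premises) might be combined into a scalar strength; all names, weights, and the aggregation itself are assumptions, not the authors' method.

    # Purely hypothetical sketch; the paper defines no numeric scoring, so this
    # aggregation is illustrative only.
    from dataclasses import dataclass

    @dataclass
    class InferenceStep:
        scheme: str                   # argumentation scheme used (e.g. "argument from sign")
        defeasibility: float          # 0.0 (easily defeated) .. 1.0 (hard to defeat)
        premise_acceptability: float  # 0.0 .. 1.0, acceptability of its implicit premises

    def relevance_strength(steps):
        """Strength drops with more inferential steps, weaker schemes,
        or less acceptable implicit premises."""
        if not steps:
            return 0.0
        strength = 1.0
        for step in steps:
            strength *= step.defeasibility * step.premise_acceptability
        return strength / len(steps)  # penalize longer inferential chains

    # A dialogue move linked to the context by two inferential steps.
    move = [InferenceStep("practical reasoning", 0.8, 0.9),
            InferenceStep("argument from sign", 0.6, 0.7)]
    print(round(relevance_strength(move), 3))  # 0.151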

    The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms and Argumentation

    An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of integrating the ethical views of such stakeholders into the behavior of the autonomous system. We propose an ethical recommendation component, which we call Jiminy, that uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. Jiminy represents the ethical views of each stakeholder using normative systems and has three ways of resolving moral dilemmas involving the opinions of the stakeholders. First, Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Second, Jiminy combines the normative systems of the stakeholders, so that their combined expertise may resolve the dilemma. Third, and only if the first two methods have failed, Jiminy uses context-sensitive rules to decide which of the stakeholders takes precedence. At the abstract level, these three methods are characterized by the addition of arguments, the addition of attacks among arguments, and the removal of attacks among arguments. We show how Jiminy can be used not only for ethical reasoning and collaborative decision making, but also for providing explanations about ethical behavior.
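
    The abstract-level characterization (adding arguments, adding attacks, removing attacks) can be read as operations on a Dung-style abstract argumentation framework; that reading is an assumption here, and the sketch below is a minimal illustration rather than the Jiminy implementation. It computes a grounded extension over a toy attack graph of stakeholder arguments and shows how adding an attack resolves a dilemma that a mutual attack left open.

    # Minimal Dung-style abstract argumentation sketch (illustrative only).
    # Arguments are strings; attacks are (attacker, target) pairs.

    def grounded_extension(arguments, attacks):
        """Iterate the characteristic function: keep accepting every argument
        all of whose attackers are attacked by already-accepted arguments."""
        accepted = set()
        while True:
            defended = {
                a for a in arguments
                if all(any((d, b) in attacks for d in accepted)
                       for (b, t) in attacks if t == a)
            }
            if defended == accepted:
                return accepted
            accepted = defended

    # Hypothetical stakeholder arguments about one dilemma.
    arguments = {"manufacturer", "society", "user"}
    attacks = {("manufacturer", "user"), ("user", "manufacturer")}
    print(grounded_extension(arguments, attacks))
    # only 'society' is accepted: the manufacturer/user dilemma stays unresolved

    # Adding an attack breaks the symmetry and resolves the dilemma.
    print(grounded_extension(arguments, attacks | {("society", "user")}))
    # the manufacturer's argument is now defended alongside society's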

    On the Differences Between Practical and Cognitive Presumptions

    The study of presumptions has intensified in argumentation theory in recent years. Although scholars have put forward different accounts, they mostly agree that presumptions can be studied in deliberative and epistemic contexts, have distinct contextual functions (guiding decisions vs. acquiring information), and promote different kinds of goals (non-epistemic vs. epistemic). Accordingly, there are "practical" and "cognitive" presumptions. In this paper, I show that the differences between practical and cognitive presumptions go far beyond contextual considerations. The central aim is to explore Nicholas Rescher's contention that both types of presumption have a closely analogous pragmatic function, i.e., that practical and cognitive presumptions are made to avoid greater harm in circumstances of epistemic uncertainty. By comparing schemes of practical and cognitive reasoning, I show that Rescher's contention requires qualification. Moreover, not only do practical and cognitive presumptions have distinct pragmatic functions, but they also perform different dialogical functions (enabling progress vs. preventing regress) and, in some circumstances, cannot be defeated by the same kinds of evidence. Hence, I conclude that the two classes of presumptions merit distinct treatment in argumentation theory.

    Improving Practical Reasoning and Argumentation

    This thesis justifies the need for, and develops, a new integrated model of practical reasoning and argumentation. After framing the work in terms of what is reasonable rather than what is rational (chapter 1), I apply the model of practical argumentation analysis and evaluation provided by Fairclough and Fairclough (2012) to a paradigm case of unreasonable individual practical argumentation provided by the mass murderer Anders Behring Breivik (chapter 2). The application shows that, by following the model, Breivik can relatively easily conclude that his reasoning toward mass murder is reasonable, a result that is understood to be unacceptable. The reasons the model permits such a conclusion are identified as conceptual confusions ingrained in the model, a tension in how values function within it, and a lack of creativity on Breivik's part. By distinguishing dialectical from dialogical and reasoning from argumentation, for individual and for multiple participants, chapter 3 addresses these conceptual confusions and lays the foundation for the design of a new integrated model of practical reasoning and argumentation (chapter 4). After laying out the theoretical aspects of the new model, it is used to re-test Breivik's reasoning in light of a discussion of the motivation for the new place and role of moral considerations (chapter 5). The application of the new model shows ways in which Breivik could have concluded that his practical argumentation was unreasonable, and the model is thus argued to improve upon that of Fairclough and Fairclough. It is acknowledged, however, that since the model cannot guarantee a reasonable conclusion, improving the critical creative capacity of the individual using it is also of paramount importance (chapter 6). The thesis concludes by discussing the contemporary importance of improving practical reasoning and by pointing to areas for further research (chapter 7).

    Toward Artificial Argumentation

    The field of computational models of argument is emerging as an important area of artificial intelligence research. This reflects the recognition that if we are to develop robust intelligent systems, it is imperative that they can handle incomplete and inconsistent information in a way that somehow emulates how humans tackle such a complex task. One of the key ways humans do this is through argumentation: either internally, by evaluating arguments and counterarguments, or externally, for instance by entering into a discussion or debate in which arguments are exchanged. As we report in this review, recent developments in the field are leading to technology for artificial argumentation in the legal, medical, and e-government domains, and interesting tools for argument mining, debating technologies, and argumentation solvers are emerging.