515 research outputs found

    Contextual Deliberation of Cognitive Agents in Defeasible Logic

    This article extends Defeasible Logic to deal with the contextual deliberation process of cognitive agents. First, we introduce meta-rules to reason with rules. Meta-rules are rules whose consequent is itself a rule for motivational components, such as obligations, intentions and desires; in other words, they include nested rules. Second, we introduce explicit preferences among rules, which handle complex structures in which nested rules can be involved.
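    The abstract above mentions explicit preferences among defeasible rules. As a rough illustration only (not the paper's formalism, which also covers nested meta-rules), the following sketch shows how a superiority relation can resolve conflicts between plain defeasible rules; all names here (`Rule`, `defeasibly_provable`, the Tweety rules) are hypothetical.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        name: str
        antecedent: frozenset  # literals that must all hold
        consequent: str        # a literal; "~p" denotes the negation of p

    def negate(lit: str) -> str:
        return lit[1:] if lit.startswith("~") else "~" + lit

    def defeasibly_provable(goal, facts, rules, prefer):
        """True if some applicable rule concludes `goal` and every applicable
        conflicting rule is overridden by a preferred supporting rule.
        `prefer[r]` is the set of rule names that rule r overrides."""
        applicable = [r for r in rules if r.antecedent <= facts]
        supporting = [r for r in applicable if r.consequent == goal]
        attacking = [r for r in applicable if r.consequent == negate(goal)]
        if not supporting:
            return False
        return all(
            any(a.name in prefer.get(s.name, set()) for s in supporting)
            for a in attacking
        )

    # Classic example: penguins defeat the default that birds fly.
    r1 = Rule("r1", frozenset({"bird"}), "flies")
    r2 = Rule("r2", frozenset({"penguin"}), "~flies")
    facts = frozenset({"bird", "penguin"})
    prefer = {"r2": {"r1"}}  # r2 overrides r1

    print(defeasibly_provable("~flies", facts, [r1, r2], prefer))  # True
    print(defeasibly_provable("flies", facts, [r1, r2], prefer))   # False
    ```

    The superiority relation does the same work here that explicit preferences do in the article: both conflicting rules fire, and the preference decides which conclusion survives.
    
    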

    In memoriam Douglas N. Walton: the influence of Doug Walton on AI and law

    Doug Walton, who died in January 2020, was a prolific author whose work in informal logic and argumentation had a profound influence on Artificial Intelligence, including Artificial Intelligence and Law. He was also very interested in interdisciplinary work, and was a frequent and generous collaborator. In this paper, seven leading researchers in AI and Law, all past programme chairs of the International Conference on AI and Law who have worked with him, describe his influence on their work.

    Acting Upon Uncertain Beliefs

    This paper discusses the conditions under which an agent is rationally permitted to leave some uncertain propositions relevant to her decision out of her deliberation. Relying on the view that belief involves a defeasible disposition to treat a proposition as true in one's reasoning, we examine the conditions under which such a disposition can be overridden, and under which an agent should take into account her uncertainty about a proposition she believes in the course of a particular deliberation. We argue that, in some contexts, an agent can be faced with the choice of either accepting or not accepting a proposition she believes in the course of her deliberation. We provide a description of such higher-order deliberations within the framework of expected utility theory and draw conclusions regarding the phenomenon of pragmatic encroachment on knowledge.

    On the Differences Between Practical and Cognitive Presumptions

    The study of presumptions has intensified in argumentation theory over recent years. Although scholars put forward different accounts, they mostly agree that presumptions can be studied in deliberative and epistemic contexts, have distinct contextual functions (guiding decisions vs. acquiring information), and promote different kinds of goals (non-epistemic vs. epistemic). Accordingly, there are "practical" and "cognitive" presumptions. In this paper, I show that the differences between practical and cognitive presumptions go far beyond contextual considerations. The central aim is to explore Nicholas Rescher's contention that both types of presumptions have a closely analogous pragmatic function, i.e., that practical and cognitive presumptions are made to avoid greater harm in circumstances of epistemic uncertainty. By comparing schemes of practical and cognitive reasoning, I show that Rescher's contention requires qualification. Moreover, not only do practical and cognitive presumptions have distinct pragmatic functions, but they also perform different dialogical functions (enabling progress vs. preventing regress) and, in some circumstances, cannot be defeated by the same kinds of evidence. Hence, I conclude that the two classes of presumptions merit distinct treatment in argumentation theory.

    Logic, Reasoning, Argumentation: Insights from the Wild

    This article provides a brief, selective overview and discussion of recent research into natural language argumentation that may inform the study of human reasoning, on the assumption that an episode of argumentation issues an invitation to accept a corresponding inference. As this research shows, arguers typically seek to establish new consequences based on prior information. And they typically do so vis-à-vis a real or an imagined opponent, or an opponent-position, in ways that remain sensitive to considerations of context, audience, and goals. Deductively valid inferences remain a limiting case of such reasoning. In view of these insights, it may appear less surprising that allegedly "irrational" behavior can regularly be produced in experimental settings that expose subjects to standardized reasoning tasks.

    Introduction to the Special Issue


    Full & Partial Belief
