
    Intuitions and the modelling of defeasible reasoning: some case studies

    The purpose of this paper is to address some criticisms recently raised by John Horty in two articles against the validity of two commonly accepted defeasible reasoning patterns, viz. reinstatement and floating conclusions. I shall argue that Horty's counterexamples, although they significantly deepen our understanding of these reasoning patterns, do not show their invalidity. Some of them reflect patterns which, if made explicit in the formalisation, avoid the unwanted inference without having to give up the criticised inference principles. Other examples seem to involve hidden assumptions about the specific problem which, if made explicit, are nothing but extra information that defeats the defeasible inference. These considerations will be put in a wider perspective by reflecting on the nature of defeasible reasoning principles as principles of justified acceptance rather than 'real' logical inference. (Comment: Proceedings of the 9th International Workshop on Non-Monotonic Reasoning (NMR'2002), Toulouse, France, April 19-21, 2002.)
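
    As an illustrative aside (not taken from the paper), the reinstatement pattern it discusses can be reproduced in a few lines: in an abstract argumentation framework where A attacks B and B attacks C, the grounded semantics reinstates C because its only attacker is defeated. The Python sketch below computes the grounded extension as the least fixpoint of the characteristic function.

# Illustrative sketch (not from the paper): the reinstatement pattern in a
# tiny abstract argumentation framework under grounded semantics. An argument
# is defended by a set S when every attacker of it is attacked by some member of S.

def grounded_extension(arguments, attacks):
    """Least fixpoint of the characteristic function F(S) = {a | S defends a}."""
    extension = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == extension:
            return extension
        extension = defended

# Reinstatement: A attacks B and B attacks C; A defeats B, so C is reinstated.
arguments = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "C")}
print(grounded_extension(arguments, attacks))  # {'A', 'C'}

    The paper's counterexamples question whether accepting C in situations of this shape is always intuitively justified.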

    Weakly-admissible Semantics and the Propagation of Ambiguity in Abstract Argumentation Semantics

    The concept of ambiguous literals in defeasible logics is mapped to the set of undecided arguments identified by an argumentation semantics. It follows that Dung’s complete semantics are all ambiguity propagating, since the undecided status of an attacking argument is always propagated to the attacked argument, unless the latter is defeated by another accepted argument. In this paper we investigate a novel family of abstract argumentation semantics, called weakly-admissible semantics, in which an acceptable argument need not be defended from the attacks of undecided arguments. Weakly-admissible semantics are conflict-free, ambiguity blocking, and non-admissible in Dung’s sense, employing a more relaxed, defence-based notion of admissibility; they allow reinstatement, generate extensions that are supersets of the grounded extension, and credulously accept at least what Dung’s complete semantics credulously accept.
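
    As a minimal illustration (not from the paper), ambiguity propagation can be seen in the grounded labelling of a small framework: two arguments that attack each other are both left undecided, and that undecidedness propagates to anything they attack, whereas an ambiguity-blocking, weakly-admissible semantics could instead accept the attacked argument.

# Illustrative sketch (not from the paper): ambiguity propagation under the
# grounded labelling. An argument is labelled "in" when all of its attackers
# are "out", "out" when some attacker is "in", and "undec" otherwise.

def grounded_labelling(arguments, attacks):
    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            attackers = [b for b in arguments if (b, a) in attacks]
            if all(label.get(b) == "out" for b in attackers):
                label[a] = "in"
                changed = True
            elif any(label.get(b) == "in" for b in attackers):
                label[a] = "out"
                changed = True
    return {a: label.get(a, "undec") for a in arguments}

# A and B attack each other, so both stay undecided; C is attacked only by the
# undecided B, so the ambiguity propagates and C is undecided as well. An
# ambiguity-blocking (weakly-admissible) semantics could instead accept C.
arguments = {"A", "B", "C"}
attacks = {("A", "B"), ("B", "A"), ("B", "C")}
print(grounded_labelling(arguments, attacks))
# {'A': 'undec', 'B': 'undec', 'C': 'undec'}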

    Reifying default reasons in justification logic

    The main goal of this paper is to argue that justification logic advances the formal study of default reasons. After introducing a variant of justification logic with default reasons, we first show how the logic can be used to model undercutting attacks and exclusionary reasons. We then compare this logic to Reiter’s default logic interpreted as an argumentation framework. The comparison is done by analyzing differences in the way process trees are built for the two logics.
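
    For readers unfamiliar with the comparison target, a Reiter-style default has the shape "prerequisite : justification / consequent", and a process applies defaults one at a time while their prerequisites are derivable and their justifications stay consistent. The sketch below is not from the paper and is only a naive propositional approximation of growing one branch of a process tree.

# Naive propositional sketch of Reiter-style defaults (an assumption for
# illustration, not the paper's logic): a default fires while its prerequisite
# is derived and the negation of its justification is not. Real default logic
# requires a fixpoint / process-tree construction; this grows one greedy branch.

from dataclasses import dataclass

def negate(literal: str) -> str:
    return literal[1:] if literal.startswith("~") else "~" + literal

@dataclass(frozen=True)
class Default:
    prerequisite: str
    justification: str
    consequent: str

def grow_branch(facts, defaults):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for d in defaults:
            if (d.prerequisite in derived
                    and negate(d.justification) not in derived
                    and d.consequent not in derived):
                derived.add(d.consequent)
                changed = True
    return derived

# "Birds normally fly": bird : flies / flies
defaults = [Default("bird", "flies", "flies")]
print(grow_branch({"bird"}, defaults))             # {'bird', 'flies'}
print(grow_branch({"bird", "~flies"}, defaults))   # default blocked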

    Defeasible Logic Programming: An Argumentative Approach

    The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism will be used for deciding between contradictory goals. Queries will be supported by arguments that could be defeated by other arguments. A query q will succeed when there is an argument A for q that is warranted, i.e., the argument A that supports q is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP allows building applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing an agent's knowledge and for providing an argumentation-based reasoning mechanism to agents. (Comment: 43 pages, to appear in the journal "Theory and Practice of Logic Programming".)
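
    The dialectical flavour of the warrant procedure can be hinted at with a toy sketch (not DeLP's actual algorithm; the arguments and the defeat relation are given by hand rather than built from a defeasible logic program): a query is warranted when some argument supporting it is not defeated by any argument that itself survives the same recursive analysis.

# Toy dialectical check (not DeLP's actual warrant procedure): an argument
# survives if every defeater is itself defeated; a query is warranted when
# some supporting argument survives.

def undefeated(argument, defeaters, visited=frozenset()):
    return all(
        not undefeated(d, defeaters, visited | {argument})
        for d in defeaters.get(argument, [])
        if d not in visited      # crude guard against looping on mutual defeat
    )

# Hypothetical arguments: A1 supports query q, A2 defeats A1, A3 defeats A2.
defeaters = {"A1": ["A2"], "A2": ["A3"], "A3": []}
supporting_q = ["A1"]

warranted = any(undefeated(a, defeaters) for a in supporting_q)
print(warranted)   # True: A3 defeats A2, so A1 is reinstated and q is warranted

    DeLP additionally constrains which sequences of defeaters form acceptable argumentation lines; the visited guard above is only a crude stand-in for that.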

    Comparing and Extending the Use of Defeasible Argumentation with Quantitative Data in Real-World Contexts

    Dealing with uncertain, contradicting, and ambiguous information is still a central issue in Artificial Intelligence (AI). As a result, many formalisms have been proposed or adapted so as to consider non-monotonicity. A non-monotonic formalism is one that allows the retraction of previous conclusions or claims, drawn from premises, in light of new evidence, offering some desirable flexibility when dealing with uncertainty. Among the possible options, knowledge-based, non-monotonic reasoning approaches have seen increased use in practice. Nonetheless, only a limited number of works and researchers have performed any sort of comparison among them. This research article focuses on evaluating the inferential capacity of defeasible argumentation, a formalism particularly envisioned for modelling non-monotonic reasoning. In addition, fuzzy reasoning and expert systems, extended for handling non-monotonicity of reasoning, are selected and employed as baselines, due to their vast and accepted use within the AI community. Computational trust was selected as the domain of application of such models. Trust is an ill-defined construct; hence, reasoning applied to the inference of trust can be seen as non-monotonic. Inference models were designed to assign trust scalars to editors of the Wikipedia project. Scalars assigned to recognised trustworthy editors provided the basis for the analysis of the models’ inferential capacity according to evaluation metrics from the domain of computational trust. In particular, argument-based models demonstrated more robustness than those built upon the baselines, regardless of the knowledge bases or datasets employed. This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches. It provides publicly available implementations of the designed inference models, which may be a useful aid to scholars interested in performing non-monotonic reasoning activities. It adds to previous works by empirically enhancing the generalisability of defeasible argumentation as a compelling approach to reason with quantitative data and uncertain knowledge.
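
    To give a flavour of how defeasible arguments can yield a quantitative inference, the following hypothetical sketch (the features, thresholds, and weights are invented, not those of the article's models) activates arguments whose premises hold on an editor's data, resolves attacks between activated arguments, and aggregates the survivors into a trust scalar.

# Hypothetical sketch (features, thresholds, and weights are invented, not the
# article's models): defeasible arguments fire on an editor's quantitative
# data, attacks among fired arguments are resolved by dropping any attacked
# argument (sufficient here because no attacker is itself attacked), and the
# surviving arguments' weights are summed into a trust scalar.

editor = {"edit_count": 3200, "revert_ratio": 0.28, "account_age_days": 900}

# (name, premise over the data, trust weight, arguments it attacks)
arguments = [
    ("experienced",    lambda e: e["edit_count"] > 1000,       0.5, []),
    ("long_standing",  lambda e: e["account_age_days"] > 365,  0.3, []),
    ("often_reverted", lambda e: e["revert_ratio"] > 0.2,      0.0, ["experienced"]),
]

fired = {name for name, premise, _, _ in arguments if premise(editor)}
attacks = {(name, target) for name, _, _, targets in arguments
           for target in targets if name in fired and target in fired}

surviving = {a for a in fired if not any((b, a) in attacks for b in fired)}
trust = sum(weight for name, _, weight, _ in arguments if name in surviving)
print(sorted(surviving), trust)   # ['long_standing', 'often_reverted'] 0.3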

    A Compact Argumentation System for Agent System Specification

    We present a non-monotonic logic tailored for specifying compact autonomous agent systems. The language is a consistent instantiation of a logic-based argumentation system extended with Brooks' subsumption concept and varying degrees of belief. In particular, we present a practical implementation of the language by developing a meta-encoding method that translates logical specifications into compact general logic programs. The language allows n-ary predicate literals with the usual first-order term definitions. We show that the space complexity of the resulting general logic program is linear in the size of the original theory.

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on the comparison of distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on the examination of their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. It is a pattern similar to human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for the implementation of non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature. They present incomplete, ambiguous and retractable pieces of evidence. Hence, reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability. In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution, while presenting robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.