
    Extending Modular Semantics for Bipolar Weighted Argumentation (Technical Report)

    Weighted bipolar argumentation frameworks offer a tool for decision support and social media analysis. Arguments are evaluated by an iterative procedure that takes initial weights and attack and support relations into account. Until recently, the convergence of these iterative procedures in cyclic graphs was not well understood. Mossakowski and Neuhaus recently introduced a unification of different approaches and proved first convergence and divergence results. We build on this work, simplify and generalize the convergence results, and complement them with runtime guarantees. As it turns out, there is a tradeoff between a semantics' convergence guarantees and its ability to move strength values away from the initial weights. We demonstrate that divergence problems can be avoided without this tradeoff by continuizing semantics. Semantically, we extend the framework with a Duality property that ensures a symmetric impact of attack and support relations. We also present a Java implementation of modular semantics and explain the practical usefulness of the theoretical ideas.
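    To illustrate the kind of iterative procedure described above, here is a minimal sketch of strength updates in a small weighted bipolar graph. The aggregation and influence functions, the `update_strengths` helper, and the tolerance/iteration cap are illustrative assumptions, not the modular semantics or the Java implementation from the paper.

```python
# Sketch only: strengths start at the initial weights and are repeatedly
# recomputed from the strengths of attackers and supporters until the
# largest change falls below a tolerance (convergence) or an iteration cap
# is hit (possible divergence in cyclic graphs).

def update_strengths(weights, attackers, supporters,
                     max_iters=10_000, tol=1e-9):
    """weights: dict arg -> initial weight in [0, 1].
    attackers/supporters: dict arg -> list of attacking/supporting args."""
    strength = dict(weights)
    for _ in range(max_iters):
        new = {}
        for a, w in weights.items():
            # Aggregate supporter strengths minus attacker strengths.
            energy = (sum(strength[s] for s in supporters.get(a, []))
                      - sum(strength[b] for b in attackers.get(a, [])))
            # Influence: pull the strength from w towards 1 for positive
            # energy and towards 0 for negative energy (one possible choice).
            if energy >= 0:
                new[a] = w + (1 - w) * (energy / (1 + energy))
            else:
                new[a] = w + w * (energy / (1 - energy))
        delta = max(abs(new[a] - strength[a]) for a in weights)
        strength = new
        if delta < tol:
            return strength, True   # converged
    return strength, False          # did not converge within the cap


# Tiny cyclic example: a and b attack each other, c supports a.
weights = {"a": 0.5, "b": 0.5, "c": 0.8}
attackers = {"a": ["b"], "b": ["a"]}
supporters = {"a": ["c"]}
print(update_strengths(weights, attackers, supporters))
```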

    Syntactic Reasoning with Conditional Probabilities in Deductive Argumentation

    Evidence from studies, such as in science or medicine, often corresponds to conditional probability statements. Furthermore, evidence can conflict, in particular when it comes from multiple studies. Whilst it is natural to make sense of such evidence using arguments, there is a lack of a systematic formalism for representing and reasoning with conditional probability statements in computational argumentation. We address this shortcoming by providing a formalization of conditional probabilistic argumentation based on probabilistic conditional logic. We provide a semantics and a collection of comprehensible inference rules that give different insights into evidence. We show how arguments constructed from proofs, and attacks between them, can be analyzed as argument graphs using dialectical semantics and via the epistemic approach to probabilistic argumentation. Our approach allows for a transparent and systematic way of handling the uncertainty that often arises in evidence.
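    As a rough illustration of syntactic reasoning with conditional probability statements, the sketch below represents conditionals of the form (B | A)[p] and applies one elementary detachment rule that follows from the law of total probability. The `Conditional` class, the rule, and the example numbers are assumptions chosen for illustration; they are not taken from the paper's formalism or rule collection.

```python
# Sketch only: a conditional probability statement (B | A)[p] plus one
# simple inference rule over such statements.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Conditional:
    consequent: str            # B in (B | A)[p]
    antecedent: Optional[str]  # A, or None for an unconditional statement
    prob: float                # p

def probabilistic_detachment(fact: Conditional, rule: Conditional):
    """From (A)[p] and (B | A)[q], derive p*q <= P(B) <= p*q + (1 - p)."""
    assert fact.antecedent is None and rule.antecedent == fact.consequent
    p, q = fact.prob, rule.prob
    return (p * q, p * q + (1 - p))   # lower and upper bound on P(B)

# Hypothetical evidence about a treatment and recovery.
treated = Conditional("treated", None, 0.9)
recovery_given_treated = Conditional("recovers", "treated", 0.7)
print(probabilistic_detachment(treated, recovery_given_treated))
# -> (0.63, 0.73): the bounds the combined evidence gives for P(recovers)
```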

    ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report]

    We propose ProtoArgNet, a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning as found, e.g., in ProtoPNet. While earlier approaches associate every class with multiple prototypical-parts, ProtoArgNet uses super-prototypes that combine prototypical-parts into single prototypical class representations. Furthermore, while earlier approaches use interpretable classification layers, e.g. logistic regression in ProtoPNet, ProtoArgNet improves accuracy by using multi-layer perceptrons while relying upon an interpretable reading thereof based on a form of argumentation. ProtoArgNet is customisable to user cognitive requirements through a process of sparsification of the multi-layer perceptron/argumentation component. Also, as opposed to other prototypical-part-learning approaches, ProtoArgNet can recognise spatial relations between prototypical-parts from different regions of an image, similar to how CNNs capture relations between patterns recognized in earlier layers.
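    The toy sketch below is only meant to make two of the ideas above concrete: combining prototypical-part activations into one super-prototype representation per class, and reading a dense classification layer argumentatively, with positive weights acting as supports and negative weights as attacks on a class. All shapes, random weights, and names are hypothetical; this is not the ProtoArgNet architecture.

```python
# Sketch only: super-prototype similarities feed a dense layer whose
# weights are read as supports (positive) and attacks (negative).

import numpy as np

rng = np.random.default_rng(0)
n_classes, parts_per_class, feat_dim = 3, 4, 8

# One super-prototype per class: its prototypical parts stacked into a
# single representation (here simply concatenated part vectors).
super_prototypes = rng.normal(size=(n_classes, parts_per_class * feat_dim))

# Dense (MLP-style) layer on top of the per-class similarities.
W = rng.normal(size=(n_classes, n_classes))

def classify(part_features):
    """part_features: (parts_per_class, feat_dim) patch features extracted
    from an image, one row per prototypical-part slot."""
    x = part_features.reshape(-1)                       # combined representation
    sims = np.maximum(super_prototypes @ x, 0.0)        # rectified similarities
    scores = W @ sims                                   # dense classification layer
    # Argumentative reading: each (class, super-prototype) weight is a
    # support if positive and an attack if negative, scaled by similarity.
    supports = np.clip(W, 0, None) * sims
    attacks = -np.clip(W, None, 0) * sims
    return scores.argmax(), supports, attacks

pred, supports, attacks = classify(rng.normal(size=(parts_per_class, feat_dim)))
print("predicted class:", pred)
```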

    Explaining Random Forests using Bipolar Argumentation and Markov Networks (Technical Report)

    Random forests are decision tree ensembles that can be used to solve a variety of machine learning problems. However, as the number of trees and their individual size can be large, their decision-making process is often incomprehensible. In order to reason about the decision process, we propose representing it as an argumentation problem. We generalize sufficient and necessary argumentative explanations using a Markov network encoding, discuss the relevance of these explanations, and establish relationships to families of abductive explanations from the literature. As the complexity of the explanation problems is high, we discuss a probabilistic approximation algorithm and present first experimental results.
    Comment: Accepted for presentation at AAAI 2023. Contains an appendix with proofs and additional details about the experiments.
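    As a rough sketch of a probabilistic sufficiency check for a random forest prediction (not the Markov network encoding or the paper's algorithm), the example below fixes a subset of features to the explained instance's values, samples the remaining features from the training data, and estimates how often the forest keeps its prediction. The `sufficiency` helper, the iris data, and the sampling scheme are illustrative assumptions.

```python
# Sketch only: Monte Carlo estimate of how "sufficient" a partial
# feature assignment is for keeping the forest's prediction.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def sufficiency(instance, fixed_features, n_samples=1000):
    """Estimate P(same prediction) when only `fixed_features` are kept."""
    target = forest.predict(instance.reshape(1, -1))[0]
    # Sample completions: take random training rows and overwrite the
    # fixed features with the instance's values.
    samples = X[rng.integers(len(X), size=n_samples)].copy()
    samples[:, fixed_features] = instance[fixed_features]
    return np.mean(forest.predict(samples) == target)

x = X[0]
print(sufficiency(x, fixed_features=[2, 3]))  # petal length and width only
print(sufficiency(x, fixed_features=[0]))     # sepal length only
```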