127 research outputs found

    A Plausibility Semantics for Abstract Argumentation Frameworks

    We propose and investigate a simple ranking-measure-based extension semantics for abstract argumentation frameworks, grounded in their generic instantiation by default knowledge bases and in the ranking construction semantics for default reasoning. In this context, we consider the path from structured to logical to shallow semantic instantiations. The resulting well-justified JZ-extension semantics diverges from more traditional approaches. Comment: Proceedings of the 15th International Workshop on Non-Monotonic Reasoning (NMR 2014). This is an improved and extended version of the author's ECSQARU 2013 paper.
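    The extension-semantics setting the abstract works in can be illustrated with a minimal sketch of a Dung-style abstract argumentation framework. The code below computes the classical grounded extension by iterating the characteristic function; it is a toy baseline for context, not an implementation of the paper's ranking-based JZ-extension semantics.

    ```python
    def grounded_extension(arguments, attacks):
        """Least fixed point of the characteristic function F(S) =
        {a | every attacker of a is attacked by some member of S}."""
        attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
        S = set()
        while True:
            defended = {a for a in arguments
                        if all(any((d, b) in attacks for d in S)
                               for b in attackers[a])}
            if defended == S:
                return S
            S = defended

    # Example: a attacks b, b attacks c.  The grounded extension is {a, c}:
    # a is unattacked, and a defends c against b.
    args = {"a", "b", "c"}
    atk = {("a", "b"), ("b", "c")}
    print(sorted(grounded_extension(args, atk)))  # ['a', 'c']
    ```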

    Syntactic Reasoning with Conditional Probabilities in Deductive Argumentation

    Evidence from studies, such as in science or medicine, often corresponds to conditional probability statements. Furthermore, evidence can conflict, in particular when coming from multiple studies. Whilst it is natural to make sense of such evidence using arguments, there is a lack of a systematic formalism for representing and reasoning with conditional probability statements in computational argumentation. We address this shortcoming by providing a formalization of conditional probabilistic argumentation based on probabilistic conditional logic. We provide a semantics and a collection of comprehensible inference rules that give different insights into evidence. We show how arguments constructed from proofs, and the attacks between them, can be analysed as argument graphs using dialectical semantics and via the epistemic approach to probabilistic argumentation. Our approach allows for a transparent and systematic way of handling the uncertainty that often arises in evidence.
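    The kind of conflict between study-level conditional probability statements that the abstract describes can be sketched as follows. This is an illustrative toy representation, assuming evidence is reported as probability intervals for a conditional; the names and the conflict criterion are mine, not the paper's probabilistic conditional logic.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Conditional:
        """Evidence reported as an interval [low, high] for
        P(consequent | antecedent)."""
        consequent: str
        antecedent: str
        low: float
        high: float

    def conflicts(c1, c2):
        """Two statements about the same conditional conflict when their
        reported probability intervals do not overlap."""
        same = (c1.consequent, c1.antecedent) == (c2.consequent, c2.antecedent)
        disjoint = c1.high < c2.low or c2.high < c1.low
        return same and disjoint

    # Two studies reporting P(recovery | treatment) with disjoint intervals.
    study_a = Conditional("recovery", "treatment", 0.70, 0.85)
    study_b = Conditional("recovery", "treatment", 0.30, 0.45)
    print(conflicts(study_a, study_b))  # True
    ```

    In an argumentation setting, such a conflict would induce an attack between the arguments built from the two studies.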

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades, with numerous ideas in common but little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
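    The weaken-rather-than-delete idea can be sketched for a tiny propositional knowledge base. Below, sentences are truth-functions over valuations, entailment is checked by enumerating all valuations, and a culprit sentence is replaced by a user-supplied weakening instead of being deleted, loosely in the spirit of pseudo-contraction. The weakening map and the naive repair loop are illustrative assumptions, not the paper's operators.

    ```python
    from itertools import product

    ATOMS = ["p", "q"]

    def entails(kb, goal):
        """kb |= goal iff every valuation satisfying all of kb satisfies goal."""
        for vals in product([False, True], repeat=len(ATOMS)):
            v = dict(zip(ATOMS, vals))
            if all(f(v) for _, f in kb) and not goal(v):
                return False
        return True

    def gentle_repair(kb, unwanted, weakenings):
        """Replace (rather than delete) sentences, one at a time, until the
        unwanted consequence is no longer entailed."""
        kb = list(kb)
        for i, (name, _) in enumerate(kb):
            if not entails(kb, unwanted):
                break
            if name in weakenings:
                kb[i] = weakenings[name]
        return kb

    # KB: {p, p -> q}.  Unwanted consequence: q.
    kb = [("p", lambda v: v["p"]),
          ("p->q", lambda v: (not v["p"]) or v["q"])]
    unwanted = lambda v: v["q"]

    # Weaken "p -> q" to a tautology instead of deleting it outright.
    repaired = gentle_repair(kb, unwanted, {"p->q": ("T", lambda v: True)})
    print(entails(kb, unwanted), entails(repaired, unwanted))  # True False
    ```

    A real gentle repair would choose a weakening that preserves as many consequences as possible; here the tautology simply demonstrates that replacement, not deletion, breaks the entailment.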


    Interpretation of Natural-language Robot Instructions: Probabilistic Knowledge Representation, Learning, and Reasoning

    A robot that can simply be told in natural language what to do -- this has been one of the ultimate long-standing goals in both Artificial Intelligence and Robotics research. In near-future applications, robotic assistants and companions will have to understand and perform commands such as "set the table for dinner", "make pancakes for breakfast", or "cut the pizza into 8 pieces". Although such instructions are only vaguely formulated, complex sequences of sophisticated and accurate manipulation activities need to be carried out in order to accomplish the respective tasks. The acquisition of knowledge about how to perform these activities from huge collections of natural-language instructions from the Internet has garnered a lot of attention within the last decade. However, natural language is typically massively unspecific, incomplete, ambiguous and vague, and thus requires powerful means for interpretation. This work presents PRAC -- Probabilistic Action Cores -- an interpreter for natural-language instructions which is able to resolve vagueness and ambiguity in natural language and infer missing information pieces that are required to render an instruction executable by a robot. To this end, PRAC formulates the problem of instruction interpretation as a reasoning problem in first-order probabilistic knowledge bases. In particular, the system uses Markov logic networks as a carrier formalism for encoding uncertain knowledge. A novel framework for reasoning about unmodeled symbolic concepts is introduced, which incorporates ontological knowledge from taxonomies and exploits semantically similar relational structures in a domain of discourse. The resulting reasoning framework thus enables more compact representations of knowledge and exhibits strong generalization performance when being learnt from very sparse data.
Furthermore, a novel approach for completing directives is presented, which applies semantic analogical reasoning to transfer knowledge collected from thousands of natural-language instruction sheets to new situations. In addition, a cohesive processing pipeline is described that transforms vague and incomplete task formulations into sequences of formally specified robot plans. The system is connected to a plan executive that is able to execute the computed plans in a simulator. Experiments conducted in a publicly accessible, browser-based web interface showcase that PRAC is capable of closing the loop from natural-language instructions to their execution by a robot.
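    The taxonomy-based reasoning about unmodeled concepts described above can be sketched with a toy example: an unseen concept is mapped to the most similar concept the model knows, where similarity is measured via the deepest shared ancestor in a taxonomy. The taxonomy, the scoring, and all names below are illustrative assumptions, not WordNet or the actual PRAC ontology.

    ```python
    # child -> parent edges of a tiny hand-made taxonomy
    PARENT = {
        "pancake": "food", "pizza": "food", "food": "entity",
        "knife": "tool", "spatula": "tool", "tool": "entity",
    }

    def ancestors(concept):
        """Path from a concept up to the taxonomy root (inclusive)."""
        path = [concept]
        while concept in PARENT:
            concept = PARENT[concept]
            path.append(concept)
        return path

    def similarity(a, b):
        """Depth of the deepest shared ancestor (higher = more similar)."""
        shared = set(ancestors(a)) & set(ancestors(b))
        return max(len(ancestors(s)) for s in shared) if shared else 0

    def resolve(unmodeled, modeled):
        """Map an unseen concept to the most similar known concept."""
        return max(modeled, key=lambda m: similarity(unmodeled, m))

    # "pizza" is unmodeled; among the known concepts, "pancake" shares the
    # deeper ancestor "food", so it wins over "knife" (shared only "entity").
    print(resolve("pizza", ["pancake", "knife"]))  # pancake
    ```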

    A Stalnakerian Analysis of Metafictive Statements


    Modeling, Quantifying, and Limiting Adversary Knowledge

    Users participating in online services are required to relinquish control over potentially sensitive personal information, exposing them to intentional or unintentional misuse of said information by the service providers. Users wishing to avoid this must either abstain from often extremely useful services, or provide false information, which is usually contrary to the terms of service they must abide by. An attractive middle-ground alternative is to maintain control in the hands of the users and provide a mechanism with which information that is necessary for useful services can be queried. Users need not trust any external party in the management of their information but are now faced with the problem of judging when queries by service providers should be answered or when they should be refused due to revealing too much sensitive information. Judging query safety is difficult. Two queries may be benign in isolation but might reveal more than a user is comfortable with in combination. Additionally, malicious adversaries who wish to learn more than allowed might query in a manner that attempts to hide the flows of sensitive information. Finally, users cannot rely on human inspection of queries due to their volume and the general lack of expertise. This thesis tackles the automation of query judgment, giving the self-reliant user a means with which to discern benign queries from dangerous or exploitive ones. The approach is based on explicit modeling and tracking of the knowledge of adversaries as they learn about a user through the queries they are allowed to observe. The approach quantifies the absolute risk a user is exposed to, taking into account all the information that has already been revealed when deciding whether to answer a query.
Proposed techniques for approximate but sound probabilistic inference are used to tackle the tractability of the approach, letting the user trade off utility (in terms of the queries judged safe) and efficiency (in terms of the expense of knowledge tracking), while maintaining the guarantee that risk to the user is never underestimated. We apply the approach to settings where user data changes over time and settings where multiple users wish to pool their data to perform useful collaborative computations without revealing too much information. By addressing one of the major obstacles preventing the viability of personal information control, this work brings the attractive proposition closer to reality.
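    The core knowledge-tracking idea can be sketched in miniature: maintain the adversary's belief as a probability distribution over the secret, condition it on each answered query, and refuse any query whose worst-case answer would push the adversary's best guess above a risk threshold. The secret, queries, and threshold below are illustrative assumptions; the thesis's actual model and sound approximation techniques are far richer.

    ```python
    from fractions import Fraction

    def posterior(prior, predicate, answer):
        """Condition the belief on observing predicate(secret) == answer."""
        post = {s: p for s, p in prior.items() if predicate(s) == answer}
        total = sum(post.values())
        return {s: p / total for s, p in post.items()}

    def safe_to_answer(prior, predicate, threshold):
        """A query is safe only if EVERY possible answer keeps the adversary's
        maximum posterior belief (worst-case vulnerability) at or below the
        threshold -- risk is never underestimated."""
        answers = {predicate(s) for s in prior}
        return all(max(posterior(prior, predicate, a).values()) <= threshold
                   for a in answers)

    # Secret: the user's age, uniform over 20..29 from the adversary's view.
    belief = {age: Fraction(1, 10) for age in range(20, 30)}

    # Halving the space leaves a max belief of 1/5: safe at threshold 1/2.
    print(safe_to_answer(belief, lambda a: a >= 25, Fraction(1, 2)))  # True
    # An equality query can pin the age exactly (belief 1): unsafe.
    print(safe_to_answer(belief, lambda a: a == 25, Fraction(1, 2)))  # False
    ```

    Note that safety is judged against the current belief, so two individually safe queries can still be refused in sequence once the first answer has sharpened the adversary's knowledge.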