446 research outputs found

    A Framework for Combining Defeasible Argumentation with Labeled Deduction

    In recent years there has been an increasing demand for a variety of logical systems, prompted mostly by applications of logic in AI and related areas. Labeled Deductive Systems (LDS) were developed as a flexible methodology for formalizing such complex logical systems. Defeasible argumentation has proven to be a successful approach to formalizing commonsense reasoning, encompassing many alternative formalisms for defeasible reasoning. Argument-based frameworks share some common notions (such as the concepts of argument and defeater) along with a number of particular features that make it difficult to compare them with one another from a logical viewpoint. This paper introduces LDSar, an LDS for defeasible argumentation in which many important issues concerning defeasible argumentation are captured within a unified logical framework. We also discuss some logical properties and extensions that emerge from the proposed framework. Comment: 15 pages, presented at CMSRA Workshop 2003, Buenos Aires, Argentina.
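The core idea of labeled deduction, as the abstract describes it, is that every derived formula carries a label recording how it was obtained, so that conflicting arguments can be compared. A minimal sketch of this idea (hypothetical names and rules, not the actual LDSar calculus):

```python
from dataclasses import dataclass

# A labeled formula pairs a conclusion with a label recording the
# defeasible rules used to derive it (illustrative only).

@dataclass(frozen=True)
class LabeledFormula:
    formula: str     # conclusion, e.g. "flies(tweety)"
    label: frozenset # names of defeasible rules supporting it

def combine(premises, conclusion):
    """Derive a new labeled formula; its label is the union of the
    premises' labels, so every derivation carries its own support."""
    support = frozenset().union(*(p.label for p in premises))
    return LabeledFormula(conclusion, support)

# Two conflicting arguments about the same literal:
bird = LabeledFormula("bird(tweety)", frozenset())
r1 = LabeledFormula("flies(tweety)", frozenset({"r1: birds fly"}))
penguin = LabeledFormula("penguin(tweety)", frozenset())
r2 = LabeledFormula("~flies(tweety)", frozenset({"r2: penguins do not fly"}))

a = combine([bird, r1], "flies(tweety)")
b = combine([penguin, r2], "~flies(tweety)")

def attacks(x, y):
    """x attacks y when their conclusions are complementary literals."""
    return x.formula == "~" + y.formula or y.formula == "~" + x.formula

print(attacks(b, a))  # True: the two arguments conflict
```

The labels make the notion of "defeater" mentioned in the abstract inspectable: defeat criteria can be defined over the labels rather than over raw formulas.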

    Introducing probabilistic reasoning in defeasible argumentation using labeled deductive systems

    Labeled Deductive Systems (LDS) were developed as a rigorous but flexible methodology to formalize complex logical systems, such as temporal logics, database query languages and defeasible reasoning systems. LDSAR is an LDS-based framework for defeasible argumentation which subsumes different existing argumentation frameworks, providing a testbed for studying different relevant features (such as emerging logical properties, ontological aspects, semantic characterization, etc.). This paper discusses some relevant issues concerning the introduction of probabilistic reasoning into defeasible argumentation. In particular, we consider a first approach to recasting the existing LDSAR framework in order to incorporate numeric attributes (certainty factors) as part of the argumentation process. Track: Theoretical aspects of artificial intelligence. Red de Universidades con Carreras en Informática (RedUNCI).

    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are regaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering the areas that already exploit logic-based approaches as well as those that are more likely to adopt logic-based approaches in the future.

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers, without further ado, leverage the reasoning performance in LogiKEy. Case studies in which the LogiKEy framework and methodology have been applied and tested give evidence that HOL's undecidability often does not hinder efficient experimentation. Comment: 50 pages; 10 figures.
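The "semantical embedding" approach the abstract describes reduces a deontic logic to the semantics of a richer host logic. A deliberately simplified illustration of the idea (not LogiKEy's actual HOL encoding): standard deontic logic's obligation operator evaluated over an explicit Kripke model, with propositions represented as sets of worlds. All names and the tiny model are assumptions made for the example.

```python
# Toy Kripke model: worlds plus an "ideality" accessibility relation.
WORLDS = {"w0", "w1", "w2"}
# r[w] = the set of deontically ideal worlds seen from w
r = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}

# A proposition is embedded as the set of worlds where it holds.
keep_promise = {"w1", "w2"}
pay_tax = {"w1"}

def O(phi):
    """Obligation: O(phi) holds at w iff phi holds at every ideal
    world accessible from w (standard deontic logic semantics)."""
    return {w for w in WORLDS if r[w] <= phi}

print("w0" in O(keep_promise))  # True: every ideal world keeps the promise
print("w0" in O(pay_tax))       # False: w2 is ideal but does not pay tax
```

In LogiKEy the same move is made inside HOL itself, which is what lets off-the-shelf HOL provers reason about the embedded deontic logics directly.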

    Combining quantitative and qualitative reasoning in defeasible argumentation

    Labeled Deductive Systems (LDS) were developed as a rigorous but flexible methodology to formalize complex logical systems, such as temporal logics, database query languages and defeasible reasoning systems. LDSAR is an LDS-based framework for defeasible argumentation which subsumes different existing argumentation frameworks, providing a testbed for the study of different relevant features (such as logical properties and ontological aspects, among others). This paper presents LDS AR, an extension of LDSAR that incorporates the ability to combine quantitative and qualitative features within a unified argumentative setting. Our approach involves the assignment of certainty factors to formulas in the knowledge base. These values are propagated when performing argumentative inference, offering an alternative source of information for evaluating the strength of arguments in the dialectical analysis. We will also discuss some emerging logical properties of the resulting framework. Track: Logic and Artificial Intelligence. Red de Universidades con Carreras en Informática (RedUNCI).
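The propagation of certainty factors through an argument can be sketched as follows. The min-combination rule and the numbers are assumptions for illustration, not the paper's exact calculus; the point is only that a derived conclusion inherits a numeric strength usable as an extra defeat criterion.

```python
# Hedged sketch: a rule's conclusion is capped by its weakest premise
# (a common, conservative propagation choice; not LDSAR's actual rule).

def propagate(premise_cfs, rule_cf):
    """Certainty of a derived conclusion: the rule's certainty factor
    capped by the weakest supporting premise."""
    return min(min(premise_cfs), rule_cf)

# Argument A: bird(t) [0.9] --r1 [0.8]--> flies(t)
cf_a = propagate([0.9], 0.8)   # 0.8
# Counter-argument B: penguin(t) [0.95] --r2 [0.9]--> ~flies(t)
cf_b = propagate([0.95], 0.9)  # 0.9

# The numeric strengths give a quantitative tie-breaker in the
# dialectical analysis, alongside qualitative defeat criteria:
winner = "B defeats A" if cf_b > cf_a else "A survives"
print(cf_a, cf_b, winner)  # 0.8 0.9 B defeats A
```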


    07122 Abstracts Collection -- Normative Multi-agent Systems

    From 18.03.07 to 23.03.07, the Dagstuhl Seminar 07122 "Normative Multi-agent Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Examining the Modelling Capabilities of Defeasible Argumentation and non-Monotonic Fuzzy Reasoning

    Knowledge-representation and reasoning methods have been extensively researched within Artificial Intelligence. Among these, argumentation has emerged as an ideal paradigm for inference under uncertainty with conflicting knowledge. Its value has been predominantly demonstrated via analyses of the topological structure of argument graphs and their formal properties. However, limited research exists on the examination and comparison of its inferential capacity in real-world modelling tasks and against other knowledge-representation and non-monotonic reasoning methods. This study focuses on a novel comparison between defeasible argumentation and non-monotonic fuzzy reasoning when applied to the representation and assessment of the ill-defined construct of human mental workload. Different argument-based and non-monotonic fuzzy reasoning models have been designed over knowledge bases of incremental complexity containing uncertain and conflicting information provided by a human reasoner. Findings showed that their inferences have moderate convergent and face validity when compared, respectively, to those of an existing baseline instrument for mental workload assessment and to a perception of mental workload self-reported by human participants. This confirmed that these models also reasonably represent the construct under consideration. Furthermore, argument-based models had on average a lower mean squared error against the self-reported perception of mental workload than fuzzy-reasoning models and the baseline instrument. The contribution of this research is to provide scholars interested in formalisms for knowledge representation and non-monotonic reasoning with a novel approach for empirically comparing their inferential capacity.
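The inferential step of an argument-based model, as opposed to a fuzzy-reasoning one, rests on argumentation semantics. A compact sketch of the standard grounded semantics of a Dung argumentation framework (the example framework is invented for illustration):

```python
def grounded_extension(args, attacks):
    """Grounded extension: least fixed point of the characteristic
    function; repeatedly add every argument the current set defends
    (i.e. all of its attackers are themselves attacked by the set)."""
    ext = set()
    while True:
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in ext)
                   for b in args if (b, a) in attacks)
        }
        if defended == ext:
            return ext
        ext = defended

# a <- b <- c : argument c reinstates a by attacking a's only attacker b
args = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, attacks)))  # ['a', 'c']
```

Accepted arguments (here `a` and `c`) are what such a model ultimately reports as its inference; the comparisons described above are made over outputs of this kind.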

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on comparing distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on examining their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This mirrors human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for implementing non-monotonic reasoning models of inference, namely: expert systems, fuzzy reasoning and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous and retractable pieces of evidence, so reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, highlighting poor stability.
    In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution, while presenting robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.

    Legal compliance by design (LCbD) and through design (LCtD) : preliminary survey

    1st Workshop on Technologies for Regulatory Compliance, co-located with the 30th International Conference on Legal Knowledge and Information Systems (JURIX 2017). The purpose of this paper is twofold: (i) carrying out a preliminary survey of the literature and research projects on Compliance by Design (CbD); and (ii) clarifying the double process of (a) extending business management techniques to other regulatory fields, and (b) converging trends in legal theory, legal technology and Artificial Intelligence. The paper highlights the connections and differences we found across different domains and proposals. We distinguish three different policy-driven types of CbD: (i) business, (ii) regulatory, and (iii) legal. The recent deployment of ethical views and the implementation of general principles of privacy and data protection lead to the conclusion that, in order to appropriately define legal compliance, Compliance through Design (CtD) should be differentiated from CbD.