24 research outputs found

    Proof-theoretic Semantics for Intuitionistic Multiplicative Linear Logic

    This work is the first exploration of proof-theoretic semantics for a substructural logic. It focuses on the base-extension semantics (B-eS) for intuitionistic multiplicative linear logic (IMLL). The starting point is a review of Sandqvist’s B-eS for intuitionistic propositional logic (IPL), for which we propose an alternative treatment of conjunction that takes the form of the generalized elimination rule for the connective. The resulting semantics is shown to be sound and complete. This motivates our main contribution, a B-eS for IMLL in which the definitions of the logical constants all take the form of their elimination rules and for which soundness and completeness are established.
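    In our own notation (a sketch, not quoted from the paper: \Vdash_B for support in a base B, C ranging over extensions of B, p over atoms), the contrast between the two treatments of conjunction can be put as follows:

        % Sandqvist's original clause, mirroring the introduction rule:
        \Vdash_B \varphi \wedge \psi \iff \Vdash_B \varphi \text{ and } \Vdash_B \psi
        % The proposed alternative, mirroring the generalized elimination rule:
        \Vdash_B \varphi \wedge \psi \iff
            \forall C \supseteq B,\ \forall p \text{ atomic}:\
            (\varphi, \psi \Vdash_C p) \Rightarrow \Vdash_C p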

    Bandler-Kohout Subproduct with Yager’s Families of Fuzzy Implications: A Comprehensive Study

    Approximate reasoning schemes involving fuzzy sets are among the best-known applications of fuzzy logic in the wider sense. Fuzzy Inference Systems (FIS) or Fuzzy Inference Mechanisms (FIM) have many degrees of freedom, viz., the underlying fuzzy partition of the input and output spaces, the fuzzy logic operations employed, the fuzzification and defuzzification mechanisms used, etc. This freedom gives rise to a variety of FIS with differing capabilities.
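    As a small, concrete illustration of one such choice, the following Python sketch (ours, not the authors'; names, shapes, and the toy data are assumptions) implements Bandler-Kohout subproduct inference with the Yager implication I(a, b) = b^a, under the convention 0^0 = 1:

        import numpy as np

        def yager_implication(a, b):
            # Yager's f-generated implication: I(a, b) = b**a, with 0**0 := 1.
            a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
            out = np.power(b, a)
            out[(a == 0) & (b == 0)] = 1.0  # enforce the 0**0 = 1 convention
            return out

        def bk_subproduct(a_prime, relation):
            # Bandler-Kohout subproduct: B'(y) = inf over x of I(A'(x), R(x, y)).
            # a_prime has shape (nx,); relation has shape (nx, ny).
            return yager_implication(a_prime[:, None], relation).min(axis=0)

        # Toy example: a triangular input fuzzy set over a 5-point domain
        # and an arbitrary fuzzy relation R(x, y).
        a_prime = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
        R = np.array([[0.2, 0.7, 1.0, 0.4],
                      [0.9, 0.1, 0.6, 0.8],
                      [0.5, 0.5, 0.3, 1.0],
                      [1.0, 0.2, 0.9, 0.6],
                      [0.3, 0.8, 0.4, 0.2]])
        print(bk_subproduct(a_prime, R))  # inferred output fuzzy set B'(y)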

    A New Universal Approximation Result For Fuzzy Systems, Which Reflects CNF-DNF Duality

    There are two main fuzzy-system methodologies for translating expert rules into a logical formula: in Mamdani's methodology, we get a DNF formula (a disjunction of conjunctions), and in a methodology which uses logical implications, we get, in effect, a CNF formula (a conjunction of disjunctions). For both methodologies, universal approximation results have been proven which produce, for each approximated function f(x), two different approximating relations R_DNF(x, y) and R_CNF(x, y). Since in fuzzy logic there is a known relation F_CNF(x) ≥ F_DNF(x) between the CNF and DNF forms of a propositional formula F, it is reasonable to expect that we should be able to prove the existence of approximations for which a similar relation R_CNF(x, y) ≥ R_DNF(x, y) holds. Such existence is proved in our paper.
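    To make the two constructions concrete, here is a short Python sketch (our own illustration, not code from the paper; the Godel implication is just one possible choice for the implication-based side) that builds both relations from the same rule base "if A_i(x) then B_i(y)":

        import numpy as np

        def godel_implication(a, b):
            # Godel R-implication: I(a, b) = 1 if a <= b, else b.
            return np.where(a <= b, 1.0, b)

        def build_relations(A, B):
            # A: (n_rules, nx) antecedent memberships A_i(x)
            # B: (n_rules, ny) consequent memberships B_i(y)
            Ax = A[:, :, None]   # (n_rules, nx, 1)
            By = B[:, None, :]   # (n_rules, 1, ny)
            # Mamdani / DNF: max over rules of min(A_i(x), B_i(y))
            r_dnf = np.minimum(Ax, By).max(axis=0)
            # implication-based / CNF: min over rules of I(A_i(x), B_i(y))
            r_cnf = godel_implication(Ax, By).min(axis=0)
            return r_dnf, r_cnf

        # Two toy rules over a 3-point input and 2-point output domain.
        A = np.array([[0.0, 0.5, 1.0],
                      [1.0, 0.5, 0.0]])
        B = np.array([[0.2, 0.9],
                      [0.8, 0.1]])
        r_dnf, r_cnf = build_relations(A, B)
        print(r_dnf)
        print(r_cnf)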

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume.

    Automated Reasoning

    This volume, LNAI 13385, constitutes the refereed proceedings of the 11th International Joint Conference on Automated Reasoning, IJCAR 2022, held in Haifa, Israel, in August 2022. The 32 full research papers and 9 short papers presented together with two invited talks were carefully reviewed and selected from 85 submissions. The papers focus on the following topics: Satisfiability, SMT Solving, Arithmetic; Calculi and Orderings; Knowledge Representation and Justification; Choices, Invariance, Substitutions and Formalization; Modal Logics; Proof Systems and Proof Search; Evolution, Termination and Decision Problems. This is an open access book.

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur shortly in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
