
    Explainable Fuzzy Systems: Paving the Way from Interpretable Fuzzy Systems to Explainable AI Systems

    This book has been written during a challenging journey that lasted more than ten years. In 2005, Jose M. Alonso, still a PhD student under the supervision of Luis Magdalena, attended for the first time the conference of the European Society for Fuzzy Logic and Technology (EUSFLAT) in Barcelona. He presented the papers "A simplification process of linguistic knowledge bases" and "Integrating induced knowledge in an expert fuzzy-based system for intelligent motion analysis on ground robots", which would later become an important part of his PhD thesis. In addition, he met Corrado Mencar, who presented the work "Some fundamental interpretability issues in fuzzy modeling". This meeting was the beginning of a nice friendship and a fruitful collaboration. In the following years, Jose M. Alonso stayed in Bari twice (2010 and 2017) as a visiting researcher at the Università degli Studi di Bari Aldo Moro. On those occasions, a collaboration was first established and then strengthened with Ciro Castiello.

    Indeed, the four authors of this book have been doing research, mostly together, on the interpretability of fuzzy systems (and beyond) for more than fifteen years, and have published several conference and journal papers, as well as organized a number of conference events (special sessions, tutorials, panels, etc.). Moreover, the four authors are all members of the Task Force (TF) on EXplainable Fuzzy Systems (EXFS), which operates under the umbrella of the Fuzzy Systems Technical Committee (FSTC) in the Computational Intelligence Society (CIS) of the Institute of Electrical and Electronics Engineers (IEEE). Corrado Mencar and Jose M. Alonso are the chairs (and, in 2020, founders) of the IEEE-CIS TF-EXFS, whose final goal is paving the way from interpretable fuzzy systems towards eXplainable Artificial Intelligence (eXplainable AI, or XAI for short).

    The very first idea of writing this book dates back to 2010. The demand for interpretable models was receiving a growing echo even outside academia. A severe worldwide financial crisis had started just a few years before, urging analysts to argue against decisions made by global credit rating agencies on the basis of black-box models that no one could understand. At the time, the authors' concern was mostly related to designing interpretable fuzzy models that might replace black-box models in transparent decision-making support systems. In 2016 the focus turned to creating a bridge between interpretable fuzzy systems and XAI. In that year, the US Defense Advanced Research Projects Agency (DARPA) launched the first challenge for a new generation of XAI systems, remarking that "even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans". In addition, the European General Data Protection Regulation (GDPR) was approved in 2016 and became effective in May 2018, remarking that European citizens have the "right to an explanation" of decisions affecting them, no matter who (or what AI system) makes such decisions. Since 2018, there has been increasing worldwide interest in XAI. It is worth noting that about 30% of publications in Scopus (before October 2017) concerning XAI came from authors well recognized in the community of researchers in Fuzzy Logic.
    Most of these publications pay attention either to characterizing interpretability issues of fuzzy systems, or to designing interpretable fuzzy systems, or to building fuzzy systems with a good balance between interpretability and accuracy. EXFS go a step beyond interpretable fuzzy systems. They benefit from interpretable fuzzy-grounded knowledge representation and reasoning while also enhancing human-machine interaction through multi-modal (e.g., graphical or textual) effective explanations. Given the multifaceted nature of explainability, there are explanations which are factual, counterfactual, contrastive, task-oriented, etc. Accordingly, designing EXFS is a matter of careful human-centered design, which goes beyond the usual topics treated by the community of researchers in Fuzzy Logic. Thus, the multidisciplinary field of XAI demands researchers, from both academia and industry, with a holistic view of fundamentals and current research trends in the field of Computational Intelligence (with special attention to Fuzzy Logic, but also addressing XAI challenges on Neural Networks, Evolutionary Computation, Bayesian Networks, Bio-inspired algorithms, etc.), as well as researchers with a complementary background in Cognitive and Social Science, Neuroscience, Computational Linguistics, Human-Machine Interaction, etc.

    The book is structured so as to give a gentle introduction to the key concepts related to EXFS, with a special focus on Fuzzy Rule-Based Systems (FRBSs). Chap. 1 provides the reader with some general ideas related to XAI and motivates the adoption of Granular Computing in general, and Fuzzy Logic in particular, as key methodologies for XAI. Since the focus of the book is on FRBSs, Chap. 2 is devoted to introducing the fundamental concepts, definitions and notation related to Fuzzy Set Theory and Fuzzy Systems. A necessary condition for explainability in fuzzy systems is the interpretability of the knowledge base. Interpretability is multifaceted, subjective and blurred in nature. To give a computational machinery for designing interpretable fuzzy systems, a constraint-based approach is generally used, whereby interpretability is characterized by a number of criteria. For such reasons, Chap. 3 is entirely focused on defining such constraints and criteria, which are organized in accordance with the hierarchy that is usually established in an FRBS (fuzzy sets, linguistic variables, information granules, rules, model, model adaptation).

    Chap. 4 describes indexes for interpretability assessment, which can be categorized into two main classes: structure-based interpretability indexes, which pertain to the symbolic representation of the knowledge base (linguistic terms, rules, etc.), and semantics-based interpretability indexes, which relate to the functional definition of the symbolic structures. Having a toolbox of interpretability constraints and criteria, along with their assessment indexes, is not enough to ensure interpretability in fuzzy systems. To this end, specific design approaches are required, which are described in Chap. 5. The design process requires several decision steps and tasks, which have been divided into the design of the knowledge base and the design of the processing structure. Noticeably, the role of the designer is crucial in making the appropriate choices for balancing the required level of interpretability with the required precision of the resulting system.
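    To give a flavour of such criteria, the sketch below (a minimal, self-contained Python illustration with hypothetical names, not the book's toolbox) computes one structure-based measure, the compactness of a rule base, and one semantics-based check, whether the linguistic terms of a variable form a strong fuzzy partition so that they cover the domain and remain distinguishable. Both are commonly cited interpretability criteria; real assessment indexes aggregate many such measures.

    # Minimal illustrative sketch (hypothetical names; not the book's toolbox).
    # One structure-based measure (rule-base compactness) and one
    # semantics-based check (strong fuzzy partition), two criteria commonly
    # discussed in the interpretability literature.

    def triangular(x, a, b, c):
        """Membership degree of x in a triangular fuzzy set with support [a, c] and core b."""
        if x < a or x > c:
            return 0.0
        if x <= b:
            return 1.0 if b == a else (x - a) / (b - a)
        return 1.0 if c == b else (c - x) / (c - b)

    def rule_base_compactness(rules):
        """Structure-based view: fewer and shorter rules are easier to read.
        Each rule is a list of (variable, linguistic term) antecedents.
        Returns (number of rules, total rule length)."""
        return len(rules), sum(len(rule) for rule in rules)

    def is_strong_partition(fuzzy_sets, points):
        """Semantics-based view: at every sampled point the memberships of all
        linguistic terms sum to 1, so terms cover the domain and stay distinguishable."""
        return all(abs(sum(triangular(x, *fs) for fs in fuzzy_sets) - 1.0) < 1e-6
                   for x in points)

    # Three terms ("low", "medium", "high") on a [0, 10] temperature domain.
    temperature_terms = [(0.0, 0.0, 5.0), (0.0, 5.0, 10.0), (5.0, 10.0, 10.0)]
    samples = [i / 10 for i in range(101)]
    print(is_strong_partition(temperature_terms, samples))   # True

    rules = [[("temperature", "low"), ("humidity", "high")],
             [("temperature", "high")]]
    print(rule_base_compactness(rules))                      # (2, 3)

    A strong partition is only one possible semantic requirement; weaker notions of coverage and distinguishability can be checked in the same spirit by relaxing the tolerance on the summed membership degrees.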
    Chap. 6 reports a case study on the design and validation of an explainable FRBS, which combines expert knowledge and knowledge automatically extracted from data, on a real-world problem. Interpretability is achieved by using a specific methodology, HILK (Highly Interpretable Linguistic Knowledge), and a related tool, GUAJE (Generating Understandable and Accurate fuzzy systems in a Java Environment). Together they gather all the concepts described in the previous chapters to help the designer produce interpretable fuzzy systems. Then, explainability is achieved through the Linguistic Description of Complex Phenomena (LDCP) methodology for generating natural-language explanations of the inference provided by such fuzzy systems. Finally, Chap. 7 gives an outlook on the evolution from interpretable fuzzy systems to EXFS and their key role in XAI.

    The book is intended for researchers and practitioners who wish to explore the main ideas on EXFS, which are presented in a comprehensive and homogeneous way. Researchers in the theory of interpretable fuzzy systems will find information on the state of the art in interpretability, as well as many hints on future developments and open challenges. Researchers interested in applying EXFS to real-world problems will find a thorough description of the methodological processes required to design interpretable fuzzy systems, as well as the explanatory tools for generating visualizations and natural-language descriptions of the outcome of such systems. The book also offers a guide for practitioners who want to build intelligent solutions based on EXFS by using the available software tools and libraries.

    EXFS can play a central role in XAI, but there is still a lot of work to do in the race to build fully self-explaining machines. We hope both theorists and practitioners find this book useful.
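    As a closing illustration of the kind of factual, rule-based explanation discussed above, the following sketch (plain Python with hypothetical names; it is neither GUAJE nor an implementation of LDCP) verbalizes the most activated rule of a toy fuzzy rule base as a short natural-language explanation of an inference.

    # Illustrative sketch only (hypothetical names; not HILK, GUAJE or LDCP):
    # turn the most activated fuzzy rule into a short factual explanation.

    def fire_rule(rule, memberships):
        """Activation strength of a rule under AND (minimum) semantics.
        rule: {"if": [(variable, term), ...], "then": conclusion}
        memberships: memberships[variable][term] -> degree in [0, 1]."""
        return min(memberships[var][term] for var, term in rule["if"])

    def explain(rules, memberships):
        """Pick the strongest rule and verbalize it as a factual explanation."""
        best = max(rules, key=lambda r: fire_rule(r, memberships))
        reasons = " and ".join(f"{var} is {term}" for var, term in best["if"])
        strength = fire_rule(best, memberships)
        return f'The system suggests "{best["then"]}" because {reasons} (activation {strength:.2f}).'

    rules = [
        {"if": [("temperature", "high"), ("humidity", "low")], "then": "increase watering"},
        {"if": [("temperature", "low")], "then": "reduce watering"},
    ]
    memberships = {"temperature": {"high": 0.8, "low": 0.1},
                   "humidity": {"low": 0.6, "high": 0.4}}
    print(explain(rules, memberships))
    # The system suggests "increase watering" because temperature is high
    # and humidity is low (activation 0.60).

    Richer modalities of explanation (counterfactual or contrastive statements, graphical summaries of the fuzzy partitions, and so on) can be built on the same information about which rules fired and how strongly; the book discusses these directions in detail.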