13 research outputs found

    On the Multiple Roles of Ontologies in Explainable AI

    This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI and in the development of human-centric explainable systems and intelligible explanations. We consider three main perspectives in which ontologies can contribute significantly, namely reference modelling, common-sense reasoning, and knowledge refinement and complexity management. We overview some of the existing approaches in the literature, and we position them according to these three proposed perspectives. The paper concludes by discussing what challenges still need to be addressed to enable ontology-based approaches to explanation and to evaluate their human-understandability and effectiveness.

    Toward Accountable and Explainable Artificial Intelligence Part one: Theory and Examples

    Like other Artificial Intelligence (AI) systems, Machine Learning (ML) applications cannot explain their decisions, are marred by training-caused biases, and suffer from algorithmic limitations. Their eXplainable Artificial Intelligence (XAI) capabilities are typically measured in a two-dimensional space of explainability and accuracy, ignoring the accountability aspect. During system evaluations, measures of comprehensibility, predictive accuracy and accountability remain inseparable. We propose an Accountable eXplainable Artificial Intelligence (AXAI) capability framework for facilitating the separation and measurement of predictive accuracy, comprehensibility and accountability. The proposed framework, in its current form, allows the embedded levels of AXAI to be assessed, delineating ML systems in a three-dimensional space. The AXAI framework quantifies comprehensibility in terms of the readiness of users to apply the acquired knowledge, and assesses predictive accuracy in terms of the ratio of test to training data, the training data size and the number of false-positive inferences. To establish a chain of responsibility, accountability is measured in terms of the inspectability of the input cues, the data being processed and the output information. We demonstrate the application of the framework by assessing the AXAI capabilities of three ML systems. The reported work provides a basis for building AXAI capability frameworks for other genres of AI systems.
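
    As a rough illustration of the three-dimensional space the abstract describes, the Python sketch below places hypothetical ML systems at coordinates for comprehensibility, predictive accuracy and accountability. The class name, attribute names and example scores are assumptions for illustration, not the scoring rules defined in the paper.

    # Hypothetical sketch: placing ML systems as points in a three-dimensional
    # AXAI space (comprehensibility, predictive accuracy, accountability).
    # The attribute names and example scores are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class AXAIProfile:
        comprehensibility: float    # e.g. readiness of users to apply the acquired knowledge
        predictive_accuracy: float  # e.g. derived from test/train ratio, data size, false positives
        accountability: float       # e.g. inspectability of inputs, processing and outputs

        def as_point(self) -> tuple[float, float, float]:
            """Coordinates of the system in the 3-D AXAI space."""
            return (self.comprehensibility, self.predictive_accuracy, self.accountability)

    # Comparing two hypothetical systems in the AXAI space.
    rule_based = AXAIProfile(comprehensibility=0.8, predictive_accuracy=0.7, accountability=0.9)
    deep_net = AXAIProfile(comprehensibility=0.3, predictive_accuracy=0.9, accountability=0.4)
    print(rule_based.as_point(), deep_net.as_point())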

    Automotive Occupational Health Protection Profiles in Prevention of Musculoskeletal Symptoms

    This work was partly supported by the Foundation for Science and Technology (FCT) under the projects OPERATOR (ref. 04/SI/2019) and PREVOCUPAI (DSAIPA/AI/0105/2019), and Ph.D. grants PD/BDE/142816/2018 and PD/BDE/142973/2018. In automotive and industrial settings, occupational physicians are responsible for monitoring workers' health protection profiles. Workers' Functional Work Ability (FWA) status is used to create Occupational Health Protection Profiles (OHPP). This is a novel longitudinal study, in contrast with previous research that has predominantly relied on the causality and explainability of human-understandable models for industrial technical teams such as ergonomists. The application of artificial intelligence can support decision-making that goes from a worker's Functional Work Ability to explanations, by integrating explainability into medical restrictions and support in contexts of individual, work-related, and organizational risk conditions. A sample of 7857 records for the prognosis part of OHPP based on Functional Work Ability, recorded in Portuguese, was taken from the automotive industry between 2019 and 2021. The most suitable regression models for predicting the next medical appointment for the protection of workers' body parts were those based on CatBoost regression, with an RMSLE of 0.84 and a mean error of 1.23 weeks, respectively. The CatBoost algorithm is also used to predict the next body-part severity of the OHPP. This information can improve our understanding of potential risk factors for OHPP and help identify warning signs of the early stages of musculoskeletal symptoms and work-related absenteeism.
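
    For readers unfamiliar with the modelling setup named in the abstract, the following sketch shows a CatBoost regressor trained to predict the number of weeks until the next medical appointment and evaluated with RMSLE. The synthetic features, target and train/test split are hypothetical stand-ins, not the study's data or exact pipeline.

    # Minimal sketch, assuming a CatBoost regressor and RMSLE evaluation as in the
    # abstract; the synthetic features, target and split below are hypothetical.
    import numpy as np
    from catboost import CatBoostRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_log_error

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(18, 66, n),   # worker age (hypothetical feature)
        rng.integers(0, 5, n),     # body-part severity score (hypothetical feature)
        rng.integers(0, 2, n),     # work-restriction flag (hypothetical feature)
    ])
    y = rng.uniform(1, 52, n)      # weeks until the next appointment (hypothetical target)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = CatBoostRegressor(iterations=300, depth=6, learning_rate=0.1, verbose=0)
    model.fit(X_train, y_train)

    pred = np.clip(model.predict(X_test), 0, None)   # RMSLE needs non-negative predictions
    rmsle = np.sqrt(mean_squared_log_error(y_test, pred))
    print(f"RMSLE: {rmsle:.2f}; mean absolute error: {np.abs(pred - y_test).mean():.2f} weeks")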

    Explaining Explanation, Part 4: A Deep Dive on Deep Nets

    This is the fourth in a series of essays about explainable AI. Previous essays laid out the theoretical and empirical foundations. This essay focuses on Deep Nets and considers methods for allowing system users to generate self-explanations. This is accomplished by exploring how Deep Net systems perform when they are operating at their boundary conditions. Inspired by recent research into adversarial examples that demonstrate the weaknesses of Deep Nets, we invert the purpose of these adversarial examples and argue that spoofing can be used as a tool to answer contrastive explanation questions via user-driven exploration.
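
    To make the idea of inverting adversarial examples concrete, here is a minimal Python/PyTorch sketch that nudges an input toward a contrast class with an FGSM-style targeted step, so a user can explore what would have to change for the model to answer "class B instead of class A". The toy network, epsilon value and random input are illustrative assumptions, not the essay's specific procedure.

    # Hypothetical sketch of answering a contrastive question ("why class A and
    # not class B?") by nudging the input toward the contrast class with an
    # FGSM-style step. The toy network, epsilon and random input are assumptions.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))  # toy classifier
    model.eval()

    x = torch.randn(1, 10, requires_grad=True)  # input whose prediction the user questions
    contrast_class = torch.tensor([2])          # the "why not this?" class

    loss = nn.functional.cross_entropy(model(x), contrast_class)
    loss.backward()

    epsilon = 0.1                                       # size of the exploratory nudge
    x_spoofed = (x - epsilon * x.grad.sign()).detach()  # step toward the contrast class

    print("original prediction:", model(x).argmax(dim=1).item())
    print("spoofed prediction: ", model(x_spoofed).argmax(dim=1).item())
    # Gradient magnitudes indicate which input features drive the contrast most.
    print("most influential features:", x.grad.abs()[0].argsort(descending=True)[:3].tolist())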

    Design and Evaluation of User-Centered Explanations for Machine Learning Model Predictions in Healthcare

    Challenges in interpreting some high-performing models present complications in applying machine learning (ML) techniques to healthcare problems. Recently, there has been rapid growth in research on model interpretability; however, approaches to explaining complex ML models are rarely informed by end-user needs, and user evaluations of model interpretability are lacking, especially in healthcare. This makes it challenging to determine what explanation approaches might enable providers to understand model predictions in a comprehensible and useful way. Therefore, I aimed to utilize clinician perspectives to inform the design of explanations for ML-based prediction tools and improve the adoption of these systems in practice. In this dissertation, I proposed a new theoretical framework for designing user-centered explanations for ML-based systems. I then utilized the framework to propose explanation designs for predictions from a pediatric in-hospital mortality risk model. I conducted focus groups with healthcare providers to obtain feedback on the proposed designs, which was used to inform the design of a user-centered explanation. The user-centered explanation was evaluated in a laboratory study to assess its effect on healthcare provider perceptions of the model and decision-making processes. The results demonstrated that the user-centered explanation design improved provider perceptions of utilizing the predictive model in practice, but exhibited no significant effect on provider accuracy, confidence, or efficiency in making decisions. Limitations of the evaluation study design, including a small sample size, may have affected the ability to detect an impact on decision-making. Nonetheless, the predictive model with the user-centered explanation was positively received by healthcare providers, and demonstrated a viable approach to explaining ML model predictions in healthcare. Future work is required to address the limitations of this study and further explore the potential benefits of user-centered explanation designs for predictive models in healthcare. This work contributes a new theoretical framework for user-centered explanation design for ML-based systems that is generalizable outside the domain of healthcare. Moreover, the work provides meaningful insights into the role of model interpretability and explanation in healthcare while advancing the discussion on how to effectively communicate ML model information to healthcare providers.