
    What does explainable AI explain?

    Machine Learning (ML) models are increasingly used in industry, as well as in scientific research and social contexts. Unfortunately, ML models provide only partial solutions to real-world problems, focusing on predictive performance in static environments. Problem aspects beyond prediction, such as robustness in employment, knowledge generation in science, or providing recourse recommendations to end-users, cannot be directly tackled with ML models. Explainable Artificial Intelligence (XAI) aims to solve, or at least highlight, problem aspects beyond predictive performance through explanations. However, the field is still in its infancy, as fundamental questions such as “What are explanations?”, “What constitutes a good explanation?”, or “How do explanation and understanding relate?” remain open. In this dissertation, I combine philosophical conceptual analysis and mathematical formalization to clarify a prerequisite of these difficult questions, namely what XAI explains: I point out that XAI explanations are either associative or causal and either aim to explain the ML model or the modeled phenomenon. The thesis is a collection of five individual research papers that all aim to clarify how different problems in XAI are related to these different “whats”. In Paper I, my co-authors and I illustrate how to construct XAI methods for inferring associational phenomenon relationships. Paper II directly relates to the first; we formally show how to quantify the uncertainty of such scientific inferences for two XAI methods – partial dependence plots (PDP) and permutation feature importance (PFI). Paper III discusses the relationship between counterfactual explanations and adversarial examples; I argue that adversarial examples can be described as counterfactual explanations that alter the prediction but not the underlying target variable. In Paper IV, my co-authors and I argue that algorithmic recourse recommendations should help data-subjects improve their qualification rather than game the predictor. In Paper V, we address general problems with model-agnostic XAI methods and identify possible solutions.
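
    The two methods quantified in Paper II, partial dependence plots (PDP) and permutation feature importance (PFI), can be sketched in a few lines. The Python example below is a minimal illustration only, using a synthetic dataset and scikit-learn; the function names and data are placeholders and do not reproduce the dissertation's implementation or its uncertainty quantification.

        # Minimal sketch of PDP and PFI on a synthetic regression task.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        y = X[:, 0] ** 2 + X[:, 1] + rng.normal(scale=0.1, size=500)

        model = RandomForestRegressor(random_state=0).fit(X, y)

        def partial_dependence(model, X, feature, grid_size=20):
            """Average prediction while one feature is fixed to each grid value."""
            grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_size)
            pdp = []
            for value in grid:
                X_mod = X.copy()
                X_mod[:, feature] = value          # intervene on one feature
                pdp.append(model.predict(X_mod).mean())
            return grid, np.array(pdp)

        def permutation_importance(model, X, y, feature, n_repeats=10):
            """Increase in loss when one feature is randomly permuted."""
            base = mean_squared_error(y, model.predict(X))
            scores = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, feature] = rng.permutation(X_perm[:, feature])
                scores.append(mean_squared_error(y, model.predict(X_perm)) - base)
            return np.mean(scores)

        grid, pdp = partial_dependence(model, X, feature=0)
        pfi = permutation_importance(model, X, y, feature=0)
        print(pfi)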

    Examining Different Reasons Why People Accept Or Reject Scientific Claims

    The current project was designed to examine how cognitive style, cultural worldview, and conspiracy ideation correspond to various levels of agreement with scientific claims. Additionally, the kinds of justifications people provide for their position on scientific issues and the kinds of possible refutations of their scientific beliefs people are able to generate were qualitatively coded and analyzed. Participants were presented with a short survey asking about their level of agreement with scientific claims about biological evolution, anthropogenic climate change, pediatric vaccines, and genetically modified foods. Participants were asked two open-ended questions about each topic, one prompting participants to justify their level-of-agreement rating and the other prompting participants to generate possible refutations to their belief. Participants also completed measures of cognitive style, cultural worldview, and conspiracy ideation. I predicted that an analytical thinking style would be associated with overall higher levels of agreement with scientific claims, that intuitive thinking and conspiracy ideation would be associated with overall lower levels of agreement with scientific claims, and that agreement with scientific claims would be a function of cultural worldview. Results showed that greater agreement with all four scientific claims was related to a greater predisposition to analytical thinking and stronger self-reported political liberalism. I further hypothesized that the frequency of distinct categories of justifications and refutations would be predicted by level of agreement with scientific claims. Broadly, justifications were coded as non-justifications, subjective, evidential, or deferential, and refutations were coded as denials, subjective, evidential, or deferential. Results of chi-squared analyses revealed topic-specific patterns in participants’ reasoning, suggesting that people do not reason about scientific topics consistently. Different scientific claims appear, instead, to be accepted or rejected for different reasons. For example, evidence may be cited for one socio-scientific issue, but subjective experience or reasoning may be used to justify others. Regression analyses further indicated a nuanced relationship between the explicit reasoning provided by participants and their agreement with scientific claims. Higher agreement with all scientific claims was related to a greater frequency of explicitly referencing evidence in some form, but other categories of belief justification and belief refutation showed topic-specific relationships. Generally, findings from this research provide a crucial next step toward better understanding why individuals reject established science, as well as toward developing more effective means of improving scientific literacy.

    Explainable deep learning in plant phenotyping

    The increasing human population and variable weather conditions, due to climate change, pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather conditions and provide growers with tools to more effectively manage biotic and abiotic stresses in their crops. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain. For this reason, deep learning models are still considered to be black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning model's black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well studied in the context of plant phenotyping research. In this review article, we survey existing XAI studies in plant shoot phenotyping, as well as in related domains, to help plant researchers understand the benefits of XAI and make it easier for them to integrate XAI into their future studies. An elucidation of the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.
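
    As a concrete illustration of the kind of XAI technique discussed in such studies, the sketch below computes a simple gradient-based saliency map for a generic image classifier in PyTorch. The model and the random input image are placeholders (not taken from any phenotyping study); the point is only to show how pixel-level attributions can be read off the gradient of the predicted class score.

        # Minimal sketch of a gradient (saliency) map over an input image.
        import torch
        import torchvision.models as models

        model = models.resnet18(weights=None)   # stand-in for a phenotyping CNN
        model.eval()

        image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy plant image
        score = model(image)[0].max()           # score of the top predicted class
        score.backward()

        # Pixels with large gradient magnitude most influence the prediction;
        # visualizing this map highlights the image regions the model relied on.
        saliency = image.grad.abs().max(dim=1)[0]   # shape: (1, 224, 224)
        print(saliency.shape)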

    The Relationships Among Cognitive, Spiritual, and Wisdom Development in Adults

    This study explored whether (1) adult cognitive development correlates with spiritual development, (2) wisdom development mediates the relationship, and (3) age, gender, education level, socioeconomic status, or religious denomination are associated with level of cognitive, wisdom, or spiritual development. University students and alumni (N = 134) completed a demographic questionnaire, the Model of Hierarchical Complexity Helper-Person Problem (Commons & Pekkar, 2004), the Spiritual Assessment Inventory (Hall & Edwards, 1996, 2002), and the Self-Assessed Wisdom Scale (Webster, 2003). This study hypothesized that wisdom, understood to derive from both personality qualities and life experience, mediates the influence of cognitive development on spiritual development. This research aimed to provide empirical support for understanding the direction and degree of influence among cognitive, wisdom, and spiritual development. In the structural equation model, spiritual development was measured only as awareness of God. Cognitive development correlated significantly with spiritual awareness, with a moderate effect size. An inverse relationship was found between wisdom development and spiritual awareness. Wisdom development did not mediate the impact of cognitive development on spiritual awareness. Gender, age, education level, socioeconomic status, and religious affiliation were not associated with cognitive, wisdom, or spiritual development.

    A Multidisciplinary Design and Evaluation Framework for Explainable AI Systems

    Nowadays, algorithms analyze user data and affect the decision-making process for millions of people on matters like employment, insurance and loan rates, and even criminal justice. However, these algorithms, which serve critical roles in many industries, have their own biases that can result in discrimination and unfair decision-making. Explainable Artificial Intelligence (XAI) systems can be a path toward predictable and accountable AI by explaining AI decision-making processes to end users, thereby increasing user awareness and helping to prevent bias and discrimination. The broad spectrum of research on XAI, including the design of interpretable models, explainable user interfaces, and human-subject studies of XAI systems, spans disciplines such as machine learning, human-computer interaction (HCI), and visual analytics. The mismatch in objectives among scholars who define, design, and evaluate XAI may slow the overall advance of end-to-end XAI systems. My research aims to consolidate knowledge about the design and evaluation of XAI systems across multiple disciplines to further support the key benefits of algorithmic transparency and interpretability. To this end, I propose a comprehensive design and evaluation framework for XAI systems with step-by-step guidelines to pair different design goals with their evaluation methods for iterative system design cycles in multidisciplinary teams. This dissertation presents a comprehensive XAI design and evaluation framework to provide guidance for different design goals and evaluation approaches in XAI systems. After a thorough review of XAI research in the fields of machine learning, visualization, and HCI, I present a categorization of XAI design goals and evaluation methods and show a mapping between design goals for different XAI user groups and their evaluation methods. From my findings, I present a design and evaluation framework for XAI systems (Objective 1) to address the relation between different system design needs. The framework provides recommendations for different goals and ready-to-use tables of evaluation methods for XAI systems. The importance of this framework lies in providing guidance for researchers on different aspects of XAI system design in multidisciplinary team efforts. Then, I demonstrate and validate the proposed framework (Objective 2) through one end-to-end XAI system case study and two examples in which previous XAI systems are analyzed in terms of the framework. Finally, I present two contributions to my XAI design and evaluation framework to improve evaluation methods for XAI systems.

    Melanoma Recognition by Fusing Convolutional Blocks and Dynamic Routing between Capsules

    Skin cancer is one of the most common types of cancer in the world, with melanoma being the most lethal form. Automatic melanoma diagnosis from skin images has recently gained attention within the machine learning community, due to the complexity involved. In the past few years, convolutional neural network models have been commonly used to approach this issue. This type of model, however, has disadvantages that sometimes hamper its application in real-world situations, e.g., the difficulty of building transformation-invariant models and the inability to consider spatial hierarchies between entities within an image. Recently, the Dynamic Routing between Capsules architecture (CapsNet) has been proposed to overcome such limitations. This work proposes a new architecture that combines convolutional blocks with a customized CapsNet architecture, allowing for the extraction of richer abstract features. This architecture uses high-quality 299×299×3 skin lesion images, and the main hyperparameters are tuned in order to ensure effective learning under limited training data. An extensive experimental study on eleven image datasets was conducted, in which the proposal significantly outperformed several state-of-the-art models. Finally, predictions made by the model were validated through the application of two modern model-agnostic interpretation tools.
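
    The fusion described above (convolutional blocks feeding a capsule layer with dynamic routing) can be outlined roughly as follows. This PyTorch sketch is illustrative only: the layer sizes, the assumed 64×64 input, and the two-class output are placeholders and do not reproduce the paper's architecture or hyperparameters.

        # Minimal sketch: convolutional blocks + capsules with routing-by-agreement.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def squash(s, dim=-1, eps=1e-8):
            # Non-linearity that scales capsule vectors to length in [0, 1).
            norm2 = (s ** 2).sum(dim=dim, keepdim=True)
            return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

        class ConvCapsNet(nn.Module):
            # Conv blocks, then primary capsules, then class capsules coupled
            # through dynamic routing (sizes assume 64x64x3 inputs).
            def __init__(self, n_classes=2, n_primary=32 * 4 * 4, prim_dim=8, out_dim=16):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.primary = nn.Conv2d(128, 32 * prim_dim, kernel_size=9, stride=2)
                # One transformation matrix per (primary capsule, class capsule) pair.
                self.W = nn.Parameter(0.01 * torch.randn(1, n_primary, n_classes, out_dim, prim_dim))
                self.n_classes, self.prim_dim = n_classes, prim_dim

            def forward(self, x, routing_iters=3):
                u = self.primary(self.features(x))                       # (B, 32*8, 4, 4)
                B = u.size(0)
                u = u.view(B, 32, self.prim_dim, -1).permute(0, 1, 3, 2)
                u = squash(u.reshape(B, -1, self.prim_dim))              # (B, N, 8)
                u_hat = (self.W @ u.unsqueeze(2).unsqueeze(-1)).squeeze(-1)  # (B, N, C, 16)
                b = torch.zeros(B, u_hat.size(1), self.n_classes, device=x.device)
                for _ in range(routing_iters):                           # dynamic routing
                    c = F.softmax(b, dim=-1).unsqueeze(-1)               # coupling coefficients
                    v = squash((c * u_hat).sum(dim=1))                   # (B, C, 16) class capsules
                    b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)         # agreement update
                return v.norm(dim=-1)                                    # capsule lengths ~ class scores

        scores = ConvCapsNet()(torch.rand(2, 3, 64, 64))   # -> tensor of shape (2, 2)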

    Physics-guided machine learning approaches to predict stability properties of fusion plasmas

    Disruption prediction and avoidance is a critical need for next-step tokamaks such as the International Thermonuclear Experimental Reactor (ITER). The Disruption Event Characterization and Forecasting Code (DECAF) is a framework used to fully determine chains of events, such as magnetohydrodynamic (MHD) instabilities, that can lead to disruptions. In this thesis, several interpretable, physics-guided machine learning (ML) techniques to forecast the onset of resistive wall modes (RWM) in spherical tokamaks have been developed and incorporated into DECAF. The new DECAF model operates in a multi-step fashion by analysing the ideal stability properties and then by including kinetic effects on RWM stability. First, a random forest regressor (RFR) and a neural network (NN) ensemble are employed to reproduce the change in plasma potential energy without wall effects, δW_no-wall, computed by the DCON ideal stability code for a large database of equilibria from the National Spherical Torus Experiment (NSTX). Moreover, outputs from the ML models are reduced and manipulated to obtain an estimate of the no-wall β limit, β_no-wall (where β is the ratio of plasma pressure to magnetic confinement field pressure). This exercise shows that the ML models are able to improve the previous DECAF characterisation of stable and unstable equilibria and achieve accuracies of 85-88%, depending on the chosen level of interpretability. The physics guidance imposed on the NN objective function allowed for transferability outside the training domain, as demonstrated by testing the algorithm on discharges from the Mega Ampere Spherical Tokamak (MAST). The estimated β_no-wall and other important plasma characteristics, such as rotation, collisionality and low-frequency MHD activity, are used as input to a customised random forest (RF) classifier to predict RWM stability for a set of human-labeled NSTX discharges. The proposed approach is real-time compatible and outperforms classical cost-sensitive methods by achieving a true positive rate (TPR) of up to 90%, while also resulting in a threefold reduction in training time. Finally, a model-agnostic method based on counterfactual explanations is developed in order to further understand the model's predictions. Good agreement is found between the model's decisions and the rules imposed by physics expectations. These results also motivate the use of counterfactuals to simulate real-time control by generating the β_N levels that would keep the RWM stable.
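
    A rough sketch of the final classification step described above (a cost-sensitive random forest predicting RWM stability from plasma features such as the estimated β_no-wall, rotation, and collisionality) is given below. The feature values and labels are synthetic placeholders, not NSTX data, and the class weights are illustrative; the thesis' actual pipeline is considerably more involved.

        # Minimal sketch of a cost-sensitive RWM stability classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import recall_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        X = np.column_stack([
            rng.normal(4.0, 1.0, n),   # estimated beta_no-wall (placeholder)
            rng.normal(10.0, 3.0, n),  # plasma rotation (placeholder)
            rng.normal(1.0, 0.3, n),   # collisionality (placeholder)
        ])
        # Synthetic label: "unstable" when beta is high and rotation is low.
        y = ((X[:, 0] > 4.5) & (X[:, 1] < 9.0)).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

        # class_weight penalizes missed unstable cases more heavily, mimicking a
        # cost-sensitive formulation that favours a high true positive rate.
        clf = RandomForestClassifier(
            n_estimators=200, class_weight={0: 1, 1: 5}, random_state=0
        ).fit(X_tr, y_tr)

        tpr = recall_score(y_te, clf.predict(X_te))   # true positive rate on unstable class
        print(f"TPR: {tpr:.2f}")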

    Risk of Unintended Pregnancy in Latina Young Adults: The Effect of Gender Role Beliefs, Acculturation, and Depression

    This study investigated the effect of Latina gender role beliefs, or marianismo beliefs, on risk for unintended pregnancy by examining contraceptive method use in Latina young adults. Acculturation and depression were also examined as moderators of the association between marianismo and contraceptive method choice, as well as separately for their effects on contraceptive use. Unmarried, nulliparous Latina women aged 18-24 (N = 142) were recruited through online social media platforms. Data were collected in the United States in July 2017. Logistic regression analyses were performed to distinguish between women who utilized more effective vs. less effective contraceptive methods in the past three months. Results indicated that Negative Marianismo Beliefs, including beliefs associated with virtuosity, subordination to men, and self-silencing, demonstrated a trend toward association with less effective contraceptive use. Positive Marianismo Beliefs, including beliefs associated with family and spiritual leadership, and marianismo beliefs overall, were not associated with contraceptive use. Acculturation was not associated with contraceptive use in logistic regression analyses; however, non-US birthplace showed a marginal correlation with less effective condom use specifically. Young women with depression in the present study were not more likely to use less effective contraception. A trend-level interaction with depression suggested that women were likely to use contraception less effectively if they reported low levels of depression and high levels of Negative Marianismo Beliefs. Those reporting high levels of depression were no more likely to use contraception effectively whether they had low or high levels of Negative Marianismo Beliefs. This study provides preliminary evidence that traditional gender role beliefs may impact Latina young women’s risk for unintended pregnancy. The findings also highlight the importance of cultural beliefs and values to sexual health outcomes.