
    Explainable AI: A review of applications to neuroimaging data

    Deep neural networks (DNNs) have transformed the field of computer vision and currently constitute some of the best models for representations learned via hierarchical processing in the human brain. In medical imaging, these models have achieved human-level or even superior performance in the early diagnosis of a wide range of diseases. However, the goal is often not only to accurately predict group membership or diagnosis but also to provide explanations that support the model decision in a context that a human can readily interpret. This limited transparency has hindered the adoption of DNN algorithms across many domains. Numerous explainable artificial intelligence (XAI) techniques have been developed to peer inside the “black box” and make sense of DNN models, taking somewhat divergent approaches. Here, we suggest that these methods may be considered in light of the interpretation goal, including functional or mechanistic interpretations, developing archetypal class instances, or assessing the relevance of certain features or mappings on a trained model in a post-hoc capacity. We then focus on reviewing recent applications of post-hoc relevance techniques to neuroimaging data. Moreover, this article suggests a method for comparing the reliability of XAI methods, especially in deep neural networks, along with their advantages and pitfalls.
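    To make the notion of post-hoc relevance concrete, the sketch below computes a minimal gradient-based relevance map for a 3D volume in PyTorch. The tiny untrained CNN, the binary patient-vs-control output, and the input size are illustrative assumptions rather than any model discussed in the review; any differentiable classifier trained on neuroimaging data could take their place.

        import torch
        import torch.nn as nn

        # Stand-in 3D CNN classifier (an assumption for illustration only;
        # a real application would use a trained neuroimaging model).
        model = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(8, 2),          # e.g. patient vs. control
        )
        model.eval()

        # Single-channel MRI-like volume: (batch, channel, D, H, W).
        volume = torch.randn(1, 1, 32, 32, 32, requires_grad=True)

        logits = model(volume)
        target = logits.argmax(dim=1).item()

        # Post-hoc relevance via input gradients: backpropagate the target
        # logit and take the absolute gradient as a per-voxel relevance score.
        logits[0, target].backward()
        relevance = volume.grad.abs().squeeze()   # shape (32, 32, 32)

    The resulting per-voxel scores can be thresholded and overlaid on the original scan, which is the typical way such relevance maps are presented alongside a model's prediction.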

    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have achieved ever better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable for humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community accelerate this process, promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.

    An investigation into explanations for convolutional neural networks

    As deep learning techniques have become more prevalent in computer vision, the need to explain these so-called ‘black boxes’ has increased. Indeed, these techniques are now being developed and deployed in such sensitive areas as medical imaging, autonomous vehicles, and security applications. Being able to create reliable explanations of their operations is therefore essential. For images, a common method for explaining the predictions of a convolutional neural network is to highlight the regions of an input image that are deemed important. Many techniques have been proposed; however, these are often constrained to produce an explanation with a certain level of coarseness: explanations can be created that either score individual pixels or score large regions of an image as a whole, and it is difficult to create an explanation with a level of coarseness that falls in between these two. A potentially even greater problem is that none of these explanation techniques have been designed to explain what happens when a network fails to obtain the correct prediction. In these instances, current explanation techniques are not useful. In this thesis, we propose two novel techniques that are able to efficiently create explanations that are neither too fine nor too coarse. The first of these techniques uses superpixels weighted with gradients to create explanations of any desired coarseness (within computational constraints). We show that we are able to produce explanations efficiently and with higher accuracy than comparable existing methods. In addition, we find that our technique can be used in conjunction with existing techniques such as LIME to improve their accuracy. This is subsequently shown to generalise well for use in networks that use video as an input. The second of these techniques creates multiple explanations from a rescaled input image to allow finer features to be found. We show this performs much better than comparable techniques in both accuracy and weak-localisation metrics. With this technique, we also show that faithfulness, a commonly used metric, is flawed, and recommend that its use be discontinued. Finally, we propose a third novel technique to address the issue of explaining failure using the concepts of surprise and expectation. By building an understanding of how a model has learnt to represent the training data, we can begin to explore the reasons for failure. Using this technique, we show that we can highlight regions in the image that have caused failure, explore features that may be missing from a misclassified image, and provide an insightful method to explore an unseen portion of a dataset.
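    The general idea of a superpixel-weighted gradient explanation can be illustrated with a short sketch: segment the image into superpixels, compute a per-pixel gradient saliency, and average the saliency within each superpixel so that the segment count controls the coarseness of the explanation. The untrained CNN, the SLIC segmentation, and the segment count below are assumptions for illustration, not the thesis's exact implementation.

        import numpy as np
        import torch
        import torch.nn as nn
        from skimage.segmentation import slic

        # Stand-in image classifier (an assumption for illustration only).
        model = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
        )
        model.eval()

        image = np.random.rand(64, 64, 3).astype(np.float32)      # H x W x C in [0, 1]
        segments = slic(image, n_segments=100, compactness=10)    # coarseness knob

        # Per-pixel gradient saliency for the predicted class.
        x = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).requires_grad_(True)
        logits = model(x)
        logits[0, logits.argmax(dim=1).item()].backward()
        grad_mag = x.grad.abs().sum(dim=1).squeeze().numpy()      # per-pixel gradient magnitude

        # Average the gradient magnitude within each superpixel to get a region
        # score, then paint it back onto the pixels to form the explanation map.
        explanation = np.zeros_like(grad_mag)
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            explanation[mask] = grad_mag[mask].mean()

    Raising or lowering n_segments moves the explanation between near-pixel-level and coarse region-level detail, which is the in-between coarseness the abstract refers to.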

    Algorithmic Reason

    Are algorithms ruling the world today? Is artificial intelligence making life-and-death decisions? Are social media companies able to manipulate elections? As we are confronted with public and academic anxieties about unprecedented changes, this book offers a different analytical prism to investigate these transformations as more mundane and fraught. Aradau and Blanke develop conceptual and methodological tools to understand how algorithmic operations shape the government of self and other. While dispersed and messy, these operations are held together by an ascendant algorithmic reason. Through a global perspective on algorithmic operations, the book helps us understand how algorithmic reason redraws boundaries and reconfigures differences. The book explores the emergence of algorithmic reason through rationalities, materializations, and interventions. It traces how algorithmic rationalities of decomposition, recomposition, and partitioning are materialized in the construction of dangerous others, the power of platforms, and the production of economic value. The book shows how political interventions to make algorithms governable encounter friction, refusal, and resistance. The theoretical perspective on algorithmic reason is developed through qualitative and digital methods to investigate scenes and controversies that range from mass surveillance and the Cambridge Analytica scandal in the UK to predictive policing in the US, and from the use of facial recognition in China and drone targeting in Pakistan to the regulation of hate speech in Germany. Algorithmic Reason offers an alternative to dystopia and despair through a transdisciplinary approach made possible by the authors’ backgrounds, which span the humanities, social sciences, and computer sciences.