Interpretability and Explainability: A Machine Learning Zoo Mini-tour
In this review, we examine the problem of designing interpretable and
explainable machine learning models. Interpretability and explainability lie at
the core of many machine learning and statistical applications in medicine,
economics, law, and natural sciences. Although interpretability and
explainability have escaped a clear universal definition, many techniques
motivated by these properties have been developed over the past 30 years, with
the focus currently shifting towards deep learning methods. In this review, we
emphasise the divide between interpretability and explainability and illustrate
these two different research directions with concrete examples of the
state-of-the-art. The review is intended for a general machine learning
audience with interest in exploring the problems of interpretation and
explanation beyond logistic regression or random forest variable importance.
This work is not an exhaustive literature survey, but rather a primer focusing
selectively on certain lines of research which the authors found interesting or
informative.
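For orientation, the two baselines named above are straightforward to reproduce; a minimal scikit-learn sketch (the dataset and model settings are illustrative choices, not taken from the review):

```python
# Sketch of the two baseline interpretability tools named above:
# logistic-regression coefficients and random-forest variable importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Interpretable-by-design: standardized coefficients of a linear model.
logreg = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
logreg.fit(X, y)
coefs = logreg[-1].coef_.ravel()

# Post-hoc summary: impurity-based variable importance of a random forest.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importances = rf.feature_importances_

print("largest |coefficient| at feature:", abs(coefs).argmax())
print("largest RF importance at feature:", importances.argmax())
```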
Model Interpretation and Explainability: Towards Creating Transparency in Prediction Models
Explainable AI (XAI) has a counterpart in analytical modeling which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company and proceed in three stages: we execute and compare four different prediction methods; apply the best-known explainability techniques from the current literature to the model training sets to identify feature importance (FI) (the static case); and finally cross-check whether the FI set holds up under "what if" prediction scenarios for continuous and categorical variables (the dynamic case). We found inconsistencies in FI identification between the static and dynamic cases. We summarize the "state of the art" in model explainability and suggest further research to advance the field.
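A minimal sketch of the static-versus-dynamic consistency check described above, assuming a generic scikit-learn setup; the synthetic data, model choice, and one-standard-deviation perturbation stand in for the paper's credit-card loan data and exact pipeline:

```python
# Sketch: compare "static" feature importance with a "dynamic" what-if probe.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Static case: permutation importance on held-out data.
static_fi = permutation_importance(model, X_te, y_te, n_repeats=10,
                                   random_state=0).importances_mean

# Dynamic case: shift each feature by +1 std and measure the mean change
# in predicted probability (a simple "what if" scenario).
base = model.predict_proba(X_te)[:, 1]
dynamic_fi = np.empty(X_te.shape[1])
for j in range(X_te.shape[1]):
    X_mod = X_te.copy()
    X_mod[:, j] += X_te[:, j].std()
    dynamic_fi[j] = np.abs(model.predict_proba(X_mod)[:, 1] - base).mean()

# Inconsistency check: do the two views rank the features the same way?
print("static ranking :", np.argsort(-static_fi))
print("dynamic ranking:", np.argsort(-dynamic_fi))
```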
Machine learning in the social and health sciences
The uptake of machine learning (ML) approaches in the social and health
sciences has been rather slow, and research using ML for social and health
research questions remains fragmented. This may be due to the separate
development of research in the computational/data versus social and health
sciences, as well as a lack of accessible overviews and adequate training in ML
techniques for non-data-science researchers. This paper provides a meta-mapping
of research questions in the social and health sciences to appropriate ML
approaches, by incorporating the requirements of statistical analysis in
these disciplines. We map the established classification into description,
prediction, and causal inference onto common research goals, such as estimating
prevalence of adverse health or social outcomes, predicting the risk of an
event, and identifying risk factors or causes of adverse outcomes. This
meta-mapping aims at overcoming disciplinary barriers and starting a fluid
dialogue between researchers from the social and health sciences and
methodologically trained researchers. Such a mapping may also help to fully
exploit the benefits of ML while considering domain-specific aspects relevant
to the social and health sciences, and hopefully contribute to accelerating
the uptake of ML applications to advance both basic and applied social and
health sciences research.
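A toy illustration of the description / prediction / causal-inference triad that the paper maps research questions onto (the synthetic data, variable names, and effect sizes are invented for illustration):

```python
# Description, prediction, and causal inference on one synthetic dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
exposure = rng.binomial(1, 0.3, n)                    # e.g. a binary risk factor
confounder = rng.normal(size=n)                       # e.g. standardized age
logit = -2 + 1.0 * exposure + 0.5 * confounder
outcome = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # adverse outcome

# Description: estimate the prevalence of the adverse outcome.
print("prevalence:", outcome.mean())

# Prediction: an individual-level risk model, judged by predictive accuracy.
X = np.column_stack([exposure, confounder])
risk_model = LogisticRegression().fit(X, outcome)
print("predicted risks (first 3):", risk_model.predict_proba(X)[:3, 1])

# Causal inference: the confounder-adjusted log-odds ratio of the exposure
# (a causal effect only under no-unmeasured-confounding assumptions).
print("adjusted log-odds ratio:", risk_model.coef_[0][0])
```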
A Survey of Neural Trees
Neural networks (NNs) and decision trees (DTs) are both popular machine
learning models, yet they come with mutually exclusive advantages and
limitations. To bring together the best of the two worlds, a variety of
approaches have been proposed to integrate NNs and DTs explicitly or
implicitly. In this survey, these approaches are organized into a school of
methods that we term neural trees (NTs). This survey aims to present a
comprehensive review of NTs and attempts to identify how they enhance model
interpretability. We first propose a
thorough taxonomy of NTs that expresses the gradual integration and
co-evolution of NNs and DTs. Afterward, we analyze NTs in terms of their
interpretability and performance, and suggest possible solutions to the
remaining challenges. Finally, this survey concludes with a discussion about
other considerations such as conditional computation and promising directions
for this field. A list of papers reviewed in this survey, along with their
corresponding codes, is available at:
https://github.com/zju-vipa/awesome-neural-trees
Comment: 35 pages, 7 figures and 1 table
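As one concrete instance of the NN/DT hybrids surveyed here, a soft decision tree routes inputs through sigmoid gates and mixes leaf distributions; a minimal forward-pass sketch with random, untrained parameters (one family among many covered by the survey):

```python
# Sketch of a depth-2 soft decision tree, one family of "neural trees":
# internal nodes are sigmoid gates, leaves hold class distributions, and a
# prediction is the gate-weighted mixture of leaves.
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, depth = 4, 3, 2
n_inner, n_leaves = 2**depth - 1, 2**depth

W = rng.normal(size=(n_inner, d))        # gate weights, one per inner node
b = rng.normal(size=n_inner)             # gate biases
leaves = rng.dirichlet(np.ones(n_classes), size=n_leaves)  # leaf distributions

def soft_tree_predict(x):
    """Return the mixture of leaf distributions for a single input x."""
    gates = 1 / (1 + np.exp(-(W @ x + b)))    # P(go right) at each inner node
    probs = np.zeros(n_classes)
    for leaf in range(n_leaves):
        path_prob, node = 1.0, 0
        for level in reversed(range(depth)):
            right = (leaf >> level) & 1       # leaf's bits encode its path
            path_prob *= gates[node] if right else 1 - gates[node]
            node = 2 * node + 1 + right       # descend the implicit binary tree
        probs += path_prob * leaves[leaf]
    return probs

print(soft_tree_predict(rng.normal(size=d)))  # sums to 1 across classes
```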
Leveraging Explanations in Interactive Machine Learning: An Overview
Explanations have gained an increasing level of interest in the AI and
Machine Learning (ML) communities in order to improve model transparency and
allow users to form a mental model of a trained ML model. However, explanations
can go beyond this one-way communication as a mechanism to elicit user control,
because once users understand, they can then provide feedback. The goal of this
paper is to present an overview of research where explanations are combined
with interactive capabilities as a means to learn new models from scratch and to
edit and debug existing ones. To this end, we draw a conceptual map of the
state-of-the-art, grouping relevant approaches based on their intended purpose
and on how they structure the interaction, highlighting similarities and
differences between them. We also discuss open research issues and outline
possible directions forward, with the hope of spurring further research on this
blossoming topic.
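To make the explain-then-correct loop concrete, one recurring pattern in this literature lets users veto features that an explanation exposes as spurious; a minimal sketch in which the user feedback is simulated and feature removal is just one possible correction mechanism:

```python
# Sketch of an explanation-driven feedback loop: show the model's feature
# importances, let the "user" flag a spurious feature, and retrain without it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X[:, 5] = y + np.random.default_rng(0).normal(scale=0.1, size=len(y))  # leak

model = LogisticRegression(max_iter=1000).fit(X, y)

# Explanation step: surface coefficients as a (crude) global explanation.
importance = np.abs(model.coef_.ravel())
print("top feature:", importance.argmax())          # the leaked feature 5

# Feedback step: the user vetoes the feature the explanation exposed.
vetoed = {5}                                        # simulated user input
keep = [j for j in range(X.shape[1]) if j not in vetoed]

# Correction step: retrain on the remaining features only.
corrected = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
print("corrected model uses features:", keep)
```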
- …