Multimodal Machine Learning for Automated ICD Coding
This study presents a multimodal machine learning model to predict ICD-10
diagnostic codes. We developed separate machine learning models that can handle
data from different modalities, including unstructured text, semi-structured
text and structured tabular data. We further employed an ensemble method to
integrate all modality-specific models to generate ICD-10 codes. Key evidence
was also extracted to make our prediction more convincing and explainable. We
used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset
to validate our approach. For ICD code prediction, our best-performing model
(micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms other
baseline models, including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and
a Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability,
our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text
data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780
and 0.5002, respectively.
Comment: Machine Learning for Healthcare 201
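As a rough illustration of the kind of pipeline the abstract describes, the sketch below averages per-code probabilities from two modality-specific models and scores the result with micro-averaged metrics. This is a minimal sketch, not the authors' implementation; the weighting, threshold, and variable names are assumptions.

```python
# Minimal sketch (not the authors' code): combine multi-label ICD-code
# probabilities from modality-specific models, then evaluate with
# micro-averaged F1 and AUC as reported in the abstract.
import numpy as np
from sklearn.metrics import f1_score, roc_auc_score

def ensemble_predict(prob_text, prob_tabular, weights=(0.5, 0.5), threshold=0.5):
    """prob_text, prob_tabular: arrays of shape (n_patients, n_codes).

    Returns averaged per-code probabilities and binary code assignments.
    Equal weights and a 0.5 threshold are illustrative choices only.
    """
    probs = weights[0] * prob_text + weights[1] * prob_tabular
    return probs, (probs >= threshold).astype(int)

# y_true: binary indicator matrix of gold ICD-10 codes, shape (n_patients, n_codes)
# probs, preds = ensemble_predict(prob_text, prob_tabular)
# micro_f1  = f1_score(y_true, preds, average="micro")
# micro_auc = roc_auc_score(y_true, probs, average="micro")
```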
The Grammar of Interactive Explanatory Model Analysis
The growing need for in-depth analysis of predictive models leads to a series
of new methods for explaining their local and global properties. Which of these
methods is the best? It turns out that this is an ill-posed question. One
cannot sufficiently explain a black-box machine learning model using a single
method that gives only one perspective. Isolated explanations are prone to
misunderstanding, which inevitably leads to wrong or simplistic reasoning. This
problem is known as the Rashomon effect and refers to diverse, even
contradictory interpretations of the same phenomenon. Surprisingly, the
majority of methods developed for explainable machine learning focus on a
single aspect of the model behavior. In contrast, we showcase the problem of
explainability as an interactive and sequential analysis of a model. This paper
presents how different Explanatory Model Analysis (EMA) methods complement each
other and why it is essential to juxtapose them together. The introduced
process of Interactive EMA (IEMA) derives from the algorithmic side of
explainable machine learning and aims to embrace ideas developed in cognitive
sciences. We formalize the grammar of IEMA to describe potential human-model
dialogues. IEMA is implemented in a human-centered framework that adopts
interactivity, customizability and automation as its main traits. Combined,
these methods enhance a responsible approach to predictive modeling.
Comment: 17 pages, 10 figures, 3 tables
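To make the idea of juxtaposing complementary explanations concrete, the generic sketch below pairs a global view (permutation importance) with a simple occlusion-style local attribution for one instance. It is not the paper's framework; the dataset, model, and the local-attribution heuristic are assumptions made for illustration.

```python
# Generic sketch: two complementary explanation perspectives on one model,
# in the spirit of sequential Explanatory Model Analysis.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global view: which features matter on average across the dataset?
global_imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)

# Local view: how much does each feature drive the prediction for one instance?
# Here each feature is replaced by its dataset mean and the probability shift
# is recorded (a naive occlusion-style attribution, for illustration only).
x = X[0:1]
base = model.predict_proba(x)[0, 1]
local_contrib = {}
for j in range(X.shape[1]):
    x_mod = x.copy()
    x_mod[0, j] = X[:, j].mean()
    local_contrib[j] = base - model.predict_proba(x_mod)[0, 1]

# Reading the global ranking next to the instance-level drivers is the kind of
# juxtaposition a single isolated method cannot provide.
```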
An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data
The current gold standard for human activity recognition (HAR) is based on
the use of cameras. However, the poor scalability of camera systems renders
them impractical in pursuit of the goal of wider adoption of HAR in mobile
computing contexts. Consequently, researchers instead rely on wearable sensors
and in particular inertial sensors. A particularly prevalent wearable is the
smartwatch, which, owing to its integrated inertial and optical sensing
capabilities, holds great potential for realising better HAR in a non-obtrusive
way. This paper seeks to simplify the wearable approach to HAR by determining
whether the wrist-mounted optical sensor alone, typically found in a smartwatch
or similar device, can serve as a useful source of data for activity
recognition. The approach has the potential to eliminate the need for the
inertial sensing element, which would in turn reduce the cost and complexity of
smartwatches and fitness trackers. This could potentially
commoditise the hardware requirements for HAR while retaining the functionality
of both heart rate monitoring and activity capture all from a single optical
sensor. Our approach relies on the adoption of machine vision for activity
recognition based on suitably scaled plots of the optical signals. We take this
approach so as to produce classifications that are easily explainable and
interpretable by non-technical users. More specifically, images of
photoplethysmography signal time series are used to retrain the penultimate
layer of a convolutional neural network which has initially been trained on the
ImageNet database. We then use the 2048 dimensional features from the
penultimate layer as input to a support vector machine. Results from the
experiment yielded an average classification accuracy of 92.3%. This result
outperforms that of an optical and inertial sensor combined (78%) and
illustrates the capability of HAR systems using...
Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
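The pipeline described (ImageNet-pretrained CNN features from the penultimate layer fed to an SVM) can be sketched roughly as below. The choice of a Keras ResNet50 backbone, the image size, and the data loading are assumptions; the paper specifies only a CNN pretrained on ImageNet with 2048-dimensional penultimate features.

```python
# Minimal sketch, assuming a ResNet50 backbone: extract 2048-d features from
# rendered PPG time-series plots and classify activities with an SVM.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.svm import SVC

# Pretrained backbone without the classification head; global average pooling
# yields a 2048-dimensional feature vector per input image.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_features(ppg_plot_images):
    """ppg_plot_images: array of shape (n, 224, 224, 3) holding plots of the
    photoplethysmography signal rendered as images."""
    return backbone.predict(preprocess_input(ppg_plot_images.astype("float32")))

# features = extract_features(train_images)            # shape (n, 2048)
# clf = SVC(kernel="rbf").fit(features, train_labels)  # activity classifier
# accuracy = clf.score(extract_features(test_images), test_labels)
```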
