Multimodal Machine Learning for Automated ICD Coding
This study presents a multimodal machine learning model to predict ICD-10
diagnostic codes. We developed separate machine learning models that can handle
data from different modalities, including unstructured text, semi-structured
text and structured tabular data. We further employed an ensemble method to
integrate all modality-specific models to generate ICD-10 codes. Key evidence
was also extracted to make our prediction more convincing and explainable. We
used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset
to validate our approach. For ICD code prediction, our best-performing model
(micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms other
baseline models including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and
Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability,
our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text
data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780
and 0.5002, respectively. Comment: Machine Learning for Healthcare 201
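The two evaluation metrics quoted above — micro-F1 for code prediction and the Jaccard Similarity Coefficient for the overlap between extracted evidence and physician-selected evidence — can be sketched in a few lines. The label matrices and evidence sets below are toy stand-ins, not the MIMIC-III setup.

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over a binary multi-label matrix (samples x codes):
    pool true/false positives and negatives across all codes, then score."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def jaccard(evidence_a, evidence_b):
    """Jaccard Similarity Coefficient: |A ∩ B| / |A ∪ B| of two evidence sets."""
    a, b = set(evidence_a), set(evidence_b)
    return len(a & b) / len(a | b)

# Toy example: 3 notes, 4 ICD codes (hypothetical data)
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 1]])
print(micro_f1(y_true, y_pred))   # tp=5, fp=1, fn=1 -> 0.8333...
print(jaccard({"fever", "cough"}, {"cough", "sepsis"}))  # 1/3
```

Micro-averaging pools counts across all codes before computing F1, so frequent codes dominate the score — a common choice for the highly imbalanced ICD label space.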
A Robust Interpretable Deep Learning Classifier for Heart Anomaly Detection Without Segmentation
Traditionally, abnormal heart sound classification is framed as a three-stage
process. The first stage involves segmenting the phonocardiogram to detect
fundamental heart sounds; after which features are extracted and classification
is performed. Some researchers in the field argue the segmentation step is an
unwanted computational burden, whereas others embrace it as a prior step to
feature extraction. When comparing accuracies achieved by studies that have
segmented heart sounds before analysis with those who have overlooked that
step, the question of whether to segment heart sounds before feature extraction
is still open. In this study, we explicitly examine the importance of heart
sound segmentation as a prior step for heart sound classification, and then
seek to apply the obtained insights to propose a robust classifier for abnormal
heart sound detection. Furthermore, recognizing the pressing need for
explainable Artificial Intelligence (AI) models in the medical domain, we also
unveil hidden representations learned by the classifier using model
interpretation techniques. Experimental results demonstrate that the
segmentation plays an essential role in abnormal heart sound classification.
Our new classifier is also shown to be robust, stable and most importantly,
explainable, with an accuracy of almost 100% on the widely used PhysioNet
dataset.
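The three-stage pipeline described above can be sketched end to end. The envelope thresholding and per-segment features below are illustrative stand-ins, not the paper's method, applied to a synthetic signal with two tone bursts in place of real S1/S2 sounds.

```python
import numpy as np

def envelope(signal, win=64):
    """Crude energy envelope: moving average of the squared signal."""
    kernel = np.ones(win) / win
    return np.convolve(signal ** 2, kernel, mode="same")

def segment(signal, thresh_ratio=0.5):
    """Stage 1 (sketch): split the signal wherever the envelope crosses a
    threshold, as a stand-in for fundamental heart sound detection."""
    env = envelope(signal)
    mask = env > thresh_ratio * env.max()
    edges = np.flatnonzero(np.diff(mask.astype(int))) + 1
    return np.split(signal, edges)

def extract_features(segments):
    """Stage 2 (sketch): per-segment duration (samples) and RMS energy."""
    return np.array([[len(s), np.sqrt(np.mean(s ** 2))]
                     for s in segments if len(s) > 0])

# Synthetic "phonocardiogram": two 50 Hz bursts standing in for S1 and S2.
fs = 1000
t = np.linspace(0, 1, fs, endpoint=False)
pcg = np.sin(2 * np.pi * 50 * t) * (np.exp(-((t - 0.2) ** 2) / 0.001)
                                    + np.exp(-((t - 0.6) ** 2) / 0.001))
segs = segment(pcg)
feats = extract_features(segs)
# Stage 3 would feed `feats` (or learned representations) to any classifier.
print(len(segs), feats.shape)
```

The study's question is whether stage 1 can be dropped; in this toy version, skipping `segment` would mean computing features over the whole recording instead of per heart sound.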
The Intuitive Appeal of Explainable Machines
Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
Towards Clinical Prediction with Transparency: An Explainable AI Approach to Survival Modelling in Residential Aged Care
Background: Accurate survival time estimates aid end-of-life medical
decision-making. Objectives: Develop an interpretable survival model for
elderly residential aged care residents using advanced machine learning.
Setting: A major Australasian residential aged care provider. Participants:
Residents aged 65+ admitted for long-term care from July 2017 to August 2023.
Sample size: 11,944 residents across 40 facilities. Predictors: Factors include
age, gender, health status, co-morbidities, cognitive function, mood,
nutrition, mobility, smoking, sleep, skin integrity, and continence. Outcome:
Probability of survival post-admission, specifically calibrated for 6-month
survival estimates. Statistical Analysis: Tested Cox proportional hazards (CoxPH), Elastic Net (EN), Ridge Regression (RR), Lasso, Gradient Boosting (GB), XGBoost (XGB), and Random Forest (RF) models in 20 experiments with a 90/10 train/test split. Evaluated accuracy using the C-index, Harrell's C-index, dynamic AUROC, the Integrated Brier Score (IBS), and calibrated ROC. Chose XGB for its performance and calibrated it for 1-, 3-, 6-, and 12-month
predictions using Platt scaling. Employed SHAP values to analyze predictor
impacts. Results: GB, XGB, and RF models showed the highest C-Index values
(0.714, 0.712, 0.712). The optimal XGB model demonstrated a 6-month survival
prediction AUROC of 0.746 (95% CI 0.744-0.749). Key mortality predictors
include age, male gender, mobility, health status, pressure ulcer risk, and
appetite. Conclusions: The study successfully applies machine learning to
create a survival model for aged care, aligning with clinical insights on
mortality risk factors and enhancing model interpretability and clinical
utility through explainable AI.
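Harrell's C-index, the headline metric reported above, rewards a model for ranking residents who die earlier as higher risk: it is the fraction of comparable pairs ordered correctly. A minimal sketch on a hypothetical five-resident cohort:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's concordance index: fraction of comparable pairs in which
    the subject who died earlier was assigned the higher predicted risk.
    Ties in predicted risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable only if subject i is observed to
            # die before subject j's recorded time (censored i's excluded).
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: survival time (months), event (1 = death), model risk
time  = np.array([2, 5, 7, 11, 14])
event = np.array([1, 1, 0, 1, 0])
risk  = np.array([0.9, 0.6, 0.7, 0.3, 0.1])
print(harrell_c_index(time, event, risk))  # 7/8 = 0.875
```

A value of 0.5 is chance-level ranking and 1.0 is perfect concordance, which puts the reported 0.712-0.714 range in context as a modest but real discriminative signal.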
A Survey of Multimodal Information Fusion for Smart Healthcare: Mapping the Journey from Data to Wisdom
Multimodal medical data fusion has emerged as a transformative approach in
smart healthcare, enabling a comprehensive understanding of patient health and
personalized treatment plans. In this paper, a journey from data to information
to knowledge to wisdom (DIKW) is explored through multimodal fusion for smart
healthcare. We present a comprehensive review of multimodal medical data fusion
focused on the integration of various data modalities. The review explores
different approaches such as feature selection, rule-based systems, machine
learning, deep learning, and natural language processing, for fusing and
analyzing multimodal data. This paper also highlights the challenges associated
with multimodal fusion in healthcare. By synthesizing the reviewed frameworks
and theories, it proposes a generic framework for multimodal medical data
fusion that aligns with the DIKW model. Moreover, it discusses future
directions related to the four pillars of healthcare: Predictive, Preventive,
Personalized, and Participatory approaches. The components of the comprehensive
survey presented in this paper form the foundation for more successful
implementation of multimodal fusion in smart healthcare. Our findings can guide
researchers and practitioners in leveraging the power of multimodal fusion with
the state-of-the-art approaches to revolutionize healthcare and improve patient
outcomes. Comment: This work has been submitted to Elsevier for possible
publication. Copyright may be transferred without notice, after which this
version may no longer be accessible.
A Framework for AI-enabled Proactive mHealth with Automated Decision-making for a User’s Context
Health promotion aims to enable people to take control over their health. Digital health, delivered through mHealth, empowers
users to establish proactive health ubiquitously. Users gain increased control over their health and can
improve their lives by being proactive. To develop proactive health with the principles of prediction, prevention,
and ubiquitous health, artificial intelligence with mHealth can play a pivotal role. There are various challenges
for establishing proactive mHealth. For example, the system must be adaptive and provide timely
interventions by considering the uniqueness of the user. The context of the user is also highly relevant for
proactive mHealth. The context provides parameters as input along with information to formulate the current
state of the user. Automated decision-making is significant when paired with user-level decision-making, as it enables
decisions that promote well-being through technological means without human involvement. This paper presents a
design framework of AI-enabled proactive mHealth that includes automated decision-making with predictive
analytics, Just-in-time adaptive interventions and a P5 approach to mHealth. The significance of user-level
decision-making for automated decision-making is presented. Furthermore, the paper provides a holistic view
of the user's context with profile and characteristics. The paper also discusses the need for multiple parameters
as inputs, and the identification of sources e.g., wearables, sensors, and other resources, with the challenges
in the implementation of the framework. Finally, a proof-of-concept based on the framework provides design
and implementation steps, architecture, goals, and feedback process. The framework provides a basis
for the further development of AI-enabled proactive mHealth.