Learning Tasks for Multitask Learning: Heterogeneous Patient Populations in the ICU
Machine learning approaches have been effective in predicting adverse
outcomes in different clinical settings. These models are often developed and
evaluated on datasets with heterogeneous patient populations. However, good
predictive performance on the aggregate population does not imply good
performance for specific groups.
In this work, we present a two-step framework to 1) learn relevant patient
subgroups, and 2) predict an outcome for separate patient populations in a
multi-task framework, where each population is a separate task. We demonstrate
how to discover relevant groups in an unsupervised way with a
sequence-to-sequence autoencoder. We show that using these groups in a
multi-task framework leads to better predictive performance of in-hospital
mortality both across groups and overall. We also highlight the need for more
granular evaluation of performance when dealing with heterogeneous populations.
Comment: KDD 201
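The two-step recipe above, discovering subgroups and then treating each subgroup as its own prediction task, can be sketched minimally. The paper learns patient embeddings with a sequence-to-sequence autoencoder; in the hypothetical sketch below, a random matrix stands in for those embeddings, plain k-means stands in for the subgroup-discovery step, and one small logistic-regression model per group stands in for the multi-task predictor. Data, labels and hyperparameters are all toy assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for learned patient embeddings; in the paper these come
# from a sequence-to-sequence autoencoder over clinical time series.
X = rng.normal(size=(300, 8))
# Toy in-hospital mortality label driven by the first embedding dimension.
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)

def kmeans(X, k, iters=50):
    """Step 1: discover patient subgroups in embedding space."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def fit_logreg(X, y, lr=0.1, steps=500):
    """Step 2: a tiny gradient-descent logistic regression per subgroup (task)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

groups = kmeans(X, k=3)
models = {g: fit_logreg(X[groups == g], y[groups == g]) for g in np.unique(groups)}

# Per-group accuracy: the granular evaluation the abstract argues for.
for g, w in models.items():
    pred = (X[groups == g] @ w) > 0
    print(g, (pred == y[groups == g]).mean())
```

In the paper's multi-task setting the tasks share structure rather than being fully independent models; separate per-group classifiers are the simplest stand-in for that idea.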
Unsupervised patient representations from clinical notes with interpretable classification decisions
We make two main contributions in this work: 1. We explore the use of a
stacked denoising autoencoder, and a paragraph vector model to learn
task-independent dense patient representations directly from clinical notes. We
evaluate these representations by using them as features in multiple supervised
setups, and compare their performance with that of sparse representations. 2.
To understand and interpret the representations, we explore the best encoded
features within the patient representations obtained from the autoencoder
model. Further, we calculate the significance of the input features of the
trained classifiers when we use these pretrained representations as input.
Comment: Accepted poster at NIPS 2017 Workshop on Machine Learning for Health
(https://ml4health.github.io/2017/
Representation Learning With Autoencoders For Electronic Health Records
The increasing volume of Electronic Health Records (EHR) in recent years provides great opportunities for data scientists to collaborate on different aspects of healthcare research by applying advanced analytics to EHR clinical data. A key requirement, however, is obtaining meaningful insights from high-dimensional, sparse and complex clinical data. Data science approaches typically address this challenge by performing feature learning, building more reliable and informative feature representations from clinical data, followed by supervised learning. In this research, we propose a predictive modeling approach based on deep feature representations and word embedding techniques. Our method uses different deep architectures (stacked sparse autoencoders, deep belief networks, adversarial autoencoders and variational autoencoders) to learn higher-level feature representations that yield effective and robust features from EHRs, and then builds prediction models on top of them. Our approach is particularly useful when unlabeled data is abundant and labeled data is scarce. We investigate the performance of representation learning through a supervised learning approach. Our focus is to present a comparative study evaluating the performance of different deep architectures through supervised learning and to provide insights for the choice of deep feature representation techniques. Our experiments demonstrate that for small data sets, stacked sparse autoencoders show superior generalization performance in prediction due to sparsity regularization, whereas variational autoencoders outperform the competing approaches for large data sets due to their capability of learning the representation distribution.
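The sparsity regularization credited here for small-data generalization can be illustrated with a one-layer sparse autoencoder: an L1 penalty on the hidden activations pushes most of them toward zero, so each input is explained by a few active units. The sketch below trains the same tiny autoencoder with and without the penalty on a toy binary EHR-style matrix; the data, penalty weight and learning rate are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_autoencoder(X, hidden=10, l1=0.0, steps=400, lr=0.5, seed=2):
    """One-layer autoencoder; l1 > 0 adds an L1 sparsity penalty on hidden units."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    for _ in range(steps):
        H = sigmoid(X @ W1 + b1)
        Xh = sigmoid(H @ W2 + b2)
        dXh = 2 * (Xh - X) / n
        dZ2 = dXh * Xh * (1 - Xh)
        # L1 sparsity gradient: H > 0 everywhere, so d|H|/dH = 1.
        dH = dZ2 @ W2.T + l1
        dZ1 = dH * H * (1 - H)
        W2 -= lr * H.T @ dZ2; b2 -= lr * dZ2.sum(0)
        W1 -= lr * X.T @ dZ1; b1 -= lr * dZ1.sum(0)
    return sigmoid(X @ W1 + b1)

rng = np.random.default_rng(3)
# Small toy binary matrix standing in for a sparse EHR feature matrix.
X = (rng.random((80, 20)) < 0.15).astype(float)

H_plain = train_autoencoder(X, l1=0.0)
H_sparse = train_autoencoder(X, l1=0.01)
# The penalized model's hidden activations sit much closer to zero.
print(H_plain.mean(), H_sparse.mean())
```

With few training examples, this pressure toward mostly-inactive hidden units acts as the regularizer the abstract credits for the stacked sparse autoencoder's small-data advantage.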