3 research outputs found

    Transfer Learning Improving Predictive Mortality Models for Patients in End-Stage Renal Disease

    Deep learning is becoming a fundamental piece in the paradigm shift from evidence-based to data-based medicine. However, its learning capacity is rarely exploited when working with small data sets. Through transfer learning (TL), information from a source domain is transferred to a target domain to enhance a learning task in the latter. The proposed TL mechanisms are based on sample and feature space augmentation. Deep autoencoders extract complex representations of the data, and their latent representations, the so-called codes, are handled to transfer information between domains. The transfer of samples is carried out by computing a latent space mapping matrix that links codes from both domains for later reconstruction. The feature space augmentation is based on computing the average of the most similar codes from one domain; this average augments the features in the target domain. The proposed framework is evaluated on the prediction of mortality in patients with end-stage renal disease, transferring information related to the mortality of patients with acute kidney injury from the massive MIMIC-III database. Compared to other TL mechanisms, the proposed approach improves on previous predictive mortality models by 6-11%. Integrating TL approaches into learning tasks for pathologies with data volume issues could encourage the use of data-based medicine in clinical settings.
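    The two TL mechanisms described in the abstract can be sketched in a few lines of numpy. This is a minimal linear stand-in, not the paper's deep implementation: the random arrays stand in for codes produced by pre-trained autoencoders, the assumed pairing between the first source codes and the target codes is illustrative, and all names (`source_codes`, `augment`, etc.) are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for latent codes from two pre-trained autoencoders
    # (e.g. AKI patients as the source, ESRD patients as the target).
    source_codes = rng.normal(size=(100, 16))
    target_codes = rng.normal(size=(30, 16))

    # --- Sample transfer: learn a latent-space mapping matrix M linking
    # codes from both domains, here via least squares on an assumed
    # correspondence between the first 30 source codes and the targets.
    paired_source = source_codes[:30]
    M, *_ = np.linalg.lstsq(paired_source, target_codes, rcond=None)

    # Source codes mapped into the target latent space; decoding them with
    # the target autoencoder would yield reconstructed target-like samples.
    transferred = source_codes @ M

    # --- Feature augmentation: for each target code, average its k most
    # similar source codes and append the average as extra features.
    def augment(target, source, k=5):
        dists = np.linalg.norm(target[:, None, :] - source[None, :, :], axis=2)
        nearest = np.argsort(dists, axis=1)[:, :k]
        avg = source[nearest].mean(axis=1)          # mean of the k neighbours
        return np.concatenate([target, avg], axis=1)

    augmented = augment(target_codes, source_codes)
    print(transferred.shape, augmented.shape)  # (100, 16) (30, 32)
    ```

    In the paper's setting the mapping and the similarity search operate on learned autoencoder codes rather than raw features, but the algebra of the transfer step is the same.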

    A deep learning framework for Hybrid Heterogeneous Transfer Learning

    © 2019 Elsevier B.V. Most previous methods in heterogeneous transfer learning learn a cross-domain feature mapping between different domains based on cross-domain instance correspondences. Such correspondences are assumed to be representative of the source domain and the target domain, respectively. However, in many real-world scenarios this assumption may not hold. As a result, the constructed feature mapping may not be precise, and thus the source-domain labeled data transformed by the feature mapping are not useful for building an accurate classifier in the target domain. In this paper, we propose a new heterogeneous transfer learning framework, named Hybrid Heterogeneous Transfer Learning (HHTL), which allows the selection of corresponding instances across domains to be biased toward the source or the target domain. Our basic idea is that although the corresponding instances are biased in the original feature space, there may exist other feature spaces in which, after projection, the corresponding instances become unbiased and representative of the source domain and the target domain, respectively. With such a representation, a more precise feature mapping across heterogeneous feature spaces can be learned for knowledge transfer. We design several deep-learning-based architectures and algorithms that enable learning aligned representations. Extensive experiments on two multilingual classification datasets verify the effectiveness of the proposed HHTL framework and algorithms compared with state-of-the-art methods.
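    The core step of such a framework, learning a cross-domain feature mapping from instance correspondences and using it to move source-domain labeled data into the target feature space, can be sketched with a ridge-regression mapping as a linear stand-in for the paper's deep architectures. All dimensions and names here are illustrative assumptions, not the paper's actual setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical corresponding instances in two heterogeneous feature
    # spaces (e.g. English and German bag-of-words for the same documents).
    n, d_src, d_tgt = 50, 40, 25
    X_src = rng.normal(size=(n, d_src))
    X_tgt = rng.normal(size=(n, d_tgt))

    # Cross-domain feature mapping G learned from the correspondences via
    # ridge regression: G minimizes ||X_tgt - X_src G||^2 + lam * ||G||^2.
    lam = 1e-2
    G = np.linalg.solve(X_src.T @ X_src + lam * np.eye(d_src), X_src.T @ X_tgt)

    # Transform source-domain labeled data into the target feature space,
    # where a target-domain classifier could then be trained on it.
    X_src_labeled = rng.normal(size=(200, d_src))
    X_transformed = X_src_labeled @ G
    print(X_transformed.shape)  # (200, 25)
    ```

    HHTL's contribution is to learn the representations in which this mapping is fitted, so that biased correspondences become representative; the linear mapping above shows only where that mapping sits in the pipeline.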