
    Improving interpretability and regularization in deep learning

    Deep learning approaches yield state-of-the-art performance in a range of tasks, including automatic speech recognition. However, the highly distributed representations in a deep neural network (DNN) or its variants are difficult to analyze, making parameter interpretation and regularization challenging. This paper presents a regularization scheme that acts on the activation function outputs to improve network interpretability and regularization. The proposed approach, referred to as activation regularization, encourages activation function outputs to satisfy a target pattern. By defining appropriate target patterns, different learning concepts can be imposed on the network. This method can aid network interpretability and also has the potential to reduce overfitting. The scheme is evaluated on several continuous speech recognition tasks: the Wall Street Journal continuous speech recognition task, eight conversational telephone speech tasks from the IARPA Babel program, and a U.S. English broadcast news task. On all the tasks, activation regularization achieved consistent performance gains over the standard DNN baselines.
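    The target-pattern penalty described above can be sketched as a squared-error term added to the task loss. This is a minimal NumPy sketch, not the paper's exact formulation: the array shapes, the zero (sparsity-encouraging) target, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden-layer activations for a mini-batch (shapes illustrative).
activations = rng.uniform(0.0, 1.0, size=(8, 32))   # (batch, hidden units)

# A target pattern encodes the learning concept to impose; a zero target,
# for example, encourages sparse activations.
target = np.zeros_like(activations)

def activation_reg(act, tgt, lam=0.01):
    """Squared-error penalty pulling activation outputs toward a target pattern."""
    return lam * float(np.mean((act - tgt) ** 2))

penalty = activation_reg(activations, target)
# During training this term is simply added to the task loss:
# total_loss = task_loss + penalty
```

    Different choices of `target` impose different learning concepts, which is what makes the activations easier to interpret afterwards.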

    Assessing spatiotemporal correlations from data for short-term traffic prediction using multi-task learning

    Traffic flow prediction is a fundamental problem for efficient transportation control and management. However, most current data-driven traffic prediction work in the literature has focused on predicting traffic from an individual-task perspective, and has not fully leveraged the implicit knowledge present in a road network through space and time correlations. Such correlations are now far easier to isolate thanks to the recent profusion of traffic data sources and, more specifically, their wide geographic spread. In this paper, we take a multi-task learning (MTL) approach whose fundamental aim is to improve generalization performance by leveraging the domain-specific information contained in related tasks that are jointly learned. Another common trait of the literature is that a historical dataset is used for the calibration and assessment of the proposed approach, without dealing in any explicit or implicit way with the frequent challenges of real-time prediction. In contrast, we treat the problem as one of streaming data: the learning procedure is undertaken online, giving greater importance to the most recent data, making data-driven decisions online, and undoing decisions that are no longer optimal. In the experiments presented we obtain more compact and consistent knowledge, in the form of rules automatically extracted from data, while maintaining, or in some cases even improving, performance over single-task learning (STL).
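    The core MTL idea, shared structure updated jointly with task-specific parts, one sample at a time, can be sketched as follows. This is a toy sketch: the linear model, the class name `OnlineMTL`, and the synthetic tasks are illustrative assumptions, not the authors' rule-extraction method.

```python
import numpy as np

class OnlineMTL:
    """Shared + task-specific linear predictors, updated one sample at a time."""

    def __init__(self, n_tasks, d, lr=0.05):
        self.shared = np.zeros(d)            # knowledge common to the road network
        self.task = np.zeros((n_tasks, d))   # per-location refinements
        self.lr = lr

    def predict(self, x, t):
        return float(x @ (self.shared + self.task[t]))

    def update(self, x, y, t):
        """One SGD step; the shared weights receive every task's gradient."""
        err = self.predict(x, t) - y
        self.shared -= self.lr * err * x     # coupling across tasks
        self.task[t] -= self.lr * err * x    # task-specific adjustment
        return err ** 2

rng = np.random.default_rng(1)
model = OnlineMTL(n_tasks=3, d=4)
w_true = rng.normal(size=4)
for step in range(500):
    t = step % 3                 # interleaved stream of related tasks
    x = rng.normal(size=4)
    y = float(x @ w_true)        # here all tasks share the same structure
    model.update(x, y, t)
```

    Because the stream is processed sample by sample, the most recent data naturally dominates the current parameter values, matching the online setting the abstract describes.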

    Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

    The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so that their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than simpler L1 or L2 penalties, without sacrificing predictive power. Comment: To appear in AAAI 2018. Contains 9-page main paper and appendix with supplementary material.
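    A simplified proxy for the idea: measure how well a tiny decision tree (here a single-split stump, three nodes) can mimic the deep model's predictions, and add that mismatch to the loss. All names and the stump proxy are illustrative assumptions; the paper itself penalizes the average decision-path length of the mimicking tree via a differentiable surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(-1.0, 1.0, size=200)
deep_pred = 1.0 / (1.0 + np.exp(-5.0 * x))   # stand-in for a deep model's class probabilities

def stump_fidelity(inputs, probs):
    """MSE of the best 3-node decision stump that mimics the model's predictions."""
    best = np.inf
    for t in inputs:
        left, right = probs[inputs <= t], probs[inputs > t]
        if len(left) == 0 or len(right) == 0:
            continue
        stump = np.where(inputs <= t, left.mean(), right.mean())
        best = min(best, float(np.mean((stump - probs) ** 2)))
    return best

# Tree regularization in spirit: predictions a small tree cannot mimic are penalized.
lam = 1.0
penalty = lam * stump_fidelity(x, deep_pred)
# total_loss = task_loss + penalty  (added during training)
```

    A model whose decision surface is already step-like incurs no penalty, which is exactly the property that makes the trained model easy for a human to simulate with a small tree.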

    Multimodal Machine Learning for Automated ICD Coding

    Full text link
    This study presents a multimodal machine learning model to predict ICD-10 diagnostic codes. We developed separate machine learning models that can handle data from different modalities, including unstructured text, semi-structured text, and structured tabular data. We further employed an ensemble method to integrate all modality-specific models to generate ICD-10 codes. Key evidence was also extracted to make our predictions more convincing and explainable. We used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset to validate our approach. For ICD code prediction, our best-performing model (micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms other baseline models, including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and a Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability, our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780 and 0.5002, respectively. Comment: Machine Learning for Healthcare 201
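    The modality-specific-then-ensemble step can be sketched as a weighted average of per-modality code probabilities. This is a minimal sketch: the probabilities, weights, and 0.5 threshold are made-up illustrations, not the study's learned ensemble.

```python
import numpy as np

# Hypothetical per-modality probabilities for three candidate ICD-10 codes.
p_text = np.array([0.80, 0.10, 0.60])   # unstructured-text model
p_semi = np.array([0.70, 0.20, 0.40])   # semi-structured-text model
p_tab  = np.array([0.65, 0.30, 0.55])   # structured-tabular model

def ensemble(prob_list, weights):
    """Weighted average of modality-specific predictions."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return np.tensordot(w, np.stack(prob_list), axes=1)

p = ensemble([p_text, p_semi, p_tab], weights=[2, 1, 1])
codes = p >= 0.5   # assign a code when the ensemble probability clears a threshold
```

    Keeping the modality models separate is also what allows the key-evidence extraction the abstract mentions: each model can point back to the inputs of its own modality.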