270 research outputs found

    Fine-tuned convolutional neural nets for cardiac MRI acquisition plane recognition

    This is an electronic version of an article published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization on 13 August 2015, by Taylor & Francis, DOI: 10.1080/21681163.2015.1061448. Available online at: http://www.tandfonline.com/10.1080/21681163.2015.1061448. In this paper, we propose a convolutional neural network-based method to automatically retrieve missing or noisy cardiac acquisition plane information from magnetic resonance imaging and predict the five most common cardiac views. We fine-tune a convolutional neural network (CNN) initially trained on a large natural image recognition data-set (Imagenet ILSVRC2012) and transfer the learnt feature representations to cardiac view recognition. We contrast this approach with a previously introduced method using classification forests and an augmented set of image miniatures, with prediction using off-the-shelf CNN features, and with CNNs learnt from scratch. We validate this algorithm on two different cardiac studies with 200 patients and 15 healthy volunteers, respectively. We show that there is value in fine-tuning a model trained for natural images to transfer it to medical images. Our approach achieves an average F1 score of 97.66% and significantly improves the state of the art of image-based cardiac view recognition. This is an important building block to organise and filter large collections of cardiac data prior to further analysis. It allows us to merge studies from multiple centres, to perform smarter image filtering, to select the most appropriate image processing algorithm, and to enhance visualisation of cardiac data-sets in content-based image retrieval.
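
    A minimal sketch of the transfer-learning recipe this abstract describes: take an ImageNet-pretrained CNN, replace its classification head with a five-way cardiac-view head, and fine-tune on the target data. This is not the authors' code; the ResNet-18 backbone, the folder layout, and the view labels are assumptions for illustration.

```python
# Hedged sketch: fine-tuning an ImageNet-pretrained CNN for 5-class cardiac view recognition.
# Backbone, dataset path, and label set are hypothetical, not the authors' setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_VIEWS = 5  # hypothetical label set, e.g. 2CH / 3CH / 4CH / LVOT / SAX

# Standard ImageNet preprocessing; greyscale MR slices are replicated to 3 channels.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("cardiac_views/train", transform=preprocess)  # hypothetical path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_VIEWS)  # swap the 1000-way ImageNet head

# Fine-tune all layers with a small learning rate so pretrained features are only gently adapted.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```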

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 2017.

    Image Quality Assessment for Population Cardiac MRI: From Detection to Synthesis

    Cardiac magnetic resonance (CMR) images play a growing role in diagnostic imaging of cardiovascular diseases. Left Ventricular (LV) cardiac anatomy and function are widely used for diagnosis and monitoring disease progression in cardiology and to assess the patient's response to cardiac surgery and interventional procedures. For population imaging studies, CMR is arguably the most comprehensive imaging modality for non-invasive and non-ionising imaging of the heart and great vessels and, hence, most suited for population imaging cohorts. Due to insufficient radiographers' experience in planning a scan, natural cardiac muscle contraction, breathing motion, and imperfect triggering, CMR can display incomplete LV coverage, which hampers quantitative LV characterization and diagnostic accuracy. To tackle this limitation and enhance the accuracy and robustness of automated cardiac volume and functional assessment, this thesis focuses on the development and application of state-of-the-art deep learning (DL) techniques in cardiac imaging. Specifically, we propose new image feature representation types that are learnt with DL models and aimed at highlighting CMR image quality across datasets. These representations are also intended to estimate CMR image quality for better interpretation and analysis. Moreover, we investigate how quantitative analysis can benefit when these learnt image representations are used in image synthesis. Specifically, a 3D Fisher discriminative representation is introduced to identify CMR image quality in the UK Biobank cardiac data. Additionally, a novel adversarial learning (AL) framework is introduced for cross-dataset CMR image quality assessment, and we show that the common representations learnt by AL can be useful and informative for cross-dataset CMR image analysis. Moreover, we utilize the dataset invariance (DI) representations for CMR volume interpolation by introducing a novel generative adversarial network (GAN)-based image synthesis framework, which enhances CMR image quality across datasets.
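
    A minimal sketch, under loose assumptions, of the kind of GAN-based slice synthesis this thesis points to: a generator predicts a missing short-axis slice from its two neighbours, while a discriminator judges real versus synthesised slices. This is not the thesis framework; the architectures, losses, and data layout are placeholders for illustration.

```python
# Hedged sketch of conditional GAN slice synthesis for incomplete LV coverage.
# All architectural choices here are illustrative assumptions, not the thesis method.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Takes the two neighbouring slices (2 channels) and predicts the missing slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, neighbours):
        return self.net(neighbours)

class Discriminator(nn.Module):
    """Judges whether a slice is a real acquisition or a synthesised one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, slice_):
        return self.net(slice_)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(neighbours, real_slice):
    # Discriminator: push real slices towards label 1, generated slices towards label 0.
    fake = G(neighbours).detach()
    d_loss = bce(D(real_slice), torch.ones(real_slice.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the ground-truth slice.
    fake = G(neighbours)
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1)) + nn.functional.l1_loss(fake, real_slice)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```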

    Optimization of Deep CNN Techniques to Classify Breast Cancer and Predict Relapse

    Breast cancer is a fatal disease that has a high rate of morbidity and mortality. Finding the right diagnosis is one of the most crucial steps in breast cancer treatment. Doctors can use machine learning (ML) and deep learning techniques to aid with diagnosis. This work makes an effort to devise a methodology for the classification of breast cancer into its molecular subtypes and prediction of relapse. The objective is to compare the performance of Deep CNN, Tuned CNN and Hypercomplex-Valued CNN, and infer the results, thus automating the classification process. The traditional method used by doctors to detect breast cancer is tedious and time-consuming. It employs multiple methods, including MRI, CT scanning, aspiration, and blood tests as well as image testing. The proposed approach uses image processing techniques to detect irregular breast tissues in the MRI. Survivors of breast cancer are still at risk of relapse after remission, and once the disease relapses, the survival rate is much lower. A thorough analysis of data can potentially identify risk factors and reduce the risk of relapse in the first place. An SVM (Support Vector Machine) module with GridSearchCV for hyperparameter tuning is used to identify patterns in those patients who experience a relapse, so that these patterns can be used to predict the relapse before it occurs. The traditional deep learning CNN model achieved an accuracy of 27%, the tuned CNN model achieved an accuracy of 92%, and the hypercomplex-valued CNN achieved an accuracy of 98%. The SVM model achieved an accuracy of 89%, and on tuning the hyperparameters with GridSearchCV it achieved an accuracy of 98%.
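
    A minimal sketch of SVM hyperparameter tuning with GridSearchCV as described above, assuming a tabular relapse dataset with numeric features and a binary label; file names, features, and the parameter grid are hypothetical.

```python
# Hedged sketch: grid-searched SVM for relapse prediction. Data files and grid are assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical pre-extracted feature matrix and binary relapse labels.
X, y = np.load("relapse_features.npy"), np.load("relapse_labels.npy")
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Scale features, then fit an SVM; GridSearchCV tries every combination with 5-fold CV.
pipeline = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
param_grid = {
    "svm__C": [0.1, 1, 10, 100],
    "svm__gamma": ["scale", 0.01, 0.001],
    "svm__kernel": ["rbf", "linear"],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out accuracy:", search.score(X_test, y_test))
```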

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient in solving complicated medical tasks or for creating insights using big data. Deep learning has emerged as a more accurate and effective technology in a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply in medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables.

    Attention Gated Networks: Learning to Leverage Salient Regions in Medical Images

    We propose a novel attention gate (AG) model for medical image analysis that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules when using convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN models such as VGG or U-Net architectures with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed AG models are evaluated on a variety of tasks, including medical image classification and segmentation. For classification, we demonstrate the use case of AGs in scan plane detection for fetal ultrasound screening. We show that the proposed attention mechanism can provide efficient object localisation while improving the overall prediction performance by reducing false positives. For segmentation, the proposed architecture is evaluated on two large 3D CT abdominal datasets with manual annotations for multiple organs. Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency. Moreover, AGs guide the model activations to be focused around salient regions, which provides better insights into how model predictions are made. The source code for the proposed AG models is publicly available. Comment: Accepted for Medical Image Analysis (Special Issue on Medical Imaging with Deep Learning). arXiv admin note: substantial text overlap with arXiv:1804.03999, arXiv:1804.0533
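
    A minimal sketch of an additive attention gate in the spirit described above. This is a common simplified re-implementation, not the authors' released code: the gating signal is resized to the skip-connection resolution, a compatibility score is computed, and the skip features are re-weighted before concatenation in a U-Net.

```python
# Hedged sketch of a 2D additive attention gate; channel sizes and resizing strategy are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, in_channels, gating_channels, inter_channels):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)   # skip features
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)  # gating signal
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)                  # scalar score map

    def forward(self, x, g):
        # x: skip features (B, C_x, H, W); g: gating signal from a coarser decoder stage.
        g_resized = F.interpolate(self.phi_g(g), size=x.shape[2:], mode="bilinear",
                                  align_corners=False)
        # Additive attention: compatibility scores -> sigmoid -> per-pixel weights in [0, 1].
        attention = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + g_resized)))
        return x * attention  # irrelevant regions are suppressed, salient ones are kept

# Usage: skip features x (B, 64, 32, 32) gated by coarser decoder features g (B, 128, 16, 16).
gate = AttentionGate(in_channels=64, gating_channels=128, inter_channels=32)
x = torch.randn(2, 64, 32, 32)
g = torch.randn(2, 128, 16, 16)
gated_skip = gate(x, g)  # same shape as x, ready for concatenation in the decoder
```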

    From Manual to Automated Design of Biomedical Semantic Segmentation Methods

    Digital imaging plays an increasingly important role in clinical practice. With the number of images that are routinely acquired on the rise, the number of experts devoted to analyzing them is by far not increasing as rapidly. This alarming disparity calls for automated image analysis methods to ease the burden on the experts and prevent a degradation of the quality of care. Semantic segmentation plays a central role in extracting clinically relevant information from images, either all by itself or as part of more elaborate pipelines, and constitutes one of the most active fields of research in medical image analysis. Thereby, the diversity of datasets is mirrored by an equally diverse number of segmentation methods, each being optimized for the datasets they are addressing. The resulting diversity of methods does not come without downsides: the specialized nature of these segmentation methods causes a dataset dependency which makes them unable to be transferred to other segmentation problems. Not only does this result in issues with out-of-the-box applicability, but it also adversely affects future method development: improvements over baselines that are demonstrated on one dataset rarely transfer to another, testifying to a lack of reproducibility and causing a frustrating literature landscape in which it is difficult to discern veritable and long-lasting methodological advances from noise. We study three different segmentation tasks in depth with the goal of understanding what makes a good segmentation model and which of the recently proposed methods are truly required to obtain competitive segmentation performance. To this end, we design state-of-the-art segmentation models for brain tumor segmentation, cardiac substructure segmentation, and kidney and kidney tumor segmentation. Each of our methods is evaluated in the context of international competitions, ensuring objective performance comparison with other methods. We obtained the third place in BraTS 2017, the second place in BraTS 2018, the first place in ACDC and the first place in the highly competitive KiTS challenge. Our analysis of the four segmentation methods reveals that competitive segmentation performance for all of these tasks can be achieved with a standard, but well-tuned U-Net architecture, which is surprising given the recent focus in the literature on finding better network architectures. Furthermore, we identify certain similarities between our segmentation pipelines and notice that their dissimilarities merely reflect well-structured adaptations in response to certain dataset properties. This leads to the hypothesis that we can identify a direct relation between the properties of a dataset and the design choices that lead to a good segmentation model for it. Based on this hypothesis we develop nnU-Net, the first method that breaks the dataset dependency of traditional segmentation methods. Traditional segmentation methods must be developed by experts, going through an iterative trial-and-error process until they have identified a good segmentation pipeline for a given dataset. This process ultimately results in a fixed pipeline configuration which may be incompatible with other datasets, requiring extensive re-optimization. In contrast, nnU-Net makes use of a generalizing method template that is dynamically and automatically adapted to each dataset it is applied to. This is achieved by condensing domain knowledge about the design of segmentation methods into inductive biases.
    Specifically, we identify certain pipeline hyperparameters that do not need to be adapted and for which a good default value can be set for all datasets (called blueprint parameters). They are complemented with a comprehensible set of heuristic rules, which explicitly encode how the segmentation pipeline and the network architecture that is used along with it must be adapted for each dataset (inferred parameters). Finally, a limited number of design choices is determined through empirical evaluation (empirical parameters). Following the analysis of our previously designed specialized pipelines, the basic network architecture type used is the standard U-Net, coining the name of our method: nnU-Net ("No New Net"). We apply nnU-Net to 19 diverse datasets originating from segmentation competitions in the biomedical domain. Despite being applied without manual intervention, nnU-Net sets a new state of the art in 29 out of the 49 different segmentation tasks encountered in these datasets. This is remarkable considering that nnU-Net competed against specialized manually tuned algorithms on each of them. nnU-Net is the first out-of-the-box tool that makes state-of-the-art semantic segmentation methods accessible to non-experts. As a framework, it catalyzes future method development: new design concepts can be implemented into nnU-Net and leverage its dynamic nature to be evaluated across a wide variety of datasets without the need for manual re-tuning. In conclusion, the thesis presented here exposed critical weaknesses in the current way of segmentation method development. The dataset dependency of segmentation methods impedes scientific progress by confining researchers to a subset of datasets available in the domain, causing noisy evaluation and in turn a literature landscape in which results are difficult to reproduce and true methodological advances are difficult to discern. Additionally, non-experts were barred from state-of-the-art segmentation for their custom datasets because method development is a time-consuming trial-and-error process that needs expertise to be done correctly. We propose to address this situation with nnU-Net, a segmentation method that automatically and dynamically adapts itself to arbitrary datasets, not only making out-of-the-box segmentation available for everyone but also enabling more robust decision making in the development of segmentation methods through easy and convenient evaluation across multiple datasets.
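
    To make the split into blueprint, inferred, and empirical parameters concrete, here is a purely illustrative sketch (not nnU-Net's actual rules): fixed blueprint defaults sit alongside a small function that derives per-dataset settings such as target spacing, patch size, and poolings per axis from simple dataset statistics. The specific heuristics and thresholds below are assumptions.

```python
# Hedged sketch of "inferred parameters" derived from a dataset fingerprint.
# The rules and constants are illustrative, not the published nnU-Net heuristics.
import numpy as np

# Blueprint parameters: fixed defaults shared by all datasets (hypothetical values).
BLUEPRINT = {"loss": "dice+ce", "optimizer": "sgd_nesterov", "initial_lr": 0.01}

def infer_parameters(median_spacing_mm, median_shape_vox, max_pool_per_axis=5):
    """Derive per-dataset (inferred) parameters from simple dataset statistics."""
    # Resample every case to the dataset's median voxel spacing so shapes are comparable.
    target_spacing = list(median_spacing_mm)

    # Start from the median image shape, then cap each axis so the patch fits typical GPU memory.
    patch_size = [min(int(s), 128) for s in median_shape_vox]

    # Pool along each axis only while the feature map stays reasonably large,
    # so anisotropic data gets fewer poolings along its thin axis.
    pools_per_axis = []
    for axis_len in patch_size:
        pools, length = 0, axis_len
        while length >= 8 and pools < max_pool_per_axis:
            length //= 2
            pools += 1
        pools_per_axis.append(pools)

    # Make the patch size divisible by the total downsampling factor of the U-Net.
    patch_size = [int(np.floor(p / 2**k) * 2**k) for p, k in zip(patch_size, pools_per_axis)]
    return {"target_spacing": target_spacing,
            "patch_size": patch_size,
            "pools_per_axis": pools_per_axis}

# Example: an anisotropic cardiac MR dataset with thick slices along the first axis.
print(infer_parameters(median_spacing_mm=(8.0, 1.25, 1.25), median_shape_vox=(12, 256, 216)))
```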