26 research outputs found

    Deep Learning in Cardiology

    The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method that consists of layers which transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, and propose certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables
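    The review's framing of deep learning as stacked non-linear transformations can be made concrete with a toy example. The sketch below is illustrative and not taken from the paper: it pushes a small batch of structured records through three randomly initialised layers, where the layer widths and the ReLU non-linearity are assumptions chosen only to show how each layer re-represents the data at a higher level of abstraction.

```python
# Minimal sketch (not from the review): a feed-forward network as stacked
# non-linear transformations, the core idea of representation learning.
# Layer sizes and the toy input are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # Affine map followed by a ReLU non-linearity: one representation level.
    return np.maximum(0.0, x @ w + b)

# Toy input: a batch of 4 records with 16 structured features (e.g. labs, vitals).
x = rng.normal(size=(4, 16))

# Three stacked layers; each output is a progressively more abstract representation.
h1 = layer(x,  rng.normal(size=(16, 32)), np.zeros(32))
h2 = layer(h1, rng.normal(size=(32, 16)), np.zeros(16))
h3 = layer(h2, rng.normal(size=(16, 8)),  np.zeros(8))

print(h3.shape)  # (4, 8): an 8-dimensional representation per record
```

    In a trained model the weights would of course be learned from data rather than sampled at random; the point here is only the layered, non-linear structure.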

    Bayesian Deep Learning for Cardiac Motion Modelling and Analysis

    Cardiovascular diseases (CVDs) remain a primary cause of mortality globally, with an estimated 17.9 million deaths in 2019, accounting for 32% of all global fatalities. In recent decades, non-invasive imaging, particularly Magnetic Resonance Imaging (MRI), has become pivotal in diagnosing CVDs, offering high-resolution, multidimensional, and sequential cardiac data. However, the interpretation of cardiac MRI data is challenging due to the complexities of cardiac motion and anatomical variation. Traditional manual methods are time-consuming and subject to variability. Deep learning (DL) methods, notably generative models, have recently advanced medical image analysis, offering state-of-the-art solutions for segmentation, registration, and motion modelling. This thesis presents the development and validation of deep-learning frameworks for cardiac motion modelling and analysis from sequential cardiac MRI scans. At its core, it introduces a probabilistic generative model for cardiac motion modelling, underpinned by temporal coherence and capable of synthesising new cardiac MR (CMR) sequences. Three models are derived from this foundational probabilistic model, each addressing a different aspect of the problem. Firstly, through the innovative application of gradient surgery techniques, we address the dual objectives of attaining high registration accuracy and ensuring the diffeomorphic characteristics of the predicted motion fields. Secondly, we introduce the joint operation of ventricular segmentation and motion modelling. The proposed method combines anatomical precision with dynamic temporal flow to enhance both the accuracy of motion modelling and the stability of sequential segmentation. Finally, we introduce a conditional motion transfer framework that leverages variational models to generate cardiac motion, enabling anomaly detection and data augmentation, particularly for pathologies that are under-represented in datasets. This capability to transfer and transform cardiac motion across healthy and pathological domains is set to revolutionise how clinicians and researchers understand and interpret cardiac function and anomalies. Collectively, these advancements are novel and have clear application potential in cardiac image processing. The methodologies proposed herein have the potential to transform routine clinical diagnostics and interventions, allowing for more nuanced and detailed cardiac assessments. The probabilistic nature of these models promises to deliver not only more detailed insights into cardiac health but also to foster the development of personalised medicine approaches in cardiology.
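    The gradient surgery mentioned above, used to balance registration accuracy against the diffeomorphic regularity of the motion fields, can be illustrated with a PCGrad-style projection, one common formulation of the idea. The thesis may use a different variant; the toy gradients below are invented purely for the example.

```python
# Hedged sketch of gradient surgery (PCGrad-style): when the gradients of two
# objectives conflict (negative inner product), project one onto the normal
# plane of the other before combining. Not the thesis's actual code.
import numpy as np

def combine_with_surgery(g_sim, g_reg, eps=1e-12):
    """g_sim: gradient of the image-similarity loss;
    g_reg: gradient of the regularity (diffeomorphism) loss."""
    dot = float(np.dot(g_sim, g_reg))
    if dot < 0.0:  # conflicting objectives
        g_sim = g_sim - dot / (np.dot(g_reg, g_reg) + eps) * g_reg
    return g_sim + g_reg  # combined update direction

# Toy gradients for a 5-dimensional parameter vector.
g_similarity = np.array([ 1.0, 2.0, -1.0, 0.5, 0.0])
g_regularity = np.array([-1.0, 0.5,  1.0, 0.0, 0.2])
print(combine_with_surgery(g_similarity, g_regularity))
```

    The projection removes the component of the similarity gradient that would undo the regularity objective, so both losses can make progress in the same update.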

    From Manual to Automated Design of Biomedical Semantic Segmentation Methods

    Digital imaging plays an increasingly important role in clinical practice. With the number of routinely acquired images on the rise, the number of experts devoted to analyzing them is by far not increasing as rapidly. This alarming disparity calls for automated image analysis methods to ease the burden on the experts and prevent a degradation of the quality of care. Semantic segmentation plays a central role in extracting clinically relevant information from images, either on its own or as part of more elaborate pipelines, and constitutes one of the most active fields of research in medical image analysis. The diversity of datasets is mirrored by an equally diverse set of segmentation methods, each optimized for the dataset it addresses. The resulting diversity of methods does not come without downsides: the specialized nature of these segmentation methods causes a dataset dependency which prevents them from being transferred to other segmentation problems. Not only does this result in issues with out-of-the-box applicability, but it also adversely affects future method development: improvements over baselines that are demonstrated on one dataset rarely transfer to another, testifying to a lack of reproducibility and causing a frustrating literature landscape in which it is difficult to discern veritable, long-lasting methodological advances from noise. We study three different segmentation tasks in depth with the goal of understanding what makes a good segmentation model and which of the recently proposed methods are truly required to obtain competitive segmentation performance. To this end, we design state-of-the-art segmentation models for brain tumor segmentation, cardiac substructure segmentation, and kidney and kidney tumor segmentation. Each of our methods is evaluated in the context of international competitions, ensuring objective performance comparison with other methods. We obtained third place in BraTS 2017, second place in BraTS 2018, first place in ACDC, and first place in the highly competitive KiTS challenge. Our analysis of these four segmentation methods reveals that competitive segmentation performance for all of these tasks can be achieved with a standard but well-tuned U-Net architecture, which is surprising given the recent focus in the literature on finding better network architectures. Furthermore, we identify certain similarities between our segmentation pipelines and observe that their dissimilarities merely reflect well-structured adaptations in response to certain dataset properties. This leads to the hypothesis that there is a direct relation between the properties of a dataset and the design choices that lead to a good segmentation model for it. Based on this hypothesis we develop nnU-Net, the first method that breaks the dataset dependency of traditional segmentation methods. Traditional segmentation methods must be developed by experts, who go through an iterative trial-and-error process until they have identified a good segmentation pipeline for a given dataset. This process ultimately results in a fixed pipeline configuration which may be incompatible with other datasets, requiring extensive re-optimization. In contrast, nnU-Net uses a generalizing method template that is dynamically and automatically adapted to each dataset it is applied to. This is achieved by condensing domain knowledge about the design of segmentation methods into inductive biases.
    Specifically, we identify pipeline hyperparameters that do not need to be adapted and for which a good default value can be set for all datasets (blueprint parameters). These are complemented by a comprehensible set of heuristic rules that explicitly encode how the segmentation pipeline, and the network architecture used with it, must be adapted to each dataset (inferred parameters). Finally, a limited number of design choices are determined through empirical evaluation (empirical parameters). Following the analysis of our previously designed specialized pipelines, the basic network architecture type used is the standard U-Net, coining the name of our method: nnU-Net ("No New Net"). We apply nnU-Net to 19 diverse datasets originating from segmentation competitions in the biomedical domain. Despite being applied without manual intervention, nnU-Net sets a new state of the art in 29 out of the 49 different segmentation tasks encountered in these datasets. This is remarkable considering that nnU-Net competed against specialized, manually tuned algorithms on each of them. nnU-Net is the first out-of-the-box tool that makes state-of-the-art semantic segmentation methods accessible to non-experts. As a framework, it catalyzes future method development: new design concepts can be implemented in nnU-Net and leverage its dynamic nature to be evaluated across a wide variety of datasets without manual re-tuning. In conclusion, this thesis exposes critical weaknesses in the current approach to segmentation method development. The dataset dependency of segmentation methods impedes scientific progress by confining researchers to a subset of the datasets available in the domain, causing noisy evaluation and in turn a literature landscape in which results are difficult to reproduce and true methodological advances are difficult to discern. Additionally, non-experts have been barred from state-of-the-art segmentation for their custom datasets because method development is a time-consuming trial-and-error process that requires expertise to be done correctly. We propose to address this situation with nnU-Net, a segmentation method that automatically and dynamically adapts itself to arbitrary datasets, not only making out-of-the-box segmentation available to everyone but also enabling more robust decision making in the development of segmentation methods through easy and convenient evaluation across multiple datasets.
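    The split into blueprint, inferred, and empirical parameters can be illustrated with a toy version of the heuristic rules that map a dataset fingerprint to inferred parameters. The specific rules, numbers, and voxel budget below are simplified assumptions for illustration only and do not reproduce nnU-Net's actual configuration logic.

```python
# Illustrative sketch: deriving "inferred parameters" from a dataset fingerprint
# via explicit heuristic rules. Rules and constants are assumptions, not nnU-Net's.
import numpy as np

def infer_parameters(spacings_mm, shapes_vox, max_voxels_per_patch=128 ** 3):
    spacings = np.asarray(spacings_mm, dtype=float)
    shapes = np.asarray(shapes_vox, dtype=float)

    # Rule 1: resample every case to the dataset's median voxel spacing.
    target_spacing = np.median(spacings, axis=0)

    # Rule 2: start from the median shape after resampling ...
    median_shape = np.median(shapes * spacings / target_spacing, axis=0)

    # Rule 3: ... and shrink the patch until it fits a voxel budget
    # (a stand-in for GPU memory constraints), keeping sizes divisible by 8.
    patch = median_shape.copy()
    while np.prod(patch) > max_voxels_per_patch:
        patch[np.argmax(patch)] *= 0.9
    patch = np.maximum(8, np.round(patch / 8) * 8).astype(int)

    return {"target_spacing": target_spacing.tolist(), "patch_size": patch.tolist()}

# Toy fingerprint: three cases with (z, y, x) spacing in mm and shape in voxels.
print(infer_parameters(
    spacings_mm=[(2.5, 1.0, 1.0), (3.0, 0.8, 0.8), (2.5, 1.2, 1.2)],
    shapes_vox=[(60, 320, 320), (48, 400, 400), (64, 256, 256)],
))
```

    In this picture, blueprint parameters would be fixed defaults applied regardless of the fingerprint, while empirical parameters would be chosen by evaluating a small number of candidate configurations.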

    Deep Learning in Cardiac Magnetic Resonance Image Analysis and Cardiovascular Disease Diagnosis

    Cardiovascular diseases (CVDs) are the leading cause of death in the world, accounting for 17.9 million deaths each year, 31% of all global deaths. According to the World Health Organisation (WHO), this number is expected to rise to 23 million by 2030. As a noninvasive technique, medical imaging, together with corresponding computer vision techniques, is becoming increasingly popular for detecting, understanding, and analysing CVDs. With the advent of deep learning, there have been significant improvements in medical image analysis tasks (e.g. image registration, image segmentation, mesh reconstruction from images), achieving much faster and more accurate registration, segmentation, reconstruction, and disease diagnosis. This thesis focuses on cardiac magnetic resonance images, systematically studying critical tasks in CVD analysis, including image registration, image segmentation, cardiac mesh reconstruction, and CVD prediction/diagnosis. We first present a thorough review of deep learning-based image registration approaches and subsequently propose a novel solution to the problem of discontinuity-preserving intra-subject cardiac image registration, which is generally ignored in previous deep learning-based registration methods. Building on this, a joint segmentation and registration framework is further proposed to learn the relationship between these two tasks, leading to better registration and segmentation performance. In order to characterise the shape and motion of the heart in 3D, we present a deep learning-based 3D mesh reconstruction network that recovers accurate 3D cardiac shapes from 2D slice-wise segmentation masks/contours in a fast and robust manner. Finally, for CVD prediction/diagnosis, we design a multichannel variational autoencoder to learn the joint latent representation of the original cardiac image and mesh, resulting in a shape-aware image representation (SAIR) that serves as an explainable biomarker. SAIR has been shown to outperform traditional biomarkers in the prediction of acute myocardial infarction and the diagnosis of several other CVDs, and can supplement existing biomarkers to improve overall predictive performance.
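    The multichannel variational autoencoder described above can be sketched as two encoders, one per channel (image and mesh), whose posteriors are fused into a single shared latent space from which both channels are reconstructed. The architecture sizes, the simple mean fusion of the two posteriors, and the loss weighting below are assumptions for illustration, not the thesis's model; in the same spirit, the fused latent code would play the role of the shape-aware image representation used for downstream prediction.

```python
# Hedged sketch of a multichannel VAE over an image channel and a mesh channel
# sharing one latent space. Dimensions, fusion rule, and loss weights are
# illustrative assumptions only.
import torch
import torch.nn as nn

class MultiChannelVAE(nn.Module):
    def __init__(self, img_dim=1024, mesh_dim=300, latent_dim=32):
        super().__init__()
        self.enc_img = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * latent_dim))
        self.enc_mesh = nn.Sequential(nn.Linear(mesh_dim, 256), nn.ReLU(),
                                      nn.Linear(256, 2 * latent_dim))
        self.dec_img = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, img_dim))
        self.dec_mesh = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                      nn.Linear(256, mesh_dim))

    def forward(self, img, mesh):
        mu_i, logvar_i = self.enc_img(img).chunk(2, dim=-1)
        mu_m, logvar_m = self.enc_mesh(mesh).chunk(2, dim=-1)
        # Fuse the two channel posteriors into one shared posterior (simple mean).
        mu, logvar = (mu_i + mu_m) / 2, (logvar_i + logvar_m) / 2
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        return self.dec_img(z), self.dec_mesh(z), mu, logvar

model = MultiChannelVAE()
img = torch.randn(8, 1024)   # e.g. a flattened 32x32 image patch per subject
mesh = torch.randn(8, 300)   # e.g. 100 mesh vertices x 3 coordinates per subject
rec_img, rec_mesh, mu, logvar = model(img, mesh)

kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = (nn.functional.mse_loss(rec_img, img)
        + nn.functional.mse_loss(rec_mesh, mesh)
        + 1e-2 * kl)
print(float(loss))
```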