
    Evaluation of state-of-the-art segmentation algorithms for left ventricle infarct from late Gadolinium enhancement MR images

    Studies have demonstrated the feasibility of late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) imaging for guiding the management of patients with sequelae of myocardial infarction, such as ventricular tachycardia and heart failure. Clinical implementation of these developments necessitates reproducible and reliable segmentation of the infarcted regions. Comparing new algorithms for infarct segmentation in the left ventricle (LV) with existing ones is challenging, and benchmarking datasets with evaluation strategies are much needed to facilitate such comparisons. This manuscript presents a benchmarking evaluation framework for future algorithms that segment infarct from LGE CMR of the LV. The image database consists of 30 LGE CMR images of both humans and pigs, acquired at two separate imaging centres. A consensus ground truth was obtained for all data using maximum likelihood estimation. Six widely used fixed-thresholding methods and five recently developed algorithms are tested on the benchmarking framework. Results demonstrate that the algorithms have better overlap with the consensus ground truth than most of the n-SD fixed-thresholding methods, with the exception of the full-width-at-half-maximum (FWHM) method. Some of the pitfalls of fixed-thresholding methods are demonstrated in this work. The benchmarking evaluation framework, which is a contribution of this work, can be used to test and benchmark future algorithms that detect and quantify infarct in LGE CMR images of the LV. The datasets, ground truth and evaluation code have been made publicly available through the website: https://www.cardiacatlas.org/web/guest/challenges
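As a concrete illustration of the fixed-thresholding baselines evaluated above, below is a minimal sketch of the n-SD and full-width-at-half-maximum (FWHM) rules applied to a flattened array of myocardial intensities. The function names and the synthetic values are illustrative, not taken from the benchmark:

```python
import numpy as np

def nsd_threshold(intensities, remote_mask, n=5):
    """n-SD rule: voxels brighter than mean + n*SD of a remote
    (healthy) myocardial region are labelled as infarct."""
    remote = intensities[remote_mask]
    thr = remote.mean() + n * remote.std()
    return intensities > thr

def fwhm_threshold(intensities, myocardium_mask):
    """FWHM rule: voxels brighter than half the maximum myocardial
    intensity are labelled as infarct."""
    thr = 0.5 * intensities[myocardium_mask].max()
    return intensities > thr

# Synthetic example: four healthy voxels (~100) and two enhanced (~400).
img = np.array([100.0, 110.0, 90.0, 400.0, 380.0, 105.0])
remote = np.array([True, True, True, False, False, True])
myo = np.ones(6, dtype=bool)
print(nsd_threshold(img, remote, n=5).sum())   # number of infarct voxels
print(fwhm_threshold(img, myo).sum())
```

Both rules flag the two enhanced voxels here, but the n-SD result is sensitive to the choice of n and of the remote region, one of the pitfalls the benchmark exposes.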

    Deep Learning in Cardiology

    The medical field is generating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient at solving complicated medical tasks or at creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction and intervention. Deep learning is a representation learning method consisting of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signal and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology, which also apply to medicine in general, while proposing certain directions as the most viable for clinical use. Comment: 27 pages, 2 figures, 10 tables

    Bayesian Inference with Combined Dynamic and Sparsity Models: Application in 3D Electrophysiological Imaging

    Data-driven inference is widely encountered in various scientific domains as a means of converting observed measurements into information about a system that cannot be directly observed. Despite rapidly developing sensor and imaging technologies, in many domains data collection remains an expensive endeavor due to financial and physical constraints. To overcome the limits in data and to reduce the demand for expensive data collection, it is important to incorporate prior information in order to place the data-driven inference in a domain-relevant context and to improve its accuracy. Two sources of assumptions have been used successfully in many inverse problem applications. One is the temporal dynamics of the system (dynamic structure). The other is the low-dimensional structure of a system (sparsity structure). In existing work, these two structures have often been explored separately, while in most high-dimensional dynamic systems they commonly co-exist and contain complementary information. In this work, our main focus is to build a robust inference framework that combines dynamic and sparsity constraints. The driving application is a biomedical inverse problem of electrophysiological (EP) imaging, which noninvasively and quantitatively reconstructs transmural action potentials from body-surface voltage data with the goal of improving cardiac disease prevention, diagnosis, and treatment. The general framework can be extended to a variety of applications that deal with the inference of high-dimensional dynamic systems.
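The combination of dynamic and sparsity constraints can be illustrated with a toy objective of the form ||y_t - H x_t||^2 + lam_dyn ||x_t - F x_{t-1}||^2 + lam_sparse ||x_t||_1, solved per time step with an ISTA loop. This is a simplified sketch under assumed linear dynamics F and measurement operator H, not the dissertation's actual algorithm:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dynamic_sparse_estimate(Y, H, F, lam_dyn=0.1, lam_sparse=0.2, n_iter=500):
    """Estimate a sparse state sequence x_t from measurements
    y_t = H x_t + noise, coupling an L1 sparsity prior with a linear
    dynamic prior x_t ~ F x_{t-1}, via per-time-step ISTA."""
    n = H.shape[1]
    x_prev = np.zeros(n)
    # Lipschitz constant of the gradient of the smooth part of the objective
    L = 2.0 * (np.linalg.norm(H, 2) ** 2 + lam_dyn)
    X = []
    for y in Y:                      # Y: (T, m) array of measurements
        x = x_prev.copy()
        target = F @ x_prev          # dynamic prediction from previous step
        for _ in range(n_iter):
            grad = 2.0 * H.T @ (H @ x - y) + 2.0 * lam_dyn * (x - target)
            x = soft_threshold(x - grad / L, lam_sparse / L)
        X.append(x)
        x_prev = x
    return np.array(X)
```

With H = F = I and a measurement that is large in one component and zero elsewhere, the estimate keeps the large component (slightly shrunk) and zeroes the rest, showing how the two priors act together.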

    Flow pattern analysis for magnetic resonance velocity imaging

    Blood flow in the heart is highly complex. Although blood flow patterns have been investigated by both computational modelling and invasive/non-invasive imaging techniques, their evolution and intrinsic connection with cardiovascular disease have yet to be explored. Magnetic resonance (MR) velocity imaging provides a comprehensive multi-directional in vivo flow distribution, so that detailed quantitative analysis of flow patterns is now possible. However, direct visualisation or quantification of vector fields is of little clinical use, especially for inter-subject or serial comparison of changes in flow patterns due to disease progression or in response to therapeutic measures. In order to achieve a comprehensive and integrated description of flow in health and disease, it is necessary to characterise and model both normal and abnormal flows and their effects. To accommodate the diversity of flow patterns in relation to morphological and functional changes, we describe in this thesis an approach that detects salient topological features prior to analytical assessment of the dynamical indices of the flow patterns. To improve the accuracy of quantitative analysis of the evolution of topological flow features, it is essential to restore the original flow fields so that critical points associated with salient flow features can be more reliably detected. We propose a novel framework for the restoration, abstraction, extraction and tracking of flow features such that their dynamic indices can be accurately tracked and quantified. The restoration method is formulated as a constrained optimisation problem to remove the effects of noise and to improve the consistency of the MR velocity data. A computational scheme is derived from the First Order Lagrangian Method for solving the optimisation problem.
    After restoration, flow abstraction is applied to partition the entire flow field into clusters, each of which is represented by a local linear expansion of its velocity components. This process not only greatly reduces the amount of data required to encode the velocity distribution but also permits an analytical representation of the flow field from which critical points associated with salient flow features can be accurately extracted. After the critical points are extracted, phase portrait theory can be applied to classify them as attracting/repelling foci, attracting/repelling nodes, planar vortices, or saddles. In this thesis, we have focused on vortical flow features formed in diastole. To track the movement of the vortices within a cardiac cycle, a tracking algorithm based on relaxation labelling is employed. The constraints and parameters used in the tracking algorithm are designed using the characteristics of the vortices. The proposed framework is validated with both simulated and in vivo data acquired from patients undergoing sequential MR examination following myocardial infarction. The main contribution of the thesis lies in the new vector field restoration and flow feature abstraction methods proposed. They allow the accurate tracking and quantification of dynamic indices associated with salient features, so that inter- and intra-subject comparisons can be more easily made. This provides further insight into the evolution of blood flow patterns and permits the establishment of links between blood flow patterns and the localised genesis and progression of cardiovascular disease.
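The phase-portrait classification of critical points described above reduces, in 2D, to inspecting the eigenvalues of the velocity field's Jacobian at the critical point: a complex pair gives a focus (a planar vortex, i.e. a centre, when the real part vanishes), real eigenvalues of the same sign give a node, and of opposite sign a saddle. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def classify_critical_point(J, tol=1e-9):
    """Classify a 2D critical point from the Jacobian J of the
    velocity field, following phase-portrait theory."""
    eig = np.linalg.eigvals(J)
    re, im = eig.real, eig.imag
    if np.all(np.abs(im) > tol):              # complex conjugate pair
        if np.all(np.abs(re) < tol):
            return "planar vortex"            # centre: pure rotation
        return "attracting focus" if re[0] < 0 else "repelling focus"
    if re[0] * re[1] < 0:                     # real, opposite signs
        return "saddle"
    return "attracting node" if re[0] < 0 else "repelling node"

# Pure rotation [[0, -1], [1, 0]] has eigenvalues +/- i: a planar vortex.
print(classify_critical_point(np.array([[0.0, -1.0], [1.0, 0.0]])))
```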

    Uncertainty Quantification and Reduction in Cardiac Electrophysiological Imaging

    Cardiac electrophysiological (EP) imaging involves solving an inverse problem that infers cardiac electrical activity from body-surface electrocardiography data on a physical domain defined by the body torso. To avoid unreasonable solutions that may nonetheless fit the data, this inference is often guided by data-independent prior assumptions about different properties of the cardiac electrical sources as well as the physical domain. However, these prior assumptions may involve errors and uncertainties that affect the inference accuracy. For example, common prior assumptions on the source properties, such as fixed spatial and/or temporal smoothness or sparseness assumptions, may not match the true source properties under different conditions, leading to uncertainties in the inference. Furthermore, prior assumptions on the physical domain, such as the anatomy and tissue conductivity of the different organs in the thorax model, represent an approximation of the physical domain, introducing errors into the inference. To determine the robustness of EP imaging systems for future clinical practice, it is important to identify these errors and uncertainties and assess their impact on the solution. This dissertation focuses on quantifying and reducing the impact on the EP imaging solution of uncertainties caused by prior assumptions/models on cardiac source properties, as well as of anatomical modeling uncertainties. To assess the effect of fixed prior assumptions/models about cardiac source properties on the solution of EP imaging, we propose a novel yet simple Lp-norm regularization method for volumetric cardiac EP imaging. This study demonstrates the necessity of an adaptive prior model (rather than a fixed model) for constraining the complex, spatiotemporally changing properties of the cardiac sources.
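An Lp-norm regularized inverse problem of the kind mentioned above can be sketched with iteratively reweighted least squares (IRLS), where p interpolates between a sparsity-promoting prior (p approaching 1) and a smooth quadratic prior (p = 2). This is a generic textbook-style illustration, not the dissertation's implementation:

```python
import numpy as np

def lp_regularized_solve(A, b, lam=0.1, p=1.5, n_iter=50, eps=1e-8):
    """IRLS sketch for  min_x ||A x - b||^2 + lam * ||x||_p^p,  1 <= p <= 2.
    Each iteration solves a weighted ridge system; eps keeps the weights
    finite when a coefficient approaches zero."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized start
    for _ in range(n_iter):
        # Weights from the Lp penalty gradient: lam*p*|x_i|^(p-2) * x_i
        w = p * (x ** 2 + eps) ** ((p - 2) / 2)
        x = np.linalg.solve(2 * A.T @ A + lam * np.diag(w), 2 * A.T @ b)
    return x
```

For p = 2 the weights are constant and the iteration reproduces the closed-form ridge solution (A^T A + lam I)^{-1} A^T b, which gives a quick sanity check.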
    We then propose a multiple-model Bayesian approach to cardiac EP imaging that employs a continuous combination of prior models, each reflecting a specific spatial property of the volumetric sources. The 3D source estimate is then obtained as a weighted combination of solutions across all models. By including a continuous combination of prior models, our proposed method reduces the chance of mismatch between prior models and true source properties, which in turn enhances the robustness of the EP imaging solution. To quantify the impact of anatomical modeling uncertainties on the EP imaging solution, we propose a systematic statistical framework. Founded on statistical shape modeling and the unscented transform, our method quantifies anatomical modeling uncertainties and establishes their relation to the EP imaging solution. Applied to anatomical models generated from different image resolutions and different segmentations, it demonstrates the robustness of the EP imaging solution to these variations in anatomical shape detail. We then propose a simplified anatomical model for the heart that incorporates only certain subject-specific anatomical parameters while discarding local shape details. Requiring fewer resources and less processing for successful EP imaging, this simplified model provides a simple, clinically compatible anatomical modeling experience for EP imaging systems. The different components of our proposed methods are validated through a comprehensive set of synthetic and real-data experiments, including various typical pathological conditions and/or diagnostic procedures, such as myocardial infarction and pacing. Overall, the methods presented in this dissertation for the quantification and reduction of uncertainties in cardiac EP imaging enhance its robustness, helping to close the gap between EP imaging in research and its clinical application.
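The unscented transform used above to relate anatomical uncertainty to the imaging solution can be sketched as follows: sigma points drawn from a Gaussian over shape parameters are pushed through a nonlinear map, and the output distribution is re-summarized by weighted moments. This is a basic textbook variant (the function name and the kappa parameterization are assumptions, not the dissertation's exact implementation):

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f
    using 2n+1 sigma points; returns the output mean and covariance."""
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)        # sigma-point spread
    sigma = ([mean]
             + [mean + S[:, i] for i in range(n)]
             + [mean - S[:, i] for i in range(n)])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    Y = np.array([f(s) for s in sigma])              # propagated points
    m = w @ Y                                        # weighted output mean
    d = Y - m
    P = (w[:, None] * d).T @ d                       # weighted output covariance
    return m, P
```

For a linear map f(x) = A x the transform is exact, recovering mean A m and covariance A P A^T, which makes the sketch easy to verify before applying it to a nonlinear shape-to-solution map.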

    Machine learning approaches to model cardiac shape in large-scale imaging studies

    Recent improvements in non-invasive imaging, together with the introduction of fully-automated segmentation algorithms and big data analytics, have paved the way for large-scale population-based imaging studies. These studies promise to increase our understanding of a large number of medical conditions, including cardiovascular diseases. However, analysis of cardiac shape in such studies is often limited to simple morphometric indices, ignoring a large part of the information available in medical images. Discovery of new biomarkers by machine learning has recently gained traction, but often lacks interpretability. The research presented in this thesis aimed at developing novel, explainable machine learning and computational methods capable of better summarizing shape variability, in order to better inform association and predictive clinical models in large-scale imaging studies. A powerful and flexible framework to model the relationship between three-dimensional (3D) cardiac atlases, encoding multiple phenotypic traits, and genetic variables is first presented. The proposed approach enables the detection of regional phenotype-genotype associations that would otherwise be neglected by conventional association analysis. Three learning-based systems based on deep generative models are then proposed. In the first model, I propose a classifier of cardiac shapes which exploits task-specific generative shape features and is designed to enable the visualisation in 3D of the anatomical effect these features encode, making the classification task transparent. The second approach models a database of anatomical shapes via a hierarchy of conditional latent variables and is capable of detecting, quantifying and visualising onto a template shape the most discriminative anatomical features that characterize distinct clinical conditions.
    Finally, a preliminary analysis of a deep learning system capable of reconstructing 3D high-resolution cardiac segmentations from a sparse set of 2D view segmentations is reported. This thesis demonstrates that machine learning approaches can facilitate high-throughput analysis of normal and pathological anatomy and of its determinants without losing clinical interpretability.
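A minimal example of summarizing shape variability, in the spirit of the atlas-based analyses described above, is a PCA statistical shape model over co-registered landmark coordinates. This is a generic sketch for illustration only; the thesis itself uses more sophisticated deep generative models:

```python
import numpy as np

def fit_shape_model(X, n_modes=2):
    """Fit a PCA shape model: rows of X are flattened landmark
    coordinates of co-registered shapes. Returns the mean shape, the
    leading modes of variation, and the per-mode variances."""
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes], (s ** 2) / (len(X) - 1)

def synthesize(mean, modes, coeffs):
    """Generate a new shape as mean + sum_k coeffs[k] * mode_k."""
    return mean + coeffs @ modes

# Toy dataset: four 1-landmark "shapes" varying only along x.
X = np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0], [6.0, 0.0]])
mean, modes, var = fit_shape_model(X, n_modes=1)
print(mean, var[0])   # mean shape and variance of the first mode
```

The low-dimensional mode coefficients can then feed association or predictive models, which is the role the richer learned representations play in the thesis.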

    Artificial Intelligence Applications in Cardiovascular Magnetic Resonance Imaging: Are We on the Path to Avoiding the Administration of Contrast Media?

    In recent years, cardiovascular imaging examinations have experienced exponential growth due to technological innovation, a trend consistent with the most recent chest pain guidelines. Contrast media play a crucial role in cardiovascular magnetic resonance (CMR) imaging, allowing for more precise characterization of different cardiovascular diseases. However, contrast media have contraindications and side effects that limit their clinical application in certain patients. The application of artificial intelligence (AI)-based techniques to CMR imaging has led to the development of non-contrast models. These AI models use non-contrast imaging data, either independently or in combination with clinical and demographic data, as input to generate diagnostic or prognostic algorithms. In this review, we provide an overview of the main concepts pertaining to AI, review the existing literature on non-contrast AI models in CMR, and finally discuss the strengths and limitations of these AI models and their possible future development.