
    Automated Segmentation of Left and Right Ventricles in MRI and Classification of the Myocardium Abnormalities

    A fundamental step in the diagnosis of cardiovascular diseases, automated left and right ventricle (LV and RV) segmentation in cardiac magnetic resonance images (MRI) is still acknowledged to be a difficult problem. Although algorithms for LV segmentation do exist, they require either extensive training or intensive user input. RV segmentation in MRI is still acknowledged to be an unsolved problem because the RV's shape is neither symmetric nor circular, its deformations are complex and vary extensively over the cardiac phases, and it includes papillary muscles. In this thesis, I investigate fast detection of the LV endo- and epicardium surfaces (3D) and contours (2D) in cardiac MRI via convex relaxation and distribution matching. A rapid 3D segmentation of the RV in cardiac MRI via distribution-matching constraints on segment shape and appearance is also investigated. These algorithms require only a single subject for training and a very simple user input, which amounts to one click. The solution is sought by optimizing functionals containing probability product kernel constraints on the distributions of intensity and geometric features. The formulations lead to challenging optimization problems, which are not directly amenable to convex-optimization techniques. For each functional, the problem is split into a sequence of sub-problems, each of which can be solved exactly and globally via a convex relaxation and the augmented Lagrangian method. Finally, an information-theoretic artificial neural network (ANN) is proposed for normal/abnormal LV myocardium motion classification. Using the LV segmentation results, the LV cavity points are estimated via a Kalman filter and a recursive dynamic Bayesian filter. However, due to the similarities between the statistical information of normal and abnormal points, differentiating between their distributions is a challenging problem.
The problem was investigated with a global measure based on Shannon's differential entropy (SDE) and further examined with two other information-theoretic criteria, one based on Renyi entropy and the other on Fisher information. Unlike existing information-theoretic studies, the approach explicitly addresses the overlap between the distributions of normal and abnormal cases, thereby yielding a competitive performance. I further propose an algorithm based on a supervised three-layer ANN to differentiate between the distributions further. The ANN is trained and tested with five different information measures of radial distance and velocity for points on the endocardial boundary.
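    The thesis does not spell out its SDE estimator here; as a rough illustration only, the differential entropy of a feature such as radial distance can be estimated from samples with a simple histogram (function name, bin count, and the toy data are hypothetical choices, not from the thesis):

```python
import math

def differential_entropy(samples, bins=16):
    """Histogram estimate of Shannon differential entropy:
    H(X) ~= -sum_i p_i * log(p_i / width) over occupied bins,
    where p_i is the probability mass in bin i."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range
    counts = [0] * bins
    for x in samples:
        i = min(int((x - lo) / width), bins - 1)  # clamp the max sample
        counts[i] += 1
    n = len(samples)
    h = 0.0
    for c in counts:
        if c:
            p = c / n
            h -= p * math.log(p / width)  # density in the bin is p / width
    return h

# A more spread-out distribution has higher differential entropy,
# which is the kind of global contrast the SDE measure exploits.
narrow = [0.1 * i for i in range(100)]  # roughly uniform on [0, 9.9]
wide = [0.5 * i for i in range(100)]    # roughly uniform on [0, 49.5]
assert differential_entropy(wide) > differential_entropy(narrow)
```

    For a uniform distribution this estimate approaches log of the support length, so widening the support raises the entropy, as the assertion checks.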

    HDL: hybrid deep learning for the synthesis of myocardial velocity maps in digital twins for cardiac analysis

    Synthetic digital twins based on medical data accelerate the acquisition, labelling and decision-making procedures in digital healthcare. A core part of digital healthcare twins is model-based data synthesis, which permits the generation of realistic medical signals without having to model the complex anatomical and biochemical phenomena that produce them in reality. Unfortunately, algorithms for cardiac data synthesis have so far been scarcely studied in the literature. An important imaging modality in cardiac examination is three-directional CINE multi-slice myocardial velocity mapping (3Dir MVM), which provides a quantitative assessment of cardiac motion in three orthogonal directions of the left ventricle. The long acquisition time and complex acquisition procedure make it all the more pressing to produce synthetic digital twins of this imaging modality. In this study, we propose a hybrid deep learning (HDL) network specifically for synthesising 3Dir MVM data. Our algorithm combines a hybrid UNet with a generative adversarial network using a foreground-background generation scheme. The experimental results show that from temporally down-sampled (six times) magnitude CINE images, our proposed algorithm can still successfully synthesise high-temporal-resolution 3Dir MVM CMR data (PSNR=42.32) with precise left ventricle segmentation (DICE=0.92). These performance scores indicate that our proposed HDL algorithm can be implemented in real-world digital twins for myocardial velocity mapping data simulation. To the best of our knowledge, this work is the first in the literature to investigate digital twins of 3Dir MVM CMR, which has shown great potential for improving the efficiency of clinical studies via synthesised cardiac data.
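    The reported PSNR=42.32 follows the standard definition; a minimal sketch for flattened images (function name, peak value, and toy data are illustrative assumptions, not details from the paper):

```python
import math

def psnr(reference, synthesized, peak=1.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    flattened to sequences of floats in [0, peak]:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, synthesized)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [0.0, 0.5, 1.0, 0.25]
approx = [0.01, 0.49, 0.98, 0.26]  # small per-pixel errors
assert 37.0 < psnr(ref, approx) < 38.0
```

    A PSNR above 40 dB, as reported here, corresponds to a mean squared error below 1e-4 of the squared intensity peak.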

    09302 Abstracts Collection -- New Developments in the Visualization and Processing of Tensor Fields

    From 19.07. to 24.07.2009, the Dagstuhl Seminar 09302 "New Developments in the Visualization and Processing of Tensor Fields" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Image based approach for early assessment of heart failure.

    In diagnosing heart diseases, the estimation of cardiac performance indices requires accurate segmentation of the left ventricle (LV) wall from cine cardiac magnetic resonance (CMR) images. MR imaging is noninvasive and generates clear images; however, it is impractical to manually process the huge number of images generated to calculate the performance indices. In this dissertation, we introduce novel, fast, robust, bi-directionally coupled parametric deformable models that are capable of segmenting the LV wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of the LV wall while tracking the evolution of the parametric deformable models' control points. We tested the proposed segmentation approach on 15 data sets from 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground-truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and a mean AD value of 2.16±0.60 mm, compared with two other level set methods that achieve mean DSC values of 0.904±0.033 and 0.885±0.02, and mean AD values of 2.86±1.35 mm and 5.72±4.70 mm, respectively. Also, a novel framework for assessing both 3D functional strain and wall thickening from 4D cine cardiac magnetic resonance imaging (CCMR) is introduced. The introduced approach is primarily based on using geometrical features to track the LV wall during the cardiac cycle.
The 4D tracking approach consists of two main steps: (i) first, the surface points on the LV wall are tracked by solving a 3D Laplace equation between two subsequent LV surfaces; and (ii) second, the locations of the tracked LV surface points are iteratively adjusted through an energy-minimization cost function using a generalized Gauss-Markov random field (GGMRF) image model, in order to remove inconsistencies and preserve the anatomy of the heart wall during the tracking process. The circumferential strains are then calculated straightforwardly from the locations of the tracked LV surface points. In addition, myocardial wall thickening is estimated by co-locating the corresponding points, or matches, between the endocardium and epicardium surfaces of the LV wall using the solution of the 3D Laplace equation. Experimental results on in vivo data confirm the accuracy and robustness of our method. Moreover, the comparison results demonstrate that our approach outperforms 2D wall-thickening estimation approaches.
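    The DSC and AD figures reported above follow their standard definitions; a minimal sketch with toy binary masks and 2D contours (the dissertation evaluates 3D surfaces; all names and data here are illustrative):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1): 2|A∩B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def average_distance(contour_a, contour_b):
    """Symmetric average distance: mean nearest-point distance
    from A to B, averaged with the mean from B to A."""
    def nearest(p, pts):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in pts)
    d_ab = sum(nearest(p, contour_b) for p in contour_a) / len(contour_a)
    d_ba = sum(nearest(q, contour_a) for q in contour_b) / len(contour_b)
    return 0.5 * (d_ab + d_ba)

a = [1, 1, 0, 0]
b = [1, 0, 0, 0]
assert abs(dice(a, b) - 2 / 3) < 1e-9  # one shared voxel out of 2 + 1

# Two parallel contours one unit apart have AD exactly 1.0.
assert average_distance([(0, 0), (1, 0)], [(0, 1), (1, 1)]) == 1.0
```

    A DSC of 0.926 therefore means the automated and ground-truth masks share over 92% of their combined voxels on average.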

    Deep learning cardiac motion analysis for human survival prediction

    Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking, as well as efficient representations of high-dimensional motion trajectories so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations using a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), a hybrid network whose autoencoder learns a task-specific latent code representation trained on observed outcome data, yielding a latent representation optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < .0001) for our model (C=0.73, 95% CI: 0.68-0.78) than for the human benchmark (C=0.59, 95% CI: 0.53-0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
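    Harrell's C-index used above is the standard concordance measure for right-censored outcomes; a minimal sketch (the O(n^2) loop and all variable names are illustrative, not the paper's implementation):

```python
def harrell_c_index(times, events, risks):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i has an observed event strictly before subject j's time;
    it is concordant when i also has the higher predicted risk.
    Ties in predicted risk count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = [2, 4, 6, 8]          # follow-up times
events = [1, 1, 0, 1]         # 1 = event observed, 0 = right-censored
risks = [0.9, 0.7, 0.4, 0.1]  # perfectly anti-ordered with time
assert harrell_c_index(times, events, risks) == 1.0

# A constant (uninformative) risk score scores exactly 0.5.
assert harrell_c_index(times, events, [0.5] * 4) == 0.5
```

    On this scale, the reported C=0.73 sits well above the 0.5 of a random predictor, while the human benchmark of 0.59 is only modestly above it.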

    Two-Stage Deep Learning Framework for Quality Assessment of Left Atrial Late Gadolinium Enhanced MRI Images

    Accurate assessment of left atrial fibrosis in patients with atrial fibrillation relies on high-quality 3D late gadolinium enhancement (LGE) MRI images. However, obtaining such images is challenging due to patient motion, changing breathing patterns, or a sub-optimal choice of pulse sequence parameters. Automated assessment of LGE-MRI image diagnostic quality is clinically significant, as it would enhance diagnostic accuracy, improve efficiency, ensure standardization, and contribute to better patient outcomes by providing reliable, high-quality LGE-MRI scans for fibrosis quantification and treatment planning. To address this, we propose a two-stage deep-learning approach for automated LGE-MRI image diagnostic quality assessment. The method includes a left atrium detector to focus on relevant regions and a deep network to evaluate diagnostic quality. We explore two training strategies, multi-task learning and pretraining using contrastive learning, to overcome limited annotated data in medical imaging. With limited data, contrastive learning shows improvements of about 4% in F1-score and 9% in specificity compared with multi-task learning. (Comment: Accepted to STACOM 2023; 11 pages, 3 figures.)
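    The F1-score and specificity gains can be read against the standard definitions; a minimal sketch from binary labels (the positive-class convention and all names here are assumptions for illustration):

```python
def f1_and_specificity(y_true, y_pred):
    """F1 = 2TP / (2TP + FP + FN); specificity = TN / (TN + FP),
    for binary labels where 1 is the positive class
    (e.g. a non-diagnostic-quality scan)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)
    tn = sum(t == 0 and p == 0 for t, p in pairs)
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return f1, spec

y_true = [1, 1, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1]  # one miss, one false alarm
f1, spec = f1_and_specificity(y_true, y_pred)
assert abs(f1 - 0.5) < 1e-9
assert abs(spec - 2 / 3) < 1e-9
```

    F1 balances false alarms against misses on the positive class, while specificity isolates how reliably good-quality scans are left alone, which is why the paper reports both.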

    Constrained manifold learning for the characterization of pathological deviations from normality

    This paper describes a technique to (1) learn the representation of a pathological motion pattern from a given population, and (2) compare individuals to this population. Our hypothesis is that this pattern can be modeled as a deviation from normal motion by means of non-linear embedding techniques. Each subject is represented by a 2D map of local motion abnormalities, obtained from a statistical atlas of myocardial motion built from a healthy population. The algorithm estimates a manifold from a set of patients with varying degrees of the same disease, and compares individuals to the training population using a mapping to the manifold and a distance to normality along the manifold. The approach extends recent manifold learning techniques by constraining the manifold to pass through a physiologically meaningful origin representing a normal motion pattern. Interpolation techniques using locally adjustable kernels improve the accuracy of the method. The technique is applied in the context of cardiac resynchronization therapy (CRT), focusing on a specific motion pattern of intra-ventricular dyssynchrony called septal flash (SF). We estimate the manifold from 50 CRT candidates with SF and test it on 37 CRT candidates and 21 healthy volunteers. Experiments highlight the relevance of non-linear techniques for modeling a pathological pattern from the training set and comparing new individuals to this pattern.

    A Deep Learning Approach to Integrate Medical Big Data for Improving Health Services in Indonesia

    This paper proposes medical informatics to support health services in Indonesia. The paper focuses on the analysis of big data for health-care purposes, with the aim of improving and developing clinical decision support systems (CDSS) and assessing medical data both for quality assurance and for accessibility of health services. Electronic health records (EHR) are very rich in medical data sourced from patients. These data can be aggregated to produce information, including medical history details such as diagnostic tests, medicines and treatment plans, immunization records, allergies, radiological images, multivariate sensor devices, laboratories, and test results. This information provides a valuable understanding of disease management systems. In Indonesia, with many rural areas and a limited number of doctors, this is an important case to investigate. Data mining over large-scale individuals and populations through EHRs can be combined with mobile networks and social media to inform health and public policy. To this end, many researchers have applied deep learning (DL) to data-mining problems in health informatics. In practice, however, the use of DL is still questionable: achieving optimal performance requires relatively large data and resources, while other learning algorithms are relatively fast, produce close performance with fewer resources and less parameterization, and have better interpretability. This paper describes the advantages of deep learning for designing medical informatics, since such an approach is needed to build a good CDSS for health services.

    Recent Advances in Signal Processing

    Signal processing is a critical task in the majority of new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy. These constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be categorized into five different areas depending on the application at hand; these five categories address, in order, image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.

    Artificial Intelligence Will Transform Cardiac Imaging-Opportunities and Challenges

    National Institute for Health Research (NIHR) Cardiovascular Biomedical Research Center at Barts; SmartHeart EPSRC programme grant (www.nihr.ac.uk; EP/P001009/1); London Medical Imaging and AI Center for Value-Based Healthcare; CAP-AI programme; European Union's Horizon 2020 research and innovation programme under grant agreement No 825903.