870 research outputs found

    Integrating Contour-Coupling with Spatio-Temporal Models in Multi-Dimensional Cardiac Image Segmentation


    A novel model-based 3D+time left ventricular segmentation technique

    A common approach to model-based segmentation is to assume a top-down modelling strategy. However, this is not feasible for complex 3D+time structures such as the cardiac left ventricle due to increased training requirements, alignment difficulties and local minima in the resulting models. As our main contribution, we present an alternative bottom-up modelling approach. By combining the variation captured in multiple dimensionally-targeted models at segmentation time, we create a scalable segmentation framework that does not suffer from the 'curse of dimensionality'. Our second contribution is a flexible contour-coupling technique that allows our segmentation method to adapt to unseen contour configurations outside the training set. This is used to identify the endo- and epicardium contours of the left ventricle by coupling them at segmentation time instead of at model time. We apply our approach to 33 3D+time MRI cardiac datasets and perform a comprehensive evaluation against several state-of-the-art works. Quantitative evaluation shows that our method requires significantly less training than state-of-the-art model-based methods while maintaining or improving segmentation accuracy.
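    One way to picture such a bottom-up strategy is to fit several small, dimensionally-targeted linear shape models and recombine their variation at segmentation time, rather than training a single high-dimensional 3D+time model. The sketch below is a toy illustration under our own assumptions (random `shapes`, per-dimension averaging, independent projection onto each model), not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: N sample contours of P points in (x, y), observed
# over T cardiac phases. Real data would come from annotated MRI.
N, P, T = 12, 40, 8
shapes = rng.normal(size=(N, T, P, 2))

def pca_model(data_2d, keep=3):
    """Fit a small linear point-distribution model: mean + principal modes."""
    mean = data_2d.mean(axis=0)
    _, s, vt = np.linalg.svd(data_2d - mean, full_matrices=False)
    return mean, vt[:keep]

# Bottom-up: one low-dimensional model per target dimension instead of
# a single 3D+time model, so training cost grows per-dimension only.
spatial_mean, spatial_modes = pca_model(shapes.mean(axis=1).reshape(N, -1))
temporal_mean, temporal_modes = pca_model(shapes.mean(axis=2).reshape(N, -1))

# At segmentation time, project an unseen observation onto each model;
# the captured variations are then combined by the framework.
obs = rng.normal(size=(T, P, 2))
b_s = spatial_modes @ (obs.mean(axis=0).ravel() - spatial_mean)
b_t = temporal_modes @ (obs.mean(axis=1).ravel() - temporal_mean)

spatial_fit = spatial_mean + b_s @ spatial_modes    # reconstructed contour
temporal_fit = temporal_mean + b_t @ temporal_modes  # reconstructed trajectory
```

    Each sub-model stays low-dimensional, so the number of training samples needed does not explode with the product of spatial and temporal dimensions.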

    CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions

    Cardiac CINE magnetic resonance imaging is the gold standard for the assessment of cardiac function. Imaging accelerations have been shown to enable 3D CINE with left-ventricular (LV) coverage in a single breath-hold. However, 3D imaging remains limited by anisotropic resolution and long reconstruction times. Recently, deep learning has shown promising results for computationally efficient reconstruction of highly accelerated 2D CINE imaging. In this work, we propose a novel 4D (3D + time) deep learning-based reconstruction network, termed 4D CINENet, for prospectively undersampled 3D Cartesian CINE imaging. CINENet is based on (3 + 1)D complex-valued spatio-temporal convolutions and multi-coil data processing. We trained and evaluated the proposed CINENet on in-house acquired 3D CINE data of 20 healthy subjects and 15 patients with suspected cardiovascular disease. The proposed CINENet outperforms iterative reconstructions in visual image quality and contrast (+67% improvement). We found good agreement in LV function (bias ± 95% confidence) in terms of end-systolic volume (0 ± 3.3 ml), end-diastolic volume (-0.4 ± 2.0 ml) and ejection fraction (0.1 ± 3.2%) compared to the clinical gold-standard 2D CINE, enabling single breath-hold isotropic 3D CINE in less than 10 s scan time and ~5 s reconstruction time.
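    The complex-valued convolutions underlying CINENet can be illustrated in one dimension: a complex convolution decomposes into four real convolutions via (xr + i·xi) * (wr + i·wi) = (xr*wr − xi*wi) + i(xr*wi + xi*wr). A minimal NumPy sketch (illustrative only, not the CINENet architecture; the network applies this idea with learned (3 + 1)D kernels over multi-coil k-space data):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution built from four real convolutions, the same
    decomposition used by complex-valued convolutional layers."""
    real = np.convolve(x.real, w.real) - np.convolve(x.imag, w.imag)
    imag = np.convolve(x.real, w.imag) + np.convolve(x.imag, w.real)
    return real + 1j * imag

# Toy complex signal and kernel (MRI data is naturally complex-valued).
x = np.array([1 + 1j, 2 - 1j, 0 + 3j])
w = np.array([1 - 2j, 0.5 + 0j])
out = complex_conv1d(x, w)

# Cross-check against NumPy's native complex convolution.
assert np.allclose(out, np.convolve(x, w))
```

    In a (3 + 1)D network this factorisation is applied with a 3D spatial kernel followed by a 1D temporal kernel, keeping parameter counts far below a full 4D convolution.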

    STGP: Spatio-temporal Gaussian process models for longitudinal neuroimaging data

    Longitudinal neuroimaging data play an important role in mapping the neural developmental profiles of major neuropsychiatric and neurodegenerative disorders and of the normal brain. Such developmental maps are critical for the prevention, diagnosis, and treatment of many brain-related diseases. The aim of this paper is to develop a spatio-temporal Gaussian process (STGP) framework that accurately delineates the developmental trajectories of brain structure and function while achieving better prediction by explicitly incorporating the spatial and temporal features of longitudinal neuroimaging data. Our STGP integrates a functional principal component analysis (FPCA) model and a partitioned parametric space-time covariance model to capture the medium-to-large and small-to-medium spatio-temporal dependence structures, respectively. We develop an efficient three-stage estimation procedure as well as a predictive method based on a kriging technique. Two key novelties of STGP are that it can efficiently use a small number of parameters to capture complex non-stationary and non-separable spatio-temporal dependence structures, and that it can accurately predict spatio-temporal changes. We illustrate STGP using simulated data sets and two real data analyses: longitudinal positron emission tomography data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and longitudinal lateral-ventricle surface data from a longitudinal study of early brain development.
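    The kriging-based prediction step can be illustrated with a toy Gaussian process on space-time points. Note the hedges: STGP's key feature is a non-separable covariance, whereas this sketch uses a simple separable product of RBF kernels, and the data, length-scales and noise level are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(a, b, ls):
    """Squared-exponential kernel between 1-D coordinate vectors."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Toy longitudinal design: 5 spatial locations observed at 4 time points.
s = np.linspace(0, 1, 5)
t = np.linspace(0, 1, 4)
S, T = np.meshgrid(s, t, indexing="ij")
X = np.column_stack([S.ravel(), T.ravel()])
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=len(X))

# Separable space-time covariance: k((s,t),(s',t')) = k_s(s,s') * k_t(t,t').
K = rbf(X[:, 0], X[:, 0], 0.3) * rbf(X[:, 1], X[:, 1], 0.5)
K += 0.05 ** 2 * np.eye(len(X))          # observation noise / jitter

# Kriging (GP posterior mean) at a new space-time point.
x_star = np.array([[0.5, 0.5]])
k_star = rbf(x_star[:, 0], X[:, 0], 0.3) * rbf(x_star[:, 1], X[:, 1], 0.5)
y_star = k_star @ np.linalg.solve(K, y)
```

    The prediction is a covariance-weighted combination of all observations, which is why the quality of the fitted space-time covariance drives predictive accuracy.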

    Deep learning cardiac motion analysis for human survival prediction

    Motion analysis is used in computer vision to understand the behaviour of moving objects in sequences of images. Optimising the interpretation of dynamic biological systems requires accurate and precise motion tracking, as well as efficient representations of high-dimensional motion trajectories, so that these can be used for prediction tasks. Here we use image sequences of the heart, acquired using cardiac magnetic resonance imaging, to create time-resolved three-dimensional segmentations with a fully convolutional network trained on anatomical shape priors. This dense motion model formed the input to a supervised denoising autoencoder (4Dsurvival), a hybrid network whose autoencoder learns a task-specific latent representation trained on observed outcome data, yielding a latent code optimised for survival prediction. To handle right-censored survival outcomes, our network used a Cox partial likelihood loss function. In a study of 302 patients, the predictive accuracy (quantified by Harrell's C-index) was significantly higher (p < 0.0001) for our model, C = 0.73 (95% CI: 0.68-0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53-0.65). This work demonstrates how a complex computer vision task using high-dimensional medical image data can efficiently predict human survival.
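    The Cox partial likelihood loss mentioned above has a compact form: for each observed event, the subject's risk score minus a log-sum-exp over the risk set (everyone still under observation at that time). A minimal NumPy sketch, assuming untied event times; 4Dsurvival uses this loss inside network training, whereas this standalone version only evaluates it:

```python
import numpy as np

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored data.
    risk:  predicted log-hazard per subject, shape (n,)
    time:  observed event or censoring time, shape (n,)
    event: 1 if the event occurred, 0 if right-censored, shape (n,)
    """
    order = np.argsort(-time)                 # sort by descending time
    risk, event = risk[order], event[order]
    # The risk set of subject i is everyone with time >= time_i; in
    # descending-time order that is a running log-sum-exp.
    log_cum = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_cum) * event) / max(event.sum(), 1)

# A score that ranks earlier events as higher-risk should get a lower
# loss than the reversed ranking.
time = np.array([1.0, 2.0, 3.0])
event = np.array([1.0, 1.0, 1.0])
good = cox_ph_loss(np.array([2.0, 1.0, 0.0]), time, event)
bad = cox_ph_loss(np.array([0.0, 1.0, 2.0]), time, event)
assert good < bad
```

    Because the loss depends only on the ordering of risk scores within each risk set, it directly targets the ranking quality that the C-index measures.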

    Accelerating cardiovascular MRI
