Explainable cardiac pathology classification on cine MRI with motion characterization by semi-supervised learning of apparent flow
We propose a method to classify cardiac pathology based on a novel approach
to extract image derived features to characterize the shape and motion of the
heart. An original semi-supervised learning procedure, which makes efficient
use of a large amount of non-segmented images and a small amount of images
segmented manually by experts, is developed to generate pixel-wise apparent
flow between two time points of a 2D+t cine MRI image sequence. Combining the
apparent flow maps and cardiac segmentation masks, we obtain a local apparent
flow corresponding to the 2D motion of myocardium and ventricular cavities.
This leads to the generation of time series of the radius and thickness of
myocardial segments to represent cardiac motion. These time series of motion
features are reliable and explainable characteristics of pathological cardiac
motion. Furthermore, they are combined with shape-related features to classify
cardiac pathologies. Using only nine feature values as input, we propose an
explainable, simple and flexible model for pathology classification. The model
achieves a classification accuracy of 95% on the ACDC training set and 94% on
the ACDC testing set, comparable to the state of the art. Comparisons with
various other models are performed to outline some advantages of our model.
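The abstract's point is that a handful of explainable features can drive a very simple classifier. A minimal NumPy sketch of that idea, using a nearest-centroid rule on synthetic nine-feature vectors (the data, features, and classifier here are illustrative stand-ins, not the paper's actual model), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features = 5, 9          # e.g. five ACDC pathology classes, nine features

# Synthetic stand-in data: each case is its class's mean feature vector plus noise
centroids = rng.normal(size=(n_classes, n_features))
X = np.vstack([c + 0.1 * rng.normal(size=(20, n_features)) for c in centroids])
y = np.repeat(np.arange(n_classes), 20)

def predict(x, centroids):
    """Nearest-centroid rule: assign the class whose mean feature
    vector is closest to x in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

preds = np.array([predict(x, centroids) for x in X])
accuracy = float(np.mean(preds == y))
```

With so few input dimensions, a model like this stays directly inspectable: each decision can be traced back to which feature values pulled a case towards a class centroid.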
Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of
moving objects in sequences of images. Optimising the interpretation of dynamic
biological systems requires accurate and precise motion tracking as well as
efficient representations of high-dimensional motion trajectories so that these
can be used for prediction tasks. Here we use image sequences of the heart,
acquired using cardiac magnetic resonance imaging, to create time-resolved
three-dimensional segmentations using a fully convolutional network trained on
anatomical shape priors. This dense motion model formed the input to a
supervised denoising autoencoder (4Dsurvival), which is a hybrid network
consisting of an autoencoder that learns a task-specific latent code
representation trained on observed outcome data, yielding a latent
representation optimised for survival prediction. To handle right-censored
survival outcomes, our network used a Cox partial likelihood loss function. In
a study of 302 patients, the predictive accuracy (quantified by Harrell's
C-index) was significantly higher (p < 0.0001) for our model, C = 0.73 (95%
CI: 0.68-0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53-0.65).
This work demonstrates how a complex computer vision task using
high-dimensional medical image data can efficiently predict human survival.
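The Cox partial likelihood loss used to handle right-censored outcomes has a compact closed form: for each observed event, the model's risk score is compared against the log-sum-exp of risk scores over everyone still at risk at that time. A minimal NumPy sketch (ignoring tied event times, and not the actual 4Dsurvival implementation) might look like:

```python
import numpy as np

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood for right-censored outcomes.

    risk  : (n,) predicted log-risk scores from the network
    time  : (n,) follow-up times
    event : (n,) 1 if the event was observed, 0 if right-censored
    """
    order = np.argsort(-time)                  # sort subjects by descending time
    risk, event = risk[order], event[order]
    # Cumulative log-sum-exp gives, for each subject, the log of the total
    # exp(risk) over everyone still at risk (time >= this subject's time)
    log_risk_set = np.logaddexp.accumulate(risk)
    # Only subjects with an observed event contribute terms to the likelihood
    return -np.sum((risk - log_risk_set) * event)
```

Because censored subjects appear only inside the risk sets (never as event terms), the loss uses all follow-up information without needing to know when, or whether, a censored patient eventually died.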
Improving the Accuracy of CT-derived Attenuation Correction in Respiratory-Gated PET/CT Imaging
The effect of respiratory motion on attenuation correction in fludeoxyglucose (18F) positron emission tomography (FDG-PET) was investigated. Improvements to the accuracy of computed tomography (CT) derived attenuation correction were obtained through the alignment of the attenuation map to each emission image in a respiratory-gated PET scan. Attenuation misalignment leads to artefacts in the reconstructed PET image, and several methods were devised for evaluating the attenuation inaccuracies this causes. These methods of evaluation were extended to finding the frame in the respiratory-gated PET scan which best matched the CT. This frame was then used as a reference frame in mono-modality compensation for misalignment. Attenuation correction was found to affect the quantification of tumour volumes; thus a regional analysis was used to evaluate the impact of mismatch and the benefits of compensating for misalignment. Deformable image registration was used to compensate for misalignment; however, there were inaccuracies caused by the poor signal-to-noise ratio (SNR) of PET images. Two models were developed that were robust to a poor SNR, allowing deformation to be estimated from very noisy images. Firstly, a cross-population model was developed by statistically analysing the respiratory motion in 10 4DCT scans. Secondly, a 1D model of respiration was developed based on the physiological function of respiration. The 1D approach correctly modelled the expansion and contraction of the lungs and the differences in the compressibility of the lungs and surrounding tissues. Several additional models were considered but were ruled out based on their poor goodness of fit to 4DCT scans. The approaches to evaluating the developed models were also used to assist with optimising for the most accurate attenuation correction. It was found that multimodality registration of the CT image to the PET image was the most accurate approach to compensating for attenuation correction mismatch.
Mono-modality image registration was found to be the least accurate approach; however, incorporating a motion model improved the accuracy of image registration. The significance of these findings is twofold: firstly, motion models are required to improve the accuracy of compensating for attenuation correction mismatch; secondly, a validation method was found for comparing approaches to compensating for attenuation mismatch.
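The step of selecting the gated PET frame that best matches the CT reduces to scoring each frame against the CT with some similarity measure and taking the best one. A minimal NumPy sketch, using normalised cross-correlation as a hypothetical stand-in for the thesis's actual evaluation measures, might look like:

```python
import numpy as np

def best_matched_frame(ct_img, pet_frames):
    """Pick the respiratory-gated PET frame best matching the CT image.

    Uses normalised cross-correlation (NCC) as an illustrative similarity
    metric; the work described above evaluates several such measures.
    Returns the index of the best frame and all per-frame scores.
    """
    def ncc(a, b):
        # Standardise both images, then correlate; epsilon guards flat images
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    scores = [ncc(ct_img, frame) for frame in pet_frames]
    return int(np.argmax(scores)), scores
```

The chosen frame can then serve as the mono-modality reference described above, against which the remaining gated frames are registered.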