Deep learning cardiac motion analysis for human survival prediction
Motion analysis is used in computer vision to understand the behaviour of
moving objects in sequences of images. Optimising the interpretation of dynamic
biological systems requires accurate and precise motion tracking as well as
efficient representations of high-dimensional motion trajectories so that these
can be used for prediction tasks. Here we use image sequences of the heart,
acquired using cardiac magnetic resonance imaging, to create time-resolved
three-dimensional segmentations using a fully convolutional network trained on
anatomical shape priors. This dense motion model formed the input to a
supervised denoising autoencoder (4Dsurvival), which is a hybrid network
consisting of an autoencoder that learns a task-specific latent code
representation trained on observed outcome data, yielding a latent
representation optimised for survival prediction. To handle right-censored
survival outcomes, our network used a Cox partial likelihood loss function. In
a study of 302 patients, the predictive accuracy (quantified by Harrell's
C-index) was significantly higher (p < .0001) for our model, C = 0.73 (95% CI:
0.68-0.78), than for the human benchmark of C = 0.59 (95% CI: 0.53-0.65). This
work demonstrates how a complex computer vision task using high-dimensional
medical image data can efficiently predict human survival.
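The Cox partial likelihood loss used to handle right-censored outcomes can be sketched in a few lines; this is a minimal NumPy version (Breslow-style risk sets, no tie correction), not the authors' 4Dsurvival implementation, and all names are ours:

```python
import numpy as np

def cox_ph_loss(risk_scores, times, events):
    """Negative Cox partial log-likelihood (Breslow-style, no tie handling).

    risk_scores : predicted log-hazard per subject (the network output)
    times       : observed follow-up times
    events      : 1 if the event was observed, 0 if right-censored
    """
    order = np.argsort(-np.asarray(times, float))   # sort by descending time
    scores = np.asarray(risk_scores, float)[order]
    evt = np.asarray(events, float)[order]
    # log-sum-exp of scores over each subject's risk set (all with time >= t_i)
    log_risk = np.log(np.cumsum(np.exp(scores)))
    # only uncensored subjects contribute terms to the partial likelihood
    return -np.sum((scores - log_risk) * evt) / max(evt.sum(), 1.0)
```

In a deep-learning framework the same expression would be written with differentiable tensor ops so it can serve directly as a training loss.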
Automatic segmentation of the left ventricle cavity and myocardium in MRI data
A novel approach has been developed for the automatic segmentation of the epicardium and endocardium boundaries of the left ventricle (LV) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instant in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the LV cavity. From cardiac anatomy, the closest pool of blood to the LV cavity is the right ventricle cavity. The wall between these two blood pools (the interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriate robust segments of the epicardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-sets method.
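The filter-then-cluster stage described above could look roughly like the following minimal intensity k-means sketch. The diffusion filtering step is omitted, and the function name, parameters, and synthetic setup are ours, not the paper's:

```python
import numpy as np

def cluster_slice(img, k=3, iters=20, seed=0):
    """Unsupervised k-means intensity clustering of one MR slice, a
    stand-in for the paper's clustering stage that labels the blood pool."""
    rng = np.random.default_rng(seed)
    vals = img.reshape(-1).astype(float)
    centers = rng.choice(vals, k, replace=False)    # init from pixel values
    for _ in range(iters):
        # assign each pixel to its nearest intensity center
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        # move each center to the mean intensity of its cluster
        for c in range(k):
            if np.any(labels == c):
                centers[c] = vals[labels == c].mean()
    return labels.reshape(img.shape), centers
```

On a real short-axis slice the brightest cluster would then be checked against anatomical expectations (location, size) to identify the LV cavity.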
Template-Cut: A Pattern-Based Segmentation Paradigm
We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D-images where the object's textures and backgrounds are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
Comment: 8 pages, 6 figures, 3 tables, 6 equations, 51 references
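The non-uniform, non-equidistant node sampling can be illustrated for the simplest template, a circle: nodes are placed in layers along rays through the template contour, so they concentrate where the cut is expected to lie. All parameters below are hypothetical; the paper's templates are arbitrary shapes:

```python
import numpy as np

def template_nodes(center, radius, n_rays=32, n_layers=4, spread=0.5):
    """Sample graph nodes along rays normal to a circular template contour.

    Nodes are confined to a band of width 2*spread*radius around the
    template, rather than spread uniformly over the whole image.
    """
    cx, cy = center
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    # radial offsets around the template surface (where the cut lives)
    offsets = radius * spread * np.linspace(-1.0, 1.0, n_layers)
    nodes = [(cx + (radius + d) * np.cos(a), cy + (radius + d) * np.sin(a))
             for a in angles for d in offsets]
    return np.array(nodes)
```

A graph cut over edges between consecutive layers of such nodes then separates object from background while implicitly preferring shapes close to the template.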
Self Super-Resolution for Magnetic Resonance Images using Deep Networks
High resolution magnetic resonance (MR) imaging (MRI) is desirable in many
clinical applications; however, there is a trade-off between resolution, speed
of acquisition, and noise. It is common for MR images to have worse
through-plane resolution (slice thickness) than in-plane resolution. In these
images, high-frequency information in the through-plane direction is not
acquired and cannot be recovered through interpolation. To address this issue,
super-resolution methods have been developed to enhance spatial resolution. As
an ill-posed problem, state-of-the-art super-resolution methods rely on the
presence of external/training atlases to learn the transform from low
resolution (LR) images to high resolution (HR) images. For several reasons,
such HR atlas images are often not available for MRI sequences. This paper
presents a self super-resolution (SSR) algorithm, which does not use any
external atlas images, yet can still recover HR images relying only on
the acquired LR image. We use a blurred version of the input image to create
training data for a state-of-the-art super-resolution deep network. The trained
network is applied to the original input image to estimate the HR image. Our
SSR result shows a significant improvement on through-plane resolution compared
to competing SSR methods.
Comment: Accepted by IEEE International Symposium on Biomedical Imaging (ISBI) 201
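The self-training trick, blurring the input to manufacture paired LR/HR data, can be sketched as follows. A Gaussian blur is used here as a stand-in for the true slice profile, and the axis and sigma choices are our assumptions, not the paper's:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def make_ssr_pairs(volume, slice_axis=2, sigma=1.5):
    """Create an (LR, HR) training pair from the input volume itself.

    The acquired volume is treated as HR along its in-plane axes; blurring
    one in-plane axis mimics the poor through-plane resolution, giving a
    paired LR version to train a super-resolution network on.
    """
    hr = volume.astype(float)
    in_plane_axis = 0 if slice_axis != 0 else 1   # any axis except slice_axis
    lr = gaussian_filter1d(hr, sigma=sigma, axis=in_plane_axis)
    return lr, hr
```

A network trained on such pairs is then applied along the true through-plane direction of the original volume to estimate the HR image.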