Deep Autoencoding Models for Unsupervised Anomaly Segmentation in Brain MR Images
Reliably modeling normality and differentiating abnormal appearances from
normal cases is a very appealing approach for detecting pathologies in medical
images. A plethora of such unsupervised anomaly detection approaches has been
proposed in the medical domain, based on statistical methods, content-based
retrieval, clustering and, more recently, deep learning. Previous approaches
towards deep unsupervised anomaly detection model patches of normal anatomy
with variants of Autoencoders or GANs, and detect anomalies either as outliers
in the learned feature space or from large reconstruction errors. In contrast
to these patch-based approaches, we show that deep spatial autoencoding models
can be efficiently used to capture normal anatomical variability of entire 2D
brain MR images. A variety of experiments on real MR data containing MS lesions
corroborates our hypothesis that we can detect and even delineate anomalies in
brain MR images by simply comparing input images to their reconstruction.
Results show that constraints on the latent space and adversarial training can
further improve the segmentation performance over standard deep representation
learning.
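The core pipeline this abstract describes — detecting and delineating anomalies by comparing an input image to its reconstruction — can be sketched with a linear stand-in for the autoencoder (a PCA projection). All data below is synthetic, and the threshold is an illustrative choice, not the paper's:

```python
import numpy as np

# Sketch of reconstruction-based anomaly detection on hypothetical toy data.
# A PCA projection stands in for the autoencoder bottleneck: normal samples
# are well captured by the learned subspace, so large residuals flag anomalies.

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 64))   # 200 flattened "normal" images
mean = normal.mean(axis=0)
basis = np.linalg.svd(normal - mean, full_matrices=False)[2][:8]  # 8-dim latent

def reconstruct(x):
    """Project onto the learned subspace and back (linear 'autoencoder')."""
    return (x - mean) @ basis.T @ basis + mean

def anomaly_map(x):
    """Pixelwise absolute reconstruction error, as in the abstract's pipeline."""
    return np.abs(x - reconstruct(x))

test_img = normal[0].copy()
test_img[10:14] += 6.0                          # inject a synthetic "lesion"
amap = anomaly_map(test_img)
mask = amap > amap.mean() + 2 * amap.std()      # simple threshold for delineation
```

A real autoencoder or GAN replaces `reconstruct` with a learned nonlinear decoder, but the comparison-to-reconstruction step is unchanged.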
Binary segmentation of medical images using implicit spline representations and deep learning
We propose a novel approach to image segmentation based on combining implicit
spline representations with deep convolutional neural networks. This is done by
predicting the control points of a bivariate spline function whose zero-set
represents the segmentation boundary. We adapt several existing neural network
architectures and design novel loss functions that are tailored towards
providing implicit spline curve approximations. The method is evaluated on a
congenital heart disease computed tomography medical imaging dataset.
Experiments are carried out by measuring performance in various standard
metrics for different networks and loss functions. We determine that splines of
bidegree with coefficient resolution performed optimally
for resolution CT images. For our best network, we achieve an
average volumetric test Dice score of almost 92%, which reaches the state of
the art for this congenital heart disease dataset.
Comment: 17 pages, 5 figures
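The key representational idea above — a segmentation boundary given implicitly as the zero-set of a smooth bivariate function defined by a coarse grid of coefficients — can be illustrated as follows. Bilinear interpolation stands in for the paper's tensor-product spline basis, and the coefficients are hand-set rather than CNN-predicted:

```python
import numpy as np

# Toy illustration of an implicit segmentation boundary as the zero level set
# of a smooth bivariate function. Bilinear interpolation of a coarse
# coefficient grid is a simplified stand-in for a bivariate spline.

def bilinear_upsample(coeffs, out_shape):
    """Bilinearly interpolate a coarse coefficient grid to image resolution."""
    h, w = coeffs.shape
    ys = np.linspace(0, h - 1, out_shape[0])
    xs = np.linspace(0, w - 1, out_shape[1])
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    fy = (ys - y0)[:, None]
    fx = (xs - x0)[None, :]
    top = coeffs[y0][:, x0] * (1 - fx) + coeffs[y0][:, x0 + 1] * fx
    bot = coeffs[y0 + 1][:, x0] * (1 - fx) + coeffs[y0 + 1][:, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

# Coefficients negative inside the object, positive outside; in the paper a
# CNN would predict these values from the CT image.
coeffs = np.ones((5, 5))
coeffs[1:4, 1:4] = -1.0
field = bilinear_upsample(coeffs, (64, 64))
mask = field < 0          # the zero-set of `field` is the segmentation boundary
```

The appeal of the implicit form is that a small coefficient grid yields a smooth, resolution-independent boundary; the binary mask is recovered by a sign test.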
Multi-object segmentation using coupled nonparametric shape and relative pose priors
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multivariate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects, and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
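The nonparametric ingredient behind the coupled priors — a multivariate kernel density estimate built from training samples — can be sketched in a few lines. The 2-D "shape descriptors" and the bandwidth below are made up for illustration; the paper estimates densities over joint shape and pose variables:

```python
import numpy as np

# Minimal multivariate Gaussian kernel density estimate: average an isotropic
# Gaussian kernel centered at each training sample. High density near the
# training data acts as a shape/pose prior; low density penalizes unlikely
# configurations.

def gaussian_kde(train, query, bandwidth=0.5):
    """Evaluate the KDE built from `train` at each row of `query`."""
    d = train.shape[1]
    diff = query[:, None, :] - train[None, :, :]          # (Q, N, d)
    sq = (diff ** 2).sum(axis=-1) / (2 * bandwidth ** 2)
    norm = (2 * np.pi * bandwidth ** 2) ** (d / 2)
    return np.exp(-sq).mean(axis=1) / norm

rng = np.random.default_rng(1)
train_shapes = rng.normal(0.0, 1.0, size=(100, 2))        # hypothetical descriptors
queries = np.array([[0.0, 0.0],                           # near the training data
                    [5.0, 5.0]])                          # far from it
density = gaussian_kde(train_shapes, queries)
```

In the segmentation energy, the negative log of such a density would enter as a prior term that the active-contour optimization trades off against the image data.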
Towards segmentation and spatial alignment of the human embryonic brain using deep learning for atlas-based registration
We propose an unsupervised deep learning method for atlas based registration
to achieve segmentation and spatial alignment of the embryonic brain in a
single framework. Our approach consists of two sequential networks with a
specifically designed loss function to address the challenges in 3D first
trimester ultrasound. The first part learns the affine transformation and the
second part learns the voxelwise nonrigid deformation between the target image
and the atlas. We trained this network end-to-end and validated it against a
ground truth on synthetic datasets designed to resemble the challenges present
in 3D first trimester ultrasound. The method was tested on a dataset of human
embryonic ultrasound volumes acquired at 9 weeks gestational age, which showed
alignment of the brain in some cases and gave insight into open challenges for
the proposed method. We conclude that our method is a promising approach
towards fully automated spatial alignment and segmentation of embryonic brains
in 3D ultrasound.
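The first stage described above — affine alignment of a target image to an atlas frame — can be sketched in 2-D with nearest-neighbor resampling. In the paper, a network predicts the affine parameters; here they are fixed by hand and the "embryonic brain" is a toy square:

```python
import numpy as np

# Sketch of the affine-alignment step: resample a 2-D image so that, for each
# output pixel x, we read the input at A @ x + t (nearest-neighbor sampling).
# The second, nonrigid stage of the paper would then refine this voxelwise.

def affine_warp(image, matrix, offset):
    """Warp `image` by sampling at matrix @ coords + offset for every pixel."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()]).astype(float)   # (2, h*w)
    src = np.rint(matrix @ coords + offset[:, None]).astype(int)
    valid = (src[0] >= 0) & (src[0] < h) & (src[1] >= 0) & (src[1] < w)
    out = np.zeros(h * w)
    out[valid] = image[src[0, valid], src[1, valid]]
    return out.reshape(h, w)

image = np.zeros((32, 32))
image[8:16, 8:16] = 1.0                                   # toy "brain" structure
# Identity rotation plus a translation: the structure shifts by +4 pixels.
shifted = affine_warp(image, np.eye(2), np.array([-4.0, -4.0]))
```

End-to-end training, as in the paper, would make `matrix` and `offset` the outputs of the first network and backpropagate a similarity loss through a differentiable (e.g. trilinear) sampler rather than nearest-neighbor lookup.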
Template-Cut: A Pattern-Based Segmentation Paradigm
We present a scale-invariant, template-based segmentation paradigm that sets
up a graph and performs a graph cut to separate an object from the background.
Typically graph-based schemes distribute the nodes of the graph uniformly and
equidistantly on the image, and use a regularizer to bias the cut towards a
particular shape. The strategy of uniform and equidistant nodes does not allow
the cut to prefer more complex structures, especially when areas of the object
are indistinguishable from the background. We propose a solution by introducing
the concept of a "template shape" of the target object in which the nodes are
sampled non-uniformly and non-equidistantly on the image. We evaluate it on
2D images where the object's texture and background are similar, and large
areas of the object have the same gray level appearance as the background. We
also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning
purposes.
Comment: 8 pages, 6 figures, 3 tables, 6 equations, 51 references
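The graph-cut machinery that Template-Cut builds on can be illustrated with a minimal s-t min-cut on a tiny pixel graph. Note this sketch does NOT reproduce the paper's actual contribution (non-uniform, template-shaped node sampling); the nodes here are just a row of four pixels with made-up capacities:

```python
from collections import deque

# Tiny binary segmentation via s-t min-cut (Edmonds-Karp max-flow).
# Source = object label, sink = background label. Unary capacities encode
# intensity affinity; pairwise capacities encode smoothness between neighbors.

def min_cut(n, capacity, source, sink):
    """Return the set of nodes on the source side of a minimum s-t cut."""
    flow = [[0] * n for _ in range(n)]
    while True:
        parent = {source: None}                 # BFS for a shortest augmenting path
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in range(n):
                if v not in parent and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        path, v = [], sink                      # recover the path, then augment
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
    seen, queue = {source}, deque([source])     # residual reachability = object side
    while queue:
        u = queue.popleft()
        for v in range(n):
            if v not in seen and capacity[u][v] - flow[u][v] > 0:
                seen.add(v)
                queue.append(v)
    return seen

# Nodes: 0 = source, 1..4 = pixels, 5 = sink. Bright pixels lean "object".
pixels = [0.9, 0.8, 0.2, 0.1]
cap = [[0] * 6 for _ in range(6)]
for i, p in enumerate(pixels, start=1):
    cap[0][i] = int(10 * p)                     # source -> pixel: object affinity
    cap[i][5] = int(10 * (1 - p))               # pixel -> sink: background affinity
for i in range(1, 4):
    cap[i][i + 1] = cap[i + 1][i] = 1           # smoothness between neighbors
object_side = min_cut(6, cap, 0, 5)
```

Template-Cut's change is in how the pixel nodes are placed: instead of a uniform grid, they are sampled non-uniformly around a template shape, so the minimum cut is biased toward boundaries near that shape.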