Anatomical Priors in Convolutional Networks for Unsupervised Biomedical Segmentation
We consider the problem of segmenting a biomedical image into anatomical
regions of interest. We specifically address the frequent scenario where we
have no paired training data that contains images and their manual
segmentations. Instead, we employ unpaired segmentations to build an
anatomical prior. Critically, these segmentations can be derived from imaging
data from a different dataset and imaging modality than the current task. We
introduce a generative probabilistic model that employs the learned prior
through a convolutional neural network to compute segmentations in an
unsupervised setting. We conducted an empirical analysis of the proposed
approach in the context of structural brain MRI segmentation, using a
multi-study dataset of more than 14,000 scans. Our results show that an
anatomical prior can enable fast unsupervised segmentation, which is typically
not possible using standard convolutional networks. The integration of
anatomical priors can facilitate CNN-based anatomical segmentation in a range
of novel clinical problems, where few or no annotations are available and thus
standard networks are not trainable. The code is freely available at
http://github.com/adalca/neuron
Comment: Presented at CVPR 2018. IEEE CVPR proceedings, pp. 9290-9299.
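A minimal PyTorch sketch of the idea (illustrative only, not the authors' implementation): a CNN predicts per-voxel label posteriors, and the loss couples a per-label Gaussian intensity likelihood with a KL term against a spatial prior built from the unpaired segmentations. The names `prior_prob`, `mu`, and `log_sigma`, and the simple per-label intensity model, are assumptions.

```python
# Sketch: unsupervised segmentation guided by a spatial anatomical prior.
# Assumes `prior_prob` is a per-voxel label-probability volume (B or 1, L, D, H, W)
# built offline from unpaired segmentations (hypothetical input).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Tiny CNN mapping an image (B, 1, D, H, W) to per-voxel label logits."""
    def __init__(self, n_labels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_labels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def prior_guided_loss(logits, image, prior_prob, mu, log_sigma):
    """Per-label Gaussian intensity likelihood plus KL(posterior || prior)."""
    post = F.softmax(logits, dim=1)                  # (B, L, D, H, W)
    # mu, log_sigma: (L,) hypothetical per-label intensity parameters.
    mu = mu.view(1, -1, 1, 1, 1)
    log_sigma = log_sigma.view(1, -1, 1, 1, 1)
    # Gaussian log-likelihood of the image under each label's intensity model.
    loglik = -0.5 * ((image - mu) / log_sigma.exp()) ** 2 - log_sigma
    recon = -(post * loglik).sum(dim=1).mean()
    # KL term keeps predicted segmentations anatomically plausible.
    kl = (post * (post.clamp_min(1e-8).log()
                  - prior_prob.clamp_min(1e-8).log())).sum(dim=1).mean()
    return recon + kl
```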
An Unsupervised Learning Model for Deformable Medical Image Registration
We present a fast learning-based algorithm for deformable, pairwise 3D
medical image registration. Current registration methods optimize an objective
function independently for each pair of images, which can be time-consuming for
large datasets. We define registration as a parametric function, and optimize its
parameters given a set of images from a collection of interest. Given a new
pair of scans, we can quickly compute a registration field by directly
evaluating the function using the learned parameters. We model this function
using a convolutional neural network (CNN), and use a spatial transform layer
to reconstruct one image from another while imposing smoothness constraints on
the registration field. The proposed method does not require supervised
information such as ground truth registration fields or anatomical landmarks.
We demonstrate registration accuracy comparable to state-of-the-art 3D image
registration, while operating orders of magnitude faster in practice. Our
method promises to significantly speed up medical image analysis and processing
pipelines, while facilitating novel directions in learning-based registration
and its applications. Our code is available at
https://github.com/balakg/voxelmorph
Comment: 9 pages, in CVPR 2018.
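A minimal 2D PyTorch sketch of the training objective (the paper works in 3D with a larger U-Net-style network; the tiny CNN and the normalized-coordinate flow convention here are simplifications): a CNN predicts a displacement field, a spatial transform layer warps the moving image, and the loss combines image similarity with a smoothness penalty on the field.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Maps a (moving, fixed) image pair to a dense displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),  # 2 channels: (dx, dy)
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(image, flow):
    """Spatial transform layer: resample `image` at identity grid + flow."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=image.device),
        torch.linspace(-1, 1, W, device=image.device),
        indexing="ij",
    )
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Assumes flow is expressed in normalized [-1, 1] coordinates.
    grid = grid + flow.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

def registration_loss(moved, fixed, flow, lam=0.01):
    """Image similarity (MSE) plus a smoothness penalty on flow gradients."""
    sim = F.mse_loss(moved, fixed)
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    return sim + lam * ((dx ** 2).mean() + (dy ** 2).mean())
```

At test time, a single forward pass through the network yields the registration field, which is what makes the method orders of magnitude faster than per-pair optimization.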
Gaussian Process Prior Variational Autoencoders
Variational autoencoders (VAEs) are a powerful and widely-used class of models
to learn complex data distributions in an unsupervised fashion. One important
limitation of VAEs is the prior assumption that latent sample representations
are independent and identically distributed. However, for many important
datasets, such as time-series of images, this assumption is too strong:
accounting for covariances between samples, such as those in time, can yield
a more appropriate model specification and improve performance in downstream
tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior
Variational Autoencoder (GPPVAE), to specifically address this issue. The
GPPVAE aims to combine the power of VAEs with the ability to model correlations
afforded by GP priors. To achieve efficient inference in this new class of
models, we leverage structure in the covariance matrix, and introduce a new
stochastic backpropagation strategy that allows for computing stochastic
gradients in a distributed and low-memory fashion. We show that our method
outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two
image data applications.
Comment: Accepted at the 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada.
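A minimal PyTorch sketch of the prior itself, under assumed shapes and names (the paper's contribution also includes efficient, low-memory distributed inference, which this dense O(N^3) version does not reproduce): each latent dimension is modeled as a draw from a GP over auxiliary inputs such as time stamps, replacing the i.i.d. standard-normal prior.

```python
import torch

def rbf_kernel(t, lengthscale=1.0, var=1.0, jitter=1e-4):
    """Squared-exponential kernel matrix over 1-D inputs t: (N,) -> (N, N)."""
    d2 = (t[:, None] - t[None, :]) ** 2
    K = var * torch.exp(-0.5 * d2 / lengthscale ** 2)
    return K + jitter * torch.eye(len(t))  # jitter for numerical stability

def gp_prior_logp(z, t):
    """log N(z_d | 0, K) summed over latent dimensions.
    z: (N, D) latents for N samples (e.g., encoder means), t: (N,) inputs."""
    K = rbf_kernel(t)
    dist = torch.distributions.MultivariateNormal(
        torch.zeros(len(t)), covariance_matrix=K
    )
    # Treat each of the D latent dimensions as an independent GP draw over N samples.
    return dist.log_prob(z.T).sum()
```

This term would replace the usual KL-to-N(0, I) contribution in the VAE objective; the correlations it induces across samples are what a standard VAE's i.i.d. prior cannot express.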
Hyper-Convolution Networks for Biomedical Image Segmentation
The convolution operation is a central building block of neural network
architectures widely used in computer vision. The size of the convolution
kernels determines both the expressiveness of convolutional neural networks
(CNNs) and the number of learnable parameters. Increasing the network
capacity to capture rich pixel relationships requires increasing the number of
learnable parameters, often leading to overfitting and/or lack of robustness.
In this paper, we propose a novel building block, the
hyper-convolution, which implicitly represents the convolution kernel as a
function of kernel coordinates. Hyper-convolutions enable decoupling the kernel
size, and hence the receptive field, from the number of learnable parameters.
In our experiments, focused on challenging biomedical image segmentation tasks,
we demonstrate that replacing regular convolutions with hyper-convolutions
leads to more efficient architectures that achieve improved accuracy. Our
analysis also shows that learned hyper-convolutions are naturally regularized,
which can offer better generalization performance. We believe that
hyper-convolutions can be a powerful building block in future neural network
architectures for computer vision tasks. We provide all of our code here:
https://github.com/tym002/Hyper-Convolution
Comment: WACV 2022.
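A minimal PyTorch sketch of a hyper-convolution layer (illustrative, not the released implementation; the MLP size and coordinate convention are assumptions): a small network maps kernel coordinates to kernel weights, so the parameter count depends on the MLP rather than on the kernel size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    """Convolution whose kernel is generated from kernel coordinates."""
    def __init__(self, in_ch, out_ch, kernel_size, hidden=32):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        # MLP: (x, y) kernel coordinate -> in_ch * out_ch weights at that tap.
        self.weight_net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, in_ch * out_ch),
        )
        # Fixed coordinate grid in [-1, 1]^2, one row per kernel tap.
        lin = torch.linspace(-1, 1, kernel_size)
        yy, xx = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("coords", torch.stack([xx, yy], dim=-1).reshape(-1, 2))

    def forward(self, x):
        w = self.weight_net(self.coords)                 # (k*k, in_ch*out_ch)
        w = w.reshape(self.k, self.k, self.in_ch, self.out_ch)
        w = w.permute(3, 2, 0, 1).contiguous()           # (out_ch, in_ch, k, k)
        return F.conv2d(x, w, padding=self.k // 2)
```

Growing the kernel from 3x3 to, say, 7x7 only changes the coordinate grid; `weight_net` keeps the same parameters, which is the decoupling of receptive field from parameter count that the abstract describes.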
- …