Quicksilver: Fast Predictive Image Registration - a Deep Learning Approach
This paper introduces Quicksilver, a fast deformable image registration
method. Quicksilver registration for image-pairs works by patch-wise prediction
of a deformation model based directly on image appearance. A deep
encoder-decoder network is used as the prediction model. While the prediction
strategy is general, we focus on predictions for the Large Deformation
Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the
momentum-parameterization of LDDMM, which facilitates a patch-wise prediction
strategy while maintaining the theoretical properties of LDDMM, such as
guaranteed diffeomorphic mappings for sufficiently strong regularization. We
also provide a probabilistic version of our prediction network which can be
sampled during the testing time to calculate uncertainties in the predicted
deformations. Finally, we introduce a new correction network which greatly
increases the prediction accuracy of an already existing prediction network. We
show experimental results for uni-modal atlas-to-image as well as uni- / multi-
modal image-to-image registrations. These experiments demonstrate that our
method accurately predicts registrations obtained by numerical optimization, is
very fast, achieves state-of-the-art registration results on four standard
validation datasets, and can jointly learn an image similarity measure.
Quicksilver is freely available as open-source software.
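The patch-wise prediction strategy described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `predictor` below is a hypothetical stand-in for the deep encoder-decoder network, and the patch size, stride, and test volumes are arbitrary toy values. What the sketch does show is the general idea of predicting a 3-channel momentum field patch by patch and averaging the predictions in overlapping regions:

```python
import numpy as np

def extract_patches(vol, size, stride):
    """Yield (corner, patch) pairs tiling a 3D volume with overlap.
    Assumes (dim - size) is divisible by stride so the tiling covers the volume."""
    ranges = [range(0, s - size + 1, stride) for s in vol.shape]
    for i in ranges[0]:
        for j in ranges[1]:
            for k in ranges[2]:
                yield (i, j, k), vol[i:i + size, j:j + size, k:k + size]

def predict_momentum(moving, target, predictor, size=15, stride=7):
    """Patch-wise prediction of a 3-channel momentum field.
    Overlapping predictions are averaged via an accumulated weight volume."""
    assert moving.shape == target.shape
    out = np.zeros((3,) + moving.shape)
    weight = np.zeros(moving.shape)
    for (i, j, k), m_patch in extract_patches(moving, size, stride):
        t_patch = target[i:i + size, j:j + size, k:k + size]
        pred = predictor(m_patch, t_patch)  # expected shape (3, size, size, size)
        out[:, i:i + size, j:j + size, k:k + size] += pred
        weight[i:i + size, j:j + size, k:k + size] += 1.0
    return out / np.maximum(weight, 1.0)

# Hypothetical stand-in predictor (NOT the paper's network): momentum driven
# by the per-voxel intensity difference, replicated over three channels.
toy = lambda m, t: np.broadcast_to(t - m, (3,) + m.shape)
mom = predict_momentum(np.zeros((29, 29, 29)), np.ones((29, 29, 29)), toy)
```

In the real method, the per-patch network output would replace `toy`, and the stitched momentum field would then be integrated under the LDDMM model to obtain the deformation.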
Self-supervised Multi-level Face Model Learning for Monocular Reconstruction at over 250 Hz
The reconstruction of dense 3D models of face geometry and appearance from a
single image is highly challenging and ill-posed. To constrain the problem,
many approaches rely on strong priors, such as parametric face models learned
from limited 3D scan data. However, prior models restrict generalization of the
true diversity in facial geometry, skin reflectance and illumination. To
alleviate this problem, we present the first approach that jointly learns 1) a
regressor for face shape, expression, reflectance and illumination on the basis
of 2) a concurrently learned parametric face model. Our multi-level face model
combines the advantage of 3D Morphable Models for regularization with the
out-of-space generalization of a learned corrective space. We train end-to-end
on in-the-wild images without dense annotations by fusing a convolutional
encoder with a differentiable expert-designed renderer and a self-supervised
training loss, both defined at multiple detail levels. Our approach compares
favorably to the state-of-the-art in terms of reconstruction quality, better
generalizes to real world faces, and runs at over 250 Hz.Comment: CVPR 2018 (Oral). Project webpage:
https://gvv.mpi-inf.mpg.de/projects/FML
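The multi-level idea in this abstract — a regularized parametric base model plus a learned corrective space — can be sketched in a few lines. All sizes and bases below are toy placeholders (the real model uses a 3D Morphable Model and a network-learned corrective space); the point is only the structure of the reconstruction and of a loss that keeps the corrective term small so the parametric model carries the coarse explanation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_verts, k_base, k_corr = 100, 5, 8  # toy sizes, not from the paper

mean_shape = rng.normal(size=3 * n_verts)
base_basis = rng.normal(size=(3 * n_verts, k_base))  # stand-in for a 3DMM basis
corr_basis = rng.normal(size=(3 * n_verts, k_corr))  # stand-in corrective space

def reconstruct(alpha, beta):
    """Multi-level shape: coarse parametric term plus a corrective refinement."""
    return mean_shape + base_basis @ alpha + corr_basis @ beta

def loss(alpha, beta, observed, lam=0.1):
    """Data term plus a regularizer penalizing the corrective coefficients,
    in the spirit of using the parametric model as a prior."""
    residual = reconstruct(alpha, beta) - observed
    return residual @ residual + lam * (beta @ beta)
```

In the actual approach both the regressor producing `(alpha, beta)` and the corrective basis are trained jointly, with the data term evaluated photometrically through a differentiable renderer rather than on vertex positions as here.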
Stratified decision forests for accurate anatomical landmark localization in cardiac images
Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning-based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient-specific models, we propose a novel stratification-based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification-based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.
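The stratification idea — partition training data by a pose/size/shape descriptor and fit a specialized model per stratum, then route test samples to their stratum — can be illustrated minimally. The quantile-based stratification and the per-stratum linear regressors below are hypothetical simplifications standing in for the paper's decision forests; only the train-per-stratum / route-at-test-time structure is the point:

```python
import numpy as np

def stratified_fit(X, y, descriptor, n_strata=3):
    """Split training data into strata by a scalar descriptor and fit a
    simple per-stratum linear least-squares model (forest stand-in)."""
    edges = np.quantile(descriptor, np.linspace(0.0, 1.0, n_strata + 1))
    edges[-1] += 1e-9  # make the last stratum include the maximum
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (descriptor >= lo) & (descriptor < hi)
        Xb = np.c_[X[mask], np.ones(mask.sum())]  # append bias column
        w, *_ = np.linalg.lstsq(Xb, y[mask], rcond=None)
        models.append(w)
    return edges, models

def stratified_predict(x, d, edges, models):
    """Route a test sample to its stratum via its descriptor, then predict."""
    s = min(np.searchsorted(edges, d, side="right") - 1, len(models) - 1)
    return np.append(x, 1.0) @ models[s]

# Toy demo with three strata, each following a different linear relation.
d = np.r_[np.zeros(20), np.full(20, 0.5), np.ones(20)]
X = np.arange(60, dtype=float).reshape(-1, 1)
y = np.where(d == 0, 2 * X[:, 0], np.where(d == 0.5, 3 * X[:, 0], -X[:, 0]))
edges, models = stratified_fit(X, y, d)
pred_low = stratified_predict(np.array([10.0]), 0.0, edges, models)
pred_high = stratified_predict(np.array([10.0]), 1.0, edges, models)
```

A single global model could not fit all three relations at once; the stratified version recovers each exactly, which is the generalization benefit the abstract claims for heterogeneous datasets.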
SAME++: A Self-supervised Anatomical eMbeddings Enhanced medical image registration framework using stable sampling and regularized transformation
Image registration is a fundamental medical image analysis task. Ideally,
registration should focus on aligning semantically corresponding voxels, i.e.,
the same anatomical locations. However, existing methods often optimize
similarity measures computed directly on intensities or on hand-crafted
features, which lack anatomical semantic information. These similarity measures
may lead to sub-optimal solutions in the presence of large deformations, complex
anatomical differences, or cross-modality imagery. In this work, we introduce a fast
and accurate method for unsupervised 3D medical image registration building on
top of a Self-supervised Anatomical eMbedding (SAM) algorithm, which is capable
of computing dense anatomical correspondences between two images at the voxel
level. We name our approach SAM-Enhanced registration (SAME++), which
decomposes image registration into four steps: affine transformation, coarse
deformation, deep non-parametric transformation, and instance optimization.
Using SAM embeddings, we enhance these steps by finding more coherent
correspondences and providing features with better semantic guidance. We
extensively evaluated SAME++ using more than 50 labeled organs on three
challenging inter-subject registration tasks of different body parts. As a
complete registration framework, SAME++ markedly outperforms leading methods by
- in terms of Dice score while being orders of magnitude faster
than numerical optimization-based methods. Code is available at
\url{https://github.com/alibaba-damo-academy/same}
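One way the dense anatomical correspondences described above could drive the affine step is a least-squares fit over matched points. The sketch below is an assumption-laden illustration, not the SAME++ implementation: `fit_affine` and the synthetic keypoint pairs are hypothetical, standing in for correspondences that would really come from SAM embedding matching:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine (A, t) mapping src points onto dst points,
    as one might initialize registration from embedding correspondences."""
    src_h = np.c_[src, np.ones(len(src))]            # homogeneous coords (N, 4)
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solve src_h @ M = dst
    return M[:3].T, M[3]                             # A is (3, 3), t is (3,)

# Hypothetical matched keypoints: dst is an exact affine transform of src.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
A_true = np.array([[1.1, 0.0, 0.1],
                   [0.0, 0.9, 0.0],
                   [0.0, 0.1, 1.0]])
t_true = np.array([5.0, -2.0, 0.5])
dst = src @ A_true.T + t_true
A, t = fit_affine(src, dst)
```

With noise-free correspondences the transform is recovered exactly; in practice a robust variant (e.g. trimming poorly matched points) would precede the coarse and non-parametric deformation steps.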