Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize the similarity between anatomical labels that
drive image alignment and to minimize an adversarial generator loss that
measures the divergence between the predicted and simulated deformations. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm, significantly lower than that achieved by
several other regularization methods.

Comment: Accepted to MICCAI 201
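The adversarial regularization above can be sketched in a few lines: a discriminator is trained to tell network-predicted displacement fields from biomechanically simulated ones, and the registration network is then penalized whenever the discriminator can tell its output apart. This is a minimal NumPy sketch under strong simplifying assumptions, not the paper's method: the discriminator here is a logistic model over a hand-crafted roughness feature (the paper uses a CNN that learns its features), the "displacement fields" are toy 1-D signals, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def roughness_features(fields):
    # Squared finite differences: small for smooth fields, large for noisy ones.
    # Hand-crafted stand-in for the CNN features learned in the actual method.
    return np.diff(fields, axis=1) ** 2

def train_discriminator(pred_fields, sim_fields, steps=300, lr=1.0):
    """Logistic discriminator: simulated fields labelled 1, predicted fields 0."""
    X = roughness_features(np.vstack([sim_fields, pred_fields]))
    y = np.concatenate([np.ones(len(sim_fields)), np.zeros(len(pred_fields))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        g = sigmoid(X @ w + b) - y          # gradient of binary cross-entropy
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def generator_adversarial_loss(fields, w, b, eps=1e-7):
    """Regularization term for the registration network: -log D(field).
    Low when a predicted deformation is indistinguishable from a simulated one."""
    p = sigmoid(roughness_features(fields) @ w + b)
    return float(-np.mean(np.log(p + eps)))

# Toy data: smooth FEM-like sinusoids vs. noisy, implausible predictions.
t = np.linspace(0, 2 * np.pi, 32)
sim = np.stack([0.5 * np.sin(t + s) for s in rng.uniform(0, np.pi, 64)])
noisy = rng.normal(0.0, 0.5, (64, 32))

w, b = train_discriminator(noisy, sim)
# After training, implausible (noisy) fields incur a larger adversarial penalty,
# so minimizing this loss pushes the registration network toward smooth fields.
gap = generator_adversarial_loss(noisy, w, b) - generator_adversarial_loss(sim, w, b)
```

In the full method this generator loss is summed with the label-similarity term, so the network trades anatomical alignment against physical plausibility instead of against a heuristic smoothness penalty.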
Label-driven weakly-supervised learning for multimodal deformable image registration
Spatially aligning medical images from different modalities remains a
challenging task, especially for intraoperative applications that require fast
and robust algorithms. We propose a weakly-supervised, label-driven formulation
for learning 3D voxel correspondence from higher-level label correspondence,
thereby bypassing classical intensity-based image similarity measures. During
training, a convolutional neural network is optimised by outputting a dense
displacement field (DDF) that warps a set of available anatomical labels from
the moving image to match their corresponding counterparts in the fixed image.
These label pairs, including solid organs, ducts, vessels, point landmarks and
other ad hoc structures, are only required at training time and can be
spatially aligned by minimising a cross-entropy function of the warped moving
label and the fixed label. During inference, the trained network takes a new
image pair to predict an optimal DDF, resulting in a fully-automatic,
label-free, real-time and deformable registration. For interventional
applications where large global transformations prevail, we also propose a
neural network architecture to jointly optimise the global and local
displacements. Experimental results are presented based on cross-validating
registrations of 111 pairs of T2-weighted magnetic resonance images and 3D
transrectal ultrasound images from prostate cancer patients with a total of
over 4000 anatomical labels, yielding a median target registration error of 4.2
mm on landmark centroids and a median Dice of 0.88 on prostate glands.

Comment: Accepted to ISBI 201
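The core of the label-driven formulation is a loss that warps the moving image's anatomical labels with the predicted dense displacement field (DDF) and scores them against the fixed image's labels with cross-entropy. The following is a minimal NumPy sketch under stated assumptions: 2-D binary labels, nearest-neighbour backward warping, and a hand-specified DDF standing in for the network's prediction; function names are illustrative, not from the paper's code.

```python
import numpy as np

def warp_label_nn(label, ddf):
    """Backward-warp a 2-D label map with a dense displacement field:
    output(p) = label(p + ddf(p)), using nearest-neighbour sampling."""
    H, W = label.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.rint(ys + ddf[0]), 0, H - 1).astype(int)
    src_x = np.clip(np.rint(xs + ddf[1]), 0, W - 1).astype(int)
    return label[src_y, src_x]

def label_cross_entropy(warped, fixed, eps=1e-7):
    """Binary cross-entropy between the warped moving label and the fixed label;
    minimising this over the DDF drives anatomical alignment."""
    w = np.clip(warped.astype(float), eps, 1.0 - eps)
    return float(-np.mean(fixed * np.log(w) + (1.0 - fixed) * np.log(1.0 - w)))

# Toy example: the moving organ sits two voxels left of its fixed counterpart.
moving = np.zeros((8, 8)); moving[2:5, 2:5] = 1.0
fixed = np.zeros((8, 8)); fixed[2:5, 4:7] = 1.0
identity = np.zeros((2, 8, 8))                               # no deformation
shift = np.stack([np.zeros((8, 8)), np.full((8, 8), -2.0)])  # sample 2 voxels left

loss_before = label_cross_entropy(warp_label_nn(moving, identity), fixed)
loss_after = label_cross_entropy(warp_label_nn(moving, shift), fixed)
```

During training, gradients of this loss flow back through a differentiable (linear rather than nearest-neighbour) warp into the network that predicts the DDF; at inference the labels are no longer needed, exactly as the abstract describes.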
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Shall we embrace deep learning as the key to all?
Or, should we resist a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented large-scale influential
challenges, such as climate change and urbanization.

Comment: Accepted for publication in IEEE Geoscience and Remote Sensing Magazine