Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures the divergence between the predicted and simulated deformations. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm, significantly lower than the errors obtained with
several other regularization methods.
Comment: Accepted to MICCAI 201
Cross-Task Representation Learning for Anatomical Landmark Detection
Recently, there has been an increasing demand for automatically detecting
anatomical landmarks which provide rich structural information to facilitate
subsequent medical image analysis. Current methods related to this task often
leverage the power of deep neural networks, but a major challenge in
fine-tuning such models for medical applications is the insufficient number of
labeled samples. To address this, we propose to regularize the knowledge
transfer across source and target tasks through cross-task representation
learning. The proposed method is demonstrated for extracting facial anatomical
landmarks which facilitate the diagnosis of fetal alcohol syndrome. The source
and target tasks in this work are face recognition and landmark detection,
respectively. The main idea of the proposed method is to retain the feature
representations of the source model on the target task data, and to leverage
them as an additional source of supervisory signals for regularizing the target
model learning, thereby improving its performance under limited training
samples. Concretely, we present two approaches for the proposed representation
learning by constraining either final or intermediate model features on the
target model. Experimental results on a clinical face image dataset demonstrate
that the proposed approach works well with limited labeled data and outperforms
the other approaches compared.
Comment: MICCAI-MLMI 202
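The regularizer described above keeps the target model's features close to the frozen source model's features on the target-task data. A minimal sketch, assuming a mean-squared feature-matching penalty and a weighting coefficient `lam` (both illustrative; the paper's exact constraint on final versus intermediate features may differ):

```python
import numpy as np

def representation_loss(target_feats, source_feats):
    # Mean squared distance between the target model's feature
    # representations and the frozen source model's representations,
    # computed on the same target-task images.
    return np.mean((target_feats - source_feats) ** 2)

def total_loss(task_loss, target_feats, source_feats, lam=0.1):
    # Combined objective: the target task loss (e.g. landmark heatmap
    # regression) plus the cross-task representation regularizer, which
    # supplies an additional supervisory signal when labels are scarce.
    return task_loss + lam * representation_loss(target_feats, source_feats)
```

In the paper's two variants, the constraint would be applied either to the final features or to intermediate features of the target model; the same penalty form applies, only the layer at which the features are taken changes.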
Deep Learning-Based Regression and Classification for Automatic Landmark Localization in Medical Images
In this study, we propose a fast and accurate method to automatically
localize anatomical landmarks in medical images. We employ a global-to-local
localization approach using fully convolutional neural networks (FCNNs). First,
a global FCNN localizes multiple landmarks through the analysis of image
patches, performing regression and classification simultaneously. In
regression, displacement vectors pointing from the center of image patches
towards landmark locations are determined. In classification, the presence of
landmarks of interest in the patch is established. Global landmark locations
are obtained by averaging the predicted displacement vectors, where the
contribution of each displacement vector is weighted by the posterior
classification probability of the patch that it is pointing from. Subsequently,
for each landmark localized with global localization, local analysis is
performed. Specialized FCNNs refine the global landmark locations by analyzing
local sub-images in a similar manner, i.e. by performing regression and
classification simultaneously and combining the results. Evaluation was
performed through localization of 8 anatomical landmarks in CCTA scans, 2
landmarks in olfactory MR scans, and 19 landmarks in cephalometric X-rays. We
demonstrate that the method performs similarly to a second observer and is able
to localize landmarks in a diverse set of medical images, differing in image
modality, image dimensionality, and anatomical coverage.
Comment: 12 pages, accepted at IEEE Transactions on Medical Imaging
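The global localization step above fuses per-patch votes into one landmark estimate: each patch votes for the location its displacement vector points to, weighted by the patch's posterior classification probability. A minimal NumPy sketch (the function name and array layout are assumptions for illustration):

```python
import numpy as np

def fuse_landmark_votes(patch_centers, displacements, probs):
    """Weighted average of per-patch landmark votes.

    patch_centers: (N, D) centers of the analyzed image patches.
    displacements: (N, D) predicted vectors from patch center to landmark.
    probs:         (N,)   posterior probability that the landmark is
                          present in each patch (classification output).
    """
    # Each patch votes for the landmark at center + displacement.
    votes = patch_centers + displacements
    # Normalize the classification probabilities into weights.
    w = probs / (np.sum(probs) + 1e-12)
    # Weighted mean of the candidate locations.
    return np.sum(votes * w[:, None], axis=0)
```

The subsequent local refinement stage would apply the same regression-plus-classification fusion to sub-images around each globally localized landmark.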