A Framework for Image Segmentation Using Shape Models and Kernel Space Shape Priors
©2008 IEEE. DOI: 10.1109/TPAMI.2007.70774

Segmentation involves separating an object from the background in a given image. The use of image information alone often leads to poor segmentation results due to the presence of noise, clutter, or occlusion. The introduction of shape priors in the geometric active contour (GAC) framework has proved to be an effective way to ameliorate some of these problems. In this work, we propose a novel level-set segmentation method that combines image information with prior shape knowledge. Following the work of Leventon et al., we revisit the use of PCA to introduce prior knowledge about shapes in a more robust manner. We employ kernel PCA (KPCA) and show that it outperforms linear PCA by admitting only those shapes that are close enough to the training data. In our segmentation framework, shape knowledge and image information are encoded into two energy functionals described entirely in terms of shapes. This consistent description makes it possible to take full advantage of the KPCA methodology and leads to promising segmentation results.
In particular, our shape-driven segmentation technique allows for the simultaneous encoding of multiple types of shapes, and offers a convincing level of robustness with respect to noise, occlusion, and smearing.
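As a rough illustration of the kernel-PCA step described above (a sketch, not the authors' implementation), the following numpy snippet fits KPCA to a toy set of flattened training shapes. The RBF kernel, the `gamma` value, and the random "shapes" are all assumptions made for the example:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Gaussian (RBF) kernel between the rows of A and the rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kpca_fit(shapes, n_components=2, gamma=0.5):
    # Kernel PCA on a matrix of flattened training shapes (one per row):
    # centre the kernel in feature space, then keep the top eigenvectors.
    n = shapes.shape[0]
    K = rbf_kernel(shapes, shapes, gamma)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J           # feature-space centring
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Normalise eigenvectors so projections use unit-length feature axes
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return alphas, Kc

# Toy "shapes": rows stand in for flattened signed-distance maps
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 16))
alphas, Kc = kpca_fit(train)
coords = Kc @ alphas   # KPCA coordinates of the training shapes
```

A shape prior built this way can penalise candidate shapes whose feature-space projection falls far from the training samples, which is the sense in which KPCA "allows only" shapes close to the training data.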
Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures the divergence between the predicted and simulated deformations. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm that is significantly lower than those from
several other regularization methods.
Comment: Accepted to MICCAI 201
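The training objective described above, label similarity plus an adversarial generator term, can be sketched in a few lines of numpy. This is a toy illustration rather than the paper's network: the linear `discriminator`, the choice of a Dice similarity term, and the weight `lam` are all assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(ddf, w):
    # Toy linear discriminator over flattened displacement fields;
    # returns the probability that a field looks FE-simulated.
    return sigmoid(ddf.reshape(ddf.shape[0], -1) @ w)

def dice_loss(warped, fixed, eps=1e-6):
    # 1 - Dice overlap between warped moving labels and fixed labels
    inter = (warped * fixed).sum()
    return 1.0 - (2.0 * inter + eps) / (warped.sum() + fixed.sum() + eps)

def registration_loss(warped, fixed, ddf, w, lam=0.1):
    # Label similarity drives alignment; the generator term penalises
    # displacement fields the discriminator can distinguish from the
    # biomechanically simulated motion data.
    adv = -np.log(discriminator(ddf, w) + 1e-12).mean()
    return dice_loss(warped, fixed) + lam * adv

warped = np.ones((4, 4))          # perfectly overlapping toy labels
fixed = np.ones((4, 4))
ddf = np.zeros((2, 8))            # two flattened displacement fields
w = np.zeros(8)
loss = registration_loss(warped, fixed, ddf, w)
```

In the paper's setting the discriminator is itself a trained network and is optimized in alternation with the registration network, in the usual adversarial fashion.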
Mesh-to-raster based non-rigid registration of multi-modal images
Region of interest (ROI) alignment in medical images plays a crucial role in
diagnostics, procedure planning, treatment, and follow-up. Frequently, a model
is represented as triangulated mesh while the patient data is provided from CAT
scanners as pixel or voxel data. Previously, we presented a 2D method for
curve-to-pixel registration. This paper contributes (i) a general
mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a
3D surface-to-voxel application, and (iii) a comprehensive quantitative
evaluation in 2D using ground truth provided by the simultaneous truth and
performance level estimation (STAPLE) method. The registration is formulated as
a minimization problem where the objective consists of a data term, which
involves the signed distance function of the ROI from the reference image, and
a higher order elastic regularizer for the deformation. The evaluation is based
on quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of
decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each
showing one corresponding tooth in both modalities. The ROI in each image is
manually marked by three experts (900 curves in total). In the QLF-DP setting,
our approach significantly outperforms the mutual information-based
registration algorithm implemented with the Insight Segmentation and
Registration Toolkit (ITK) and Elastix.
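The minimization problem described above can be sketched in a few lines of numpy. This is a hypothetical 2D toy, not the paper's implementation: nearest-neighbour sampling of a precomputed signed distance grid and squared second differences stand in for the actual data term and higher-order elastic regularizer.

```python
import numpy as np

def data_term(sdf, verts):
    # Sample the reference image's signed distance function at the
    # deformed mesh vertices (nearest-neighbour for brevity); vertices
    # on the ROI boundary contribute zero.
    ij = np.clip(np.round(verts).astype(int), 0, np.array(sdf.shape) - 1)
    return (sdf[ij[:, 0], ij[:, 1]] ** 2).sum()

def elastic_reg(disp):
    # Curvature-like higher-order regulariser: squared second
    # differences of the per-vertex displacements.
    d2 = disp[2:] - 2.0 * disp[1:-1] + disp[:-2]
    return (d2 ** 2).sum()

def objective(sdf, verts, disp, lam=1.0):
    # Data term on deformed vertices plus weighted elastic penalty
    return data_term(sdf, verts + disp) + lam * elastic_reg(disp)

# Toy SDF on a 5x5 grid whose zero level set is the row i == 2
sdf = np.tile((np.arange(5.0) - 2.0)[:, None], (1, 5))
verts = np.array([[2.0, 1.0], [2.0, 3.0]])   # mesh already on the boundary
disp = np.zeros_like(verts)
```

With the mesh sitting on the zero level set and zero displacement, both terms vanish, which is the fixed point a registration scheme would descend toward.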
Affine Registration of label maps in Label Space
Two key aspects of coupled multi-object shape analysis and atlas generation are the choice of representation and the subsequent registration methods used to align the sample set. For example, a typical brain image can be labeled into three structures: grey matter, white matter, and cerebrospinal fluid. Many manipulations such as interpolation, transformation, smoothing, or registration need to be performed on these images before they can be used in further analysis. Current techniques for such analysis tend to trade off performance between the two tasks, performing well for one but developing problems when used for the other.

This article proposes a representation that is flexible and well suited to both tasks. We propose to map object labels to the vertices of a regular simplex: the unit interval for two labels, a triangle for three labels, a tetrahedron for four labels, and so on. This representation, which is routinely used in fuzzy classification, is ideally suited to representing and registering multiple shapes. On closer examination, it reveals several desirable properties: algebraic operations may be done directly, label uncertainty is expressed as a weighted mixture of labels (a probabilistic interpretation), interpolation is unbiased toward any label or the background, and registration may be performed directly.

We demonstrate these properties by using label space in a gradient-descent-based registration scheme to obtain a probabilistic atlas. While straightforward, this iterative method is very slow, can get stuck in local minima, and depends heavily on the initial conditions. To address these issues, we propose two fast methods that serve as coarse registration schemes, after which the iterative descent method can be used to refine the results. Further, we derive an analytical formulation for direct computation of the "group mean" from the parameters of pairwise registrations of all the images in the sample set. We show results on richly labeled 2D and 3D data sets.
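The simplex-vertex mapping can be sketched as follows. Using the standard basis of R^L as the simplex vertices is an assumption made for brevity; it is isometric to the interval/triangle/tetrahedron embeddings the abstract describes, since all pairs of basis vectors are equidistant.

```python
import numpy as np

def to_label_space(labels, n_labels):
    # Map label k to the k-th vertex of a regular simplex. The standard
    # basis vectors of R^n_labels form such a simplex (all pairwise
    # distances equal), lying in the hyperplane where coordinates sum to 1.
    return np.eye(n_labels)[labels]

labels = np.array([0, 0, 1, 2, 2])        # e.g. a tiny GM/WM/CSF label map
L = to_label_space(labels, 3)

# Interpolation stays inside the simplex and reads directly as a weighted
# label mixture, unbiased toward any particular label or the background.
mid = 0.5 * (L[1] + L[2])                 # halfway between labels 0 and 1
```

Because interpolated points remain convex combinations of the vertices, smoothing, resampling, and averaging during atlas construction all keep a direct probabilistic interpretation.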