Nonrigid reconstruction of 3D breast surfaces with a low-cost RGBD camera for surgical planning and aesthetic evaluation
Accounting for 26% of all new cancer cases worldwide, breast cancer remains
the most common form of cancer in women. Although early breast cancer has a
favourable long-term prognosis, roughly a third of patients suffer from a
suboptimal aesthetic outcome despite breast conserving cancer treatment.
Clinical-quality 3D modelling of the breast surface therefore assumes an
increasingly important role in advancing treatment planning, prediction and
evaluation of breast cosmesis. Yet, existing 3D torso scanners are expensive
and either infrastructure-heavy or subject to motion artefacts. In this paper
we employ a single consumer-grade RGBD camera with an ICP-based registration
approach to jointly align all points from a sequence of depth images
non-rigidly. Subtle body deformation due to postural sway and respiration is
successfully mitigated through regularised locally affine transformations,
leading to higher geometric accuracy. We present results from 6 clinical
cases where our method compares well with the gold standard and outperforms a
previous approach. We show that our method produces better reconstructions
qualitatively by visual assessment and quantitatively by consistently obtaining
lower landmark error scores and yielding more accurate breast volume estimates.
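The non-rigid alignment described above builds on ICP-style registration. As a minimal sketch of the underlying idea, here is a single rigid ICP iteration (nearest-neighbour matching followed by a closed-form Kabsch/Procrustes alignment); the paper extends this to jointly non-rigid alignment with regularised locally affine transforms, which are not shown here:

```python
import numpy as np

def icp_step(src, dst):
    """One rigid ICP iteration: match each source point to its nearest
    target point, then solve the least-squares rotation/translation
    (Kabsch). src: (N,3), dst: (M,3)."""
    # nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid alignment of src onto its matched points
    mu_s, mu_m = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_m - R @ mu_s
    return src @ R.T + t

# toy check: a slightly rotated copy of a point cloud moves closer
# to the original after one step
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
aligned = icp_step(pts @ Rz.T, pts)
```

The joint, non-rigid formulation in the paper replaces the single global rotation with per-region affine transforms tied together by a regulariser, so that sway and breathing motion can be absorbed locally.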
An Unsupervised Learning Model for Deformable Medical Image Registration
We present a fast learning-based algorithm for deformable, pairwise 3D
medical image registration. Current registration methods optimize an objective
function independently for each pair of images, which can be time-consuming for
large data. We define registration as a parametric function, and optimize its
parameters given a set of images from a collection of interest. Given a new
pair of scans, we can quickly compute a registration field by directly
evaluating the function using the learned parameters. We model this function
using a convolutional neural network (CNN), and use a spatial transform layer
to reconstruct one image from another while imposing smoothness constraints on
the registration field. The proposed method does not require supervised
information such as ground truth registration fields or anatomical landmarks.
We demonstrate registration accuracy comparable to state-of-the-art 3D image
registration, while operating orders of magnitude faster in practice. Our
method promises to significantly speed up medical image analysis and processing
pipelines, while facilitating novel directions in learning-based registration
and its applications. Our code is available at
https://github.com/balakg/voxelmorph. Comment: 9 pages, in CVPR 2018.
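The training objective described above, reconstructing one image from another through a spatial transform while penalising non-smooth fields, can be sketched numerically. A minimal sketch, assuming MSE similarity and a nearest-neighbour warp for clarity (the actual method uses a differentiable interpolating spatial transform layer inside a CNN):

```python
import numpy as np

def warp_nn(moving, disp):
    """Nearest-neighbour warp of a 2D image by a displacement field.
    moving: (H,W); disp: (H,W,2) giving per-pixel (dy,dx) offsets."""
    H, W = moving.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.round(ys + disp[..., 0]).astype(int), 0, H - 1)
    sx = np.clip(np.round(xs + disp[..., 1]).astype(int), 0, W - 1)
    return moving[sy, sx]

def unsupervised_loss(fixed, moving, disp, lam=0.01):
    """Similarity term (MSE between warped moving and fixed image)
    plus a gradient-smoothness penalty on the displacement field --
    the two terms an unsupervised registration network trains on."""
    sim = np.mean((warp_nn(moving, disp) - fixed) ** 2)
    dy = np.diff(disp, axis=0) ** 2
    dx = np.diff(disp, axis=1) ** 2
    smooth = dy.mean() + dx.mean()
    return sim + lam * smooth
```

No ground-truth registration fields appear anywhere in the loss, which is what makes the training unsupervised.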
Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images.
Background: Liver alignment between series/exams is challenged by dynamic morphology or variability in patient positioning or motion. Image registration can improve image interpretation and lesion co-localization. We assessed the performance of a convolutional neural network algorithm to register cross-sectional liver imaging series and compared its performance to manual image registration.
Methods: Three hundred fourteen patients, including internal and external datasets, who underwent gadoxetate disodium-enhanced magnetic resonance imaging for clinical care from 2011 to 2018, were retrospectively selected. Automated registration was applied to all 2,663 within-patient series pairs derived from these datasets. Additionally, 100 within-patient series pairs from the internal dataset were independently manually registered by expert readers. Liver overlap, image correlation, and intra-observation distances for manual versus automated registrations were compared using paired t tests. The influence of patient demographics, imaging characteristics, and liver uptake function was evaluated using univariate and multivariate mixed models.
Results: Compared to manual registration, automated registration produced significantly lower intra-observation distance (p < 0.001) and higher liver overlap and image correlation (p < 0.001). Intra-exam automated registration achieved 0.88 mean liver overlap and 0.44 mean image correlation for the internal dataset and 0.91 and 0.41, respectively, for the external dataset. For inter-exam registration, mean overlap was 0.81 and image correlation 0.41. Older age, female sex, greater inter-series time interval, differing uptake, and greater voxel size differences independently reduced automated registration performance (p ≤ 0.020).
Conclusion: A fully automated algorithm accurately registered the liver within and between examinations, yielding better liver and focal observation co-localization compared to manual registration.
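The "liver overlap" figures reported above can be read as a Dice-style coefficient between liver masks after registration. A minimal sketch, assuming binary masks (the study's exact overlap definition may differ):

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|).
    1.0 means perfect overlap, 0.0 means disjoint masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Computed on liver segmentations before and after applying the estimated affine transform, this single number summarises how well two series were brought into alignment.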
Accelerated Nonrigid Intensity-Based Image Registration Using Importance Sampling
Nonrigid image registration methods using intensity-based similarity metrics are becoming increasingly common tools to estimate many types of deformations. Nonrigid warps can be very flexible with a large number of parameters, and gradient optimization schemes are widely used to estimate them. However, for large datasets, the computation of the gradient of the similarity metric with respect to these many parameters becomes very time consuming. Using a small random subset of image voxels to approximate the gradient can reduce computation time. This work focuses on the use of importance sampling to reduce the variance of this gradient approximation. The proposed importance sampling framework is based on an edge-dependent adaptive sampling distribution designed for use with intensity-based registration algorithms. We compare the performance of registration based on stochastic approximations with and without importance sampling to that using deterministic gradient descent. Empirical results, on simulated magnetic resonance brain data and real computed tomography inhale-exhale lung data from eight subjects, show that a combination of stochastic approximation methods and importance sampling accelerates the registration process while preserving accuracy.
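A minimal sketch of the edge-dependent importance sampling idea, assuming an SSD similarity metric and gradient-magnitude sampling weights (the paper's actual distribution and metric may differ): voxels are drawn with probability proportional to edge strength, and each sampled term is reweighted by 1/(N·p_i) so the gradient estimate remains unbiased.

```python
import numpy as np

def edge_sampling_probs(image, eps=1e-3):
    """Edge-dependent sampling distribution: probability of drawing a
    voxel is proportional to its gradient magnitude, plus a floor eps
    so flat regions keep nonzero mass."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gy, gx) + eps
    return (mag / mag.sum()).ravel()

def sampled_ssd_grad(fixed, moving, n, rng):
    """Importance-sampled estimate of the gradient of the SSD metric
    sum((moving - fixed)**2) with respect to the moving intensities:
    sample n voxels by edge strength, reweight by 1/(n * p_i)."""
    p = edge_sampling_probs(fixed)
    idx = rng.choice(p.size, size=n, p=p)
    resid = moving.ravel()[idx] - fixed.ravel()[idx]
    grad = np.zeros(p.size)
    np.add.at(grad, idx, 2.0 * resid / (n * p[idx]))
    return grad.reshape(fixed.shape)
```

In a real registration loop the chain rule would map this voxel-wise gradient onto the warp parameters; the sampling and reweighting step shown here is the part the paper accelerates.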
Compact Model Representation for 3D Reconstruction
3D reconstruction from 2D images is a central problem in computer vision.
Recent work has focused on reconstruction directly from a single image.
It is well known, however, that a single image alone cannot provide enough
information for such a reconstruction. One form of prior knowledge that has
been exploited is 3D CAD models, owing to their online ubiquity. A fundamental
question is how to compactly represent millions of CAD models while allowing
generalization to new unseen objects with fine-scaled geometry. We introduce an approach to compactly
represent a 3D mesh. Our method first selects a 3D model from a graph structure
by using a novel free-form deformation (FFD) 3D-2D registration, and then the
selected 3D model is refined to best fit the image silhouette. We perform a
comprehensive quantitative and qualitative analysis that demonstrates
impressive dense and realistic 3D reconstruction from single images. Comment: 9 pages, 6 figures.
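The free-form deformation mentioned above warps a mesh by moving a lattice of control points; each vertex is re-expressed as a Bernstein-polynomial blend of the lattice. A minimal sketch, assuming points normalised to the unit cube (the paper's exact FFD parameterisation may differ):

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd(points, lattice):
    """Bezier free-form deformation: each point (s,t,u) in [0,1]^3 is
    mapped to a Bernstein-weighted blend of the control points.
    lattice has shape (l+1, m+1, n+1, 3)."""
    l, m, n = (s - 1 for s in lattice.shape[:3])
    out = np.zeros_like(points, dtype=float)
    for p_idx, (s, t, u) in enumerate(points):
        acc = np.zeros(3)
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = (bernstein(l, i, s) * bernstein(m, j, t)
                         * bernstein(n, k, u))
                    acc += w * lattice[i, j, k]
        out[p_idx] = acc
    return out
```

With the control points placed on a regular grid the map is the identity; displacing a few control points deforms all nearby vertices smoothly, which is what makes FFD a compact shape parameterisation for 3D-2D registration.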
Validating Dose Uncertainty Estimates Produced by AUTODIRECT: An Automated Program to Evaluate Deformable Image Registration Accuracy.
Deformable image registration (DIR) is a powerful tool for mapping information, such as radiation therapy dose calculations, from one computed tomography image to another. However, DIR is susceptible to mapping errors. Recently, an automated DIR evaluation of confidence tool (AUTODIRECT) was proposed to predict voxel-specific DIR dose mapping errors on a patient-by-patient basis. The purpose of this work is to conduct an extensive analysis of AUTODIRECT to show its effectiveness in estimating dose mapping errors. The proposed format of AUTODIRECT utilizes 4 simulated patient deformations (3 B-spline-based deformations and 1 rigid transformation) to predict the uncertainty in a DIR algorithm's performance. This workflow is validated for 2 DIR algorithms (B-spline multipass from Velocity and Plastimatch) with 1 physical and 11 virtual phantoms, which have known ground-truth deformations, and with 3 pairs of real patient lung images, which have several hundred identified landmarks. The true dose mapping error distributions closely followed the Student t distributions predicted by AUTODIRECT for the validation tests: on average, the AUTODIRECT-produced confidence levels of 50%, 68%, and 95% contained 48.8%, 66.3%, and 93.8% and 50.1%, 67.6%, and 93.8% of the actual errors from Velocity and Plastimatch, respectively. Despite the sparsity of landmark points, the observed error distribution from the 3 lung patient data sets also followed the expected error distribution. The dose error distributions from AUTODIRECT also demonstrate good resemblance to the true dose error distributions.
AUTODIRECT was also found to produce accurate confidence intervals for the dose-volume histograms of the deformed dose.
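The coverage validation described above, checking how many actual errors fall inside the predicted 50%, 68%, and 95% intervals, can be sketched as follows. A minimal sketch using a normal distribution for simplicity, where the tool fits Student t distributions:

```python
from statistics import NormalDist

def coverage(errors, sigmas, level):
    """Fraction of observed errors that fall inside the symmetric
    `level` confidence interval of each voxel's predicted error
    distribution (normal here for simplicity)."""
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    inside = sum(abs(e) <= z * s for e, s in zip(errors, sigmas))
    return inside / len(errors)
```

If the predicted distributions are well calibrated, the returned fraction should closely match `level`, which is exactly the pattern the reported 48.8%/66.3%/93.8% results exhibit.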