An Unsupervised Learning Model for Deformable Medical Image Registration
We present a fast learning-based algorithm for deformable, pairwise 3D
medical image registration. Current registration methods optimize an objective
function independently for each pair of images, which can be time-consuming for
large data. We define registration as a parametric function, and optimize its
parameters given a set of images from a collection of interest. Given a new
pair of scans, we can quickly compute a registration field by directly
evaluating the function using the learned parameters. We model this function
using a convolutional neural network (CNN), and use a spatial transform layer
to reconstruct one image from another while imposing smoothness constraints on
the registration field. The proposed method does not require supervised
information such as ground truth registration fields or anatomical landmarks.
We demonstrate registration accuracy comparable to state-of-the-art 3D image
registration, while operating orders of magnitude faster in practice. Our
method promises to significantly speed up medical image analysis and processing
pipelines, while facilitating novel directions in learning-based registration
and its applications. Our code is available at
https://github.com/balakg/voxelmorph. Comment: 9 pages, in CVPR 2018
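The core idea above (predict a dense displacement field, warp the moving image through a differentiable spatial transform, and penalize only image dissimilarity plus field roughness) can be sketched without any deep learning machinery. The following is a minimal NumPy illustration; the CNN that would predict `flow` is omitted, and `warp_bilinear` and `unsupervised_loss` are hypothetical names, not the VoxelMorph API:

```python
import numpy as np

def warp_bilinear(image, flow):
    """Warp a 2D image by a dense displacement field (H, W, 2) using
    bilinear interpolation: the role played by the spatial transform
    layer in the abstract above."""
    H, W = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Sample positions = identity grid + predicted displacement.
    y = np.clip(ys + flow[..., 0], 0, H - 1)
    x = np.clip(xs + flow[..., 1], 0, W - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * image[y0, x0]
            + (1 - wy) * wx * image[y0, x1]
            + wy * (1 - wx) * image[y1, x0]
            + wy * wx * image[y1, x1])

def unsupervised_loss(moving, fixed, flow, lam=0.01):
    """Image similarity (SSD here) plus a smoothness penalty on the
    field's finite differences: no ground-truth registration needed."""
    warped = warp_bilinear(moving, flow)
    sim = np.mean((warped - fixed) ** 2)
    dy = np.diff(flow, axis=0) ** 2
    dx = np.diff(flow, axis=1) ** 2
    return sim + lam * (dy.mean() + dx.mean())
```

With a zero field the warp is the identity and the loss of an image against itself vanishes, which is the sanity check any such layer must pass.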
Prior-based Coregistration and Cosegmentation
We propose a modular and scalable framework for dense coregistration and
cosegmentation with two key characteristics: first, we substitute ground truth
data with the semantic map output of a classifier; second, we combine this
output with population deformable registration to improve both alignment and
segmentation. Our approach deforms all volumes towards consensus, taking into
account image similarities and label consistency. Our pipeline can incorporate
any classifier and similarity metric. Results on two datasets, containing
annotations of challenging brain structures, demonstrate the potential of our
method. Comment: The first two authors contributed equally
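A hedged sketch of the consensus objective described above: the population is pulled toward agreement in both intensities and classifier-produced semantic maps. The function name, the variance term for intensity similarity, and majority-vote disagreement for label consistency are illustrative stand-ins, not the paper's actual formulation:

```python
import numpy as np

def coseg_cost(intensities, label_maps, lam=1.0):
    """Population consensus cost over (already warped) volumes:
    voxel-wise intensity variance across subjects plus disagreement of
    the classifier's semantic maps. A groupwise registration would
    deform each volume to drive this cost down. `lam` is a hypothetical
    trade-off weight between the two terms."""
    I = np.stack(intensities)   # (N, ...) aligned intensity volumes
    L = np.stack(label_maps)    # (N, ...) integer semantic maps
    intensity_term = I.var(axis=0).mean()
    # Per-voxel fraction of subjects agreeing with the majority label.
    counts = np.stack([(L == k).sum(axis=0) for k in np.unique(L)])
    agreement = counts.max(axis=0) / L.shape[0]
    label_term = (1.0 - agreement).mean()
    return intensity_term + lam * label_term
```

Because the framework is modular, the variance and majority-vote terms could be swapped for any similarity metric and any classifier output, as the abstract notes.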
EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging without External Trackers
Ultrasound (US) is the most widely used fetal imaging technique. However, US
images have limited capture range, and suffer from view dependent artefacts
such as acoustic shadows. Compounding of overlapping 3D US acquisitions into a
high-resolution volume can extend the field of view and remove image artefacts,
which is useful for retrospective analysis including population based studies.
However, such volume reconstructions require information about relative
transformations between probe positions from which the individual volumes were
acquired. In prenatal US scans, the fetus can move independently from the
mother, making external trackers such as electromagnetic or optical tracking
unable to track the motion between probe position and the moving fetus. We
provide a novel methodology for image-based tracking and volume reconstruction
by combining recent advances in deep learning and simultaneous localisation and
mapping (SLAM). Tracking semantics are established through the use of a
Residual 3D U-Net and the output is fed to the SLAM algorithm. As a proof of
concept, experiments are conducted on US volumes taken from a whole body fetal
phantom, and from the heads of real fetuses. For the fetal head segmentation,
we also introduce a novel weak annotation approach to minimise the required
manual effort for ground truth annotation. We evaluate our method
qualitatively, and quantitatively with respect to tissue discrimination
accuracy and tracking robustness. Comment: MICCAI Workshop on Perinatal, Preterm and Paediatric Image Analysis
(PIPPI), 201
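As a rough illustration of the image-based tracking step, estimating the relative pose between two acquisitions of the same segmented structure can be reduced to rigid alignment of corresponding 3D points, for which the classical Kabsch algorithm gives a closed-form solution. This is a generic stand-in for the SLAM tracking used in the paper, not its actual pipeline:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form best-fit rigid transform (R, t) mapping point set
    P (N, 3) onto corresponding points Q (N, 3): R @ p + t ~= q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In a tracker, such pairwise pose estimates would be chained and refined globally, which is essentially what the SLAM back-end contributes.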
Deformable Registration through Learning of Context-Specific Metric Aggregation
We propose a novel weakly supervised discriminative algorithm for learning
context-specific registration metrics as a linear combination of conventional
similarity measures. Conventional metrics have been extensively used over the
past two decades and therefore both their strengths and limitations are known.
The challenge is to find the optimal relative weighting (or parameters) of
different metrics forming the similarity measure of the registration algorithm.
Hand-tuning these parameters would result in suboptimal solutions and quickly
becomes infeasible as the number of metrics increases. Furthermore, such a
hand-crafted combination can only be applied at a global scale (the entire
volume) and therefore cannot account for differing tissue properties. We
propose a learning algorithm for estimating these parameters locally,
conditioned on the semantic classes of the data. The objective function of our
formulation is a difference of convex functions, a special case of non-convex
objective, which we optimize using the concave-convex procedure. As a proof of
concept, we show the impact of our approach on three challenging datasets for
different anatomical structures and modalities. Comment: Accepted for publication in the 8th International Workshop on Machine
Learning in Medical Imaging (MLMI 2017), in conjunction with MICCAI 2017
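The aggregation itself is simple to sketch: each voxel's similarity is a weighted sum of conventional metrics, with the weights looked up by the voxel's semantic class. The two metrics below (SSD and a gradient-difference term) and all names are illustrative choices; the paper's learned weighting and metric set differ:

```python
import numpy as np

def local_similarity(src, tgt, semantic, weights):
    """Aggregate conventional metrics with weights conditioned on each
    voxel's semantic class. `weights` has shape (num_classes,
    num_metrics); `semantic` holds an integer class per voxel."""
    ssd = (src - tgt) ** 2
    grad_diff = (np.gradient(src)[0] - np.gradient(tgt)[0]) ** 2
    metrics = np.stack([ssd, grad_diff])   # (num_metrics, ...)
    w = weights[semantic]                  # (..., num_metrics) per voxel
    w = np.moveaxis(w, -1, 0)              # (num_metrics, ...)
    return (w * metrics).sum(axis=0).mean()
```

Learning then amounts to choosing the rows of `weights` so that the aggregated metric best ranks good deformations, which is what the weakly supervised formulation above does.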
Numerical methods for coupled reconstruction and registration in digital breast tomosynthesis.
Digital Breast Tomosynthesis (DBT) provides an insight into the fine details of normal fibroglandular tissues and abnormal lesions by reconstructing a pseudo-3D image of the breast. In this respect, DBT overcomes a major limitation of conventional X-ray mammography by reducing the confounding effects caused by the superposition of breast tissue. In a breast cancer screening or diagnostic context, a radiologist is interested in detecting change, which might be indicative of malignant disease. To help automate this task, image registration is required to establish spatial correspondence between time points. Typically, images, such as MRI or CT, are first reconstructed and then registered. This approach can be effective if reconstructing using a complete set of data. However, for ill-posed, limited-angle problems such as DBT, estimating the deformation is complicated by the significant artefacts associated with the reconstruction, leading to severe inaccuracies in the registration. This paper presents a mathematical framework which couples the two tasks and jointly estimates both image intensities and the parameters of a transformation. Under this framework, we compare an iterative method and a simultaneous method, both of which tackle the problem of comparing DBT data by combining reconstruction of a pair of temporal volumes with their registration. We evaluate our methods using various computational digital phantoms, uncompressed breast MR images, and in-vivo DBT simulations. Firstly, we compare both iterative and simultaneous methods to the conventional, sequential method using an affine transformation model. We show that jointly estimating image intensities and parametric transformations gives superior results with respect to reconstruction fidelity and registration accuracy. We also incorporate a non-rigid B-spline transformation model into our simultaneous method. The results demonstrate a visually plausible recovery of the deformation with preservation of the reconstruction fidelity.
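The alternating flavour of the iterative method can be illustrated on a toy 1D problem, with an identity forward model standing in for the DBT projection operator and an integer shift standing in for the affine transformation. This is a sketch of the coupling idea only, not the paper's actual scheme:

```python
import numpy as np

def joint_recon_register(g1, g2, n_iter=5, max_shift=5):
    """Alternately update a reconstructed signal f and an (integer)
    translation s mapping f onto the second acquisition, minimizing
    ||f - g1||^2 + ||shift(f, s) - g2||^2."""
    f = g1.copy()
    s = 0
    for _ in range(n_iter):
        # Registration step: best integer shift aligning f to g2.
        shifts = range(-max_shift, max_shift + 1)
        s = min(shifts, key=lambda k: np.sum((np.roll(f, k) - g2) ** 2))
        # Reconstruction step: least-squares update of f given s.
        f = 0.5 * (g1 + np.roll(g2, -s))
    return f, s
```

The reconstruction step uses both acquisitions once the shift is known, which is the gain over the sequential reconstruct-then-register pipeline the abstract argues against.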
A Novel Deep Learning Framework for Internal Gross Target Volume Definition from 4D Computed Tomography of Lung Cancer Patients
In this paper, we study the reliability of a novel deep learning framework for internal gross target volume (IGTV) delineation from four-dimensional computed tomography (4DCT), applied to patients with lung cancer treated by Stereotactic Body Radiation Therapy (SBRT). 77 patients who underwent SBRT followed by 4DCT scans were included in a retrospective study. The IGTV_DL was delineated using a novel deep machine learning algorithm with a linear exhaustive optimal combination framework; for comparison, three other IGTVs based on common methods were also delineated. We compared the relative volume difference (RVI), matching index (MI) and encompassment index (EI) for these IGTVs. Multiple-parameter regression analysis was then used to assess tumor volume and motion range as clinical factors influencing the variation in MI. Experimental results demonstrated that the deep learning algorithm with the linear exhaustive optimal combination framework has a higher probability of achieving an optimal MI than other currently widely used methods. For patients who, after simple breathing training, kept a respiratory frequency of 10 BPM, the four-phase combination of 0%, 30%, 50% and 90% can be considered a potential candidate for an optimal combination to synthesize the IGTV across all respiration amplitudes.
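The "linear exhaustive optimal combination" idea, as far as the abstract describes it, amounts to searching over phase subsets for the synthesized IGTV that best matches a reference. A hedged sketch follows, with a hypothetical matching index (overlap fraction with the reference volume), since the paper's exact definition is not given here:

```python
import numpy as np
from itertools import combinations

def best_phase_combination(phase_masks, reference, k=4):
    """Enumerate all k-subsets of 4DCT phase masks, synthesize an IGTV
    as the union of each subset, and keep the subset with the highest
    matching index against a reference IGTV. `phase_masks` maps phase
    labels to boolean volumes of equal shape."""
    phases = sorted(phase_masks)
    best, best_mi = None, -1.0
    for combo in combinations(phases, k):
        igtv = np.logical_or.reduce([phase_masks[p] for p in combo])
        mi = np.logical_and(igtv, reference).sum() / reference.sum()
        if mi > best_mi:
            best, best_mi = combo, mi
    return best, best_mi
```

Exhaustive enumeration is tractable here because a 4DCT scan has only ten respiratory phases, so even all four-phase subsets number just 210.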