69 research outputs found

    Autoadaptive motion modelling for MR-based respiratory motion estimation

    Get PDF
    © 2016 The Authors. Respiratory motion poses significant challenges in image-guided interventions. In emerging treatments such as MR-guided HIFU or MR-guided radiotherapy, it may cause significant misalignment between interventional road maps obtained pre-procedure and the anatomy during the treatment, and may affect intra-procedural imaging such as MR thermometry. Patient-specific respiratory motion models provide a solution to this problem: they establish a correspondence between the patient motion and simpler surrogate data that can be acquired easily during the treatment. Patient motion can then be estimated during the treatment by acquiring only the surrogate data.

    In the majority of classical motion modelling approaches, once the correspondence between the surrogate data and the patient motion is established, it cannot be changed unless the model is recalibrated. However, breathing patterns are known to change significantly over the time frame of MR-guided interventions. The classical motion modelling approach may therefore yield inaccurate motion estimates when the relation between the motion and the surrogate data changes over the duration of the treatment, and frequent recalibration may not be feasible.

    We propose a novel methodology for motion modelling which has the ability to automatically adapt to new breathing patterns. This is achieved by choosing the surrogate data in such a way that it can be used both to estimate the current motion in 3D and to update the motion model. In particular, in this work we use 2D MR slices from different slice positions to build as well as to apply the motion model.
    We implemented such an autoadaptive motion model by extending our previous work on manifold alignment. We demonstrate a proof of principle of the proposed technique on cardiac-gated data of the thorax and evaluate its adaptive behaviour on realistic synthetic data containing two breathing types, generated from 6 volunteers, and on real data from 4 volunteers. On synthetic data, the autoadaptive motion model yielded 21.45% more accurate motion estimates than a non-adaptive motion model 10 min after a change in breathing pattern. On real data, we demonstrated the method's ability to maintain motion estimation accuracy despite a drift in the respiratory baseline. Due to the cardiac gating of the imaging data, the method is currently limited to one update per heartbeat, and the calibration requires approximately 12 min of scanning. Furthermore, the method has a prediction latency of 800 ms. These limitations may be overcome in future work by altering the acquisition protocol.
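    The core idea of a surrogate-driven motion model that keeps adapting can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's manifold-alignment method: it assumes a simple linear map from a low-dimensional surrogate signal to a motion estimate, and refreshes that map with every new (surrogate, motion) pair using exponential forgetting, so older breathing patterns gradually lose influence. The class name and the linear model are assumptions for illustration only.

    ```python
    import numpy as np

    class AdaptiveLinearMotionModel:
        """Toy surrogate-driven motion model with exponential forgetting.

        Maps a surrogate signal s to a motion estimate m via m = W s, and
        folds each new calibration pair into W so the model can track a
        drifting breathing pattern without a full recalibration.
        """

        def __init__(self, n_motion, n_surrogate, forgetting=0.95):
            self.W = np.zeros((n_motion, n_surrogate))
            # Forgetting-weighted correlation matrices for the regularised
            # least-squares solution W = C_ms @ inv(C_ss).
            self.C_ms = np.zeros((n_motion, n_surrogate))
            self.C_ss = 1e-6 * np.eye(n_surrogate)
            self.forgetting = forgetting

        def update(self, surrogate, motion):
            """Incorporate one (surrogate, motion) pair; older pairs decay."""
            s = np.asarray(surrogate, float)
            m = np.asarray(motion, float)
            self.C_ms = self.forgetting * self.C_ms + np.outer(m, s)
            self.C_ss = self.forgetting * self.C_ss + np.outer(s, s)
            self.W = self.C_ms @ np.linalg.inv(self.C_ss)

        def predict(self, surrogate):
            """Estimate motion from surrogate data alone."""
            return self.W @ np.asarray(surrogate, float)
    ```

    In the paper the surrogate data (2D MR slices) both drives the estimate and updates the model; here that dual role is reduced to calling `update` whenever a new pair is observed and `predict` in between.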

    Advances in Groupwise Image Registration


    Intrasubject multimodal groupwise registration with the conditional template entropy

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics, and improved transformation consistency compared to pairwise mutual information.
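    The structure of the metric — a sum of conditional entropies between each image and a representative template — can be sketched with intensity histograms. This is a simplified illustration, not the paper's exact formulation: the template here is the voxel-wise mean image, whereas the paper constructs it iteratively via principal component analysis, and the normalisation details are omitted.

    ```python
    import numpy as np

    def conditional_entropy(x, t, bins=32):
        """H(X | T) = H(X, T) - H(T), estimated from a joint histogram."""
        joint, _, _ = np.histogram2d(x, t, bins=bins)
        p_joint = joint / joint.sum()
        p_t = p_joint.sum(axis=0)                      # marginal of T
        h_joint = -np.sum(p_joint[p_joint > 0] * np.log(p_joint[p_joint > 0]))
        h_t = -np.sum(p_t[p_t > 0] * np.log(p_t[p_t > 0]))
        return h_joint - h_t

    def conditional_template_entropy(images, bins=32):
        """Sum of H(I_k | template) over the group.

        The template is the voxel-wise mean — a stand-in for the paper's
        iterative PCA-based template construction.
        """
        X = np.stack([im.ravel() for im in images])    # (K, N) intensity rows
        template = X.mean(axis=0)
        return sum(conditional_entropy(x, template, bins) for x in X)
    ```

    A registration optimiser would minimise this quantity over the group's transformation parameters: perfectly aligned identical images give zero conditional entropy, and misalignment drives the value up.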

    High-resolution self-gated dynamic abdominal MRI using manifold alignment

    We present a novel retrospective self-gating method based on manifold alignment (MA), which enables reconstruction of free-breathing abdominal MRI sequences with high spatial and temporal resolution. Based on a radial golden-angle (RGA) acquisition trajectory, our method enables a multi-dimensional self-gating signal to be extracted from the k-space data for more accurate motion representation. The k-space radial profiles are evenly divided into a number of overlapping groups based on their radial angles. MA is then used to simultaneously learn and align the low-dimensional manifolds of all groups and embed them into a common manifold, in which k-space profiles that represent similar respiratory positions lie close to each other. Image reconstruction is performed by combining radial profiles with evenly distributed angles that are close in the manifold. Our method was evaluated on both 2D and 3D synthetic and in vivo datasets. On the synthetic datasets, our method achieved high correlation with the ground truth in terms of image intensity and virtual navigator values. On the in vivo data, compared to a state-of-the-art approach based on centre-of-k-space gating, our method was able to make use of much richer profile data for self-gating, resulting in statistically significantly better quantitative measurements in terms of organ sharpness and image gradient entropy.
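    The first step of the pipeline — dividing golden-angle radial profiles into overlapping angular groups — is simple to make concrete. The sketch below assumes the standard golden-angle increment of 111.246° and a profile angle defined modulo π (a spoke and its reverse cover the same line); the group count and overlap fraction are illustrative parameters, not the paper's values, and the manifold-alignment step that follows is not shown.

    ```python
    import numpy as np

    GOLDEN_ANGLE = np.deg2rad(111.246)  # golden-angle increment for radial MRI

    def group_profiles_by_angle(n_profiles, n_groups, overlap=0.5):
        """Assign golden-angle radial profiles to overlapping angular groups.

        Profile i has angle (i * GOLDEN_ANGLE) mod pi. Each group covers an
        angular sector of width (1 + overlap) * pi / n_groups, so adjacent
        sectors share profiles, as in the overlapping grouping described.
        Returns the profile angles and, per group, the profile indices.
        """
        angles = (np.arange(n_profiles) * GOLDEN_ANGLE) % np.pi
        width = (1.0 + overlap) * np.pi / n_groups
        centres = (np.arange(n_groups) + 0.5) * np.pi / n_groups
        groups = []
        for c in centres:
            d = np.abs(angles - c)
            d = np.minimum(d, np.pi - d)          # circular distance on [0, pi)
            groups.append(np.where(d <= width / 2)[0])
        return angles, groups
    ```

    Each group's profiles would then be embedded into its own low-dimensional manifold, and MA would align those embeddings into the common manifold used for gating.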

    A Spatiotemporal Volumetric Interpolation Network for 4D Dynamic Medical Image

    Full text link
    Dynamic medical imaging is usually limited in application due to the large radiation doses and long image scanning and reconstruction times. Existing methods attempt to reduce the dynamic sequence by interpolating the volumes between the acquired image volumes. However, these methods are limited to 2D images and/or are unable to support large variations in motion between the image volume sequences. In this paper, we present a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images. SVIN introduces dual networks: the first is the spatiotemporal motion network, which leverages a 3D convolutional neural network (CNN) for unsupervised parametric volumetric registration to derive a spatiotemporal motion field from two image volumes; the second is the sequential volumetric interpolation network, which uses the derived motion field to interpolate image volumes, together with a new regression-based module to characterize the periodic motion cycles in functional organ structures. We also introduce an adaptive multi-scale architecture to capture large volumetric anatomical motions. Experimental results demonstrated that our SVIN outperformed state-of-the-art temporal medical interpolation methods as well as natural video interpolation methods extended to support volumetric images. Our ablation study further showed that our motion network represents large functional motion better than state-of-the-art unsupervised medical registration methods.

    Comment: 10 pages, 8 figures, Conference on Computer Vision and Pattern Recognition (CVPR) 202
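    The interpolation stage's underlying idea — warp both endpoint volumes along a motion field and blend them — can be shown without the learned networks. The sketch below is a heavily simplified stand-in for SVIN's interpolation module: it assumes a known displacement field mapping volume 0 toward volume 1, uses nearest-neighbour resampling in plain numpy instead of trilinear interpolation or a CNN, and ignores the regression module for periodic motion. Function names are hypothetical.

    ```python
    import numpy as np

    def warp_nearest(vol, disp):
        """Backward warp with nearest-neighbour sampling: out(x) = vol[x + disp(x)].

        `disp` has shape (3, D, H, W); coordinates are clipped at the borders.
        """
        grid = np.stack(np.meshgrid(*[np.arange(s) for s in vol.shape],
                                    indexing="ij"))
        coords = np.rint(grid + disp).astype(int)
        for ax, size in enumerate(vol.shape):
            coords[ax] = np.clip(coords[ax], 0, size - 1)
        return vol[tuple(coords)]

    def interpolate_volume(v0, v1, flow, t):
        """Estimate the volume at fraction t in (0, 1) between v0 and v1.

        v0 is warped forward by t * flow, v1 backward by (1 - t) * flow,
        and the two warped volumes are blended linearly in time.
        """
        w0 = warp_nearest(v0, t * flow)
        w1 = warp_nearest(v1, -(1.0 - t) * flow)
        return (1.0 - t) * w0 + t * w1
    ```

    In SVIN the motion network supplies `flow` from the two acquired volumes and a learned network replaces the naive warp-and-blend, but the role of the motion field in the interpolation is the same.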

    Coronary motion modelling for CTA to X-ray angiography registration

    Get PDF