
    High-resolution self-gated dynamic abdominal MRI using manifold alignment

    We present a novel retrospective self-gating method based on manifold alignment (MA), which enables reconstruction of free-breathing, high spatial and temporal resolution abdominal MRI sequences. Based on a radial golden-angle (RGA) acquisition trajectory, our method allows a multi-dimensional self-gating signal to be extracted from the k-space data for more accurate motion representation. The k-space radial profiles are evenly divided into a number of overlapping groups based on their radial angles. MA is then used to simultaneously learn and align the low-dimensional manifolds of all groups, and to embed them into a common manifold. In this manifold, k-space profiles that represent similar respiratory positions are close to each other. Image reconstruction is performed by combining radial profiles with evenly distributed angles that are close in the manifold. Our method was evaluated on both 2D and 3D synthetic and in vivo datasets. On the synthetic datasets, our method achieved high correlation with the ground truth in terms of image intensity and virtual navigator values. On the in vivo data, compared with a state-of-the-art approach based on centre-of-k-space gating, our method was able to exploit much richer profile data for self-gating, resulting in statistically significantly better quantitative measurements of organ sharpness and image gradient entropy.
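
    As an illustration of the grouping step described above, the sketch below assigns radial golden-angle profiles to overlapping angular groups and embeds each group into a one-dimensional coordinate. The per-group manifold learning and alignment are replaced here by a plain PCA projection, so the function names, group count and overlap factor are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        GOLDEN_ANGLE = np.pi * (np.sqrt(5.0) - 1.0) / 2.0  # ~111.25 deg golden-angle increment

        def group_profiles_by_angle(n_profiles, n_groups=16, overlap=0.5):
            """Assign each radial profile to overlapping angular groups (assumed parameters)."""
            angles = (np.arange(n_profiles) * GOLDEN_ANGLE) % np.pi     # radial angle of each profile
            width = (np.pi / n_groups) * (1.0 + overlap)                # widened bins -> overlapping groups
            centres = (np.arange(n_groups) + 0.5) * np.pi / n_groups
            groups = []
            for c in centres:
                d = np.abs(((angles - c) + np.pi / 2) % np.pi - np.pi / 2)  # circular distance modulo pi
                groups.append(np.where(d <= width / 2)[0])
            return angles, groups

        def embed_group(profiles):
            """Stand-in for the per-group manifold embedding: a 1-D PCA coordinate.
            The paper instead learns and aligns low-dimensional manifolds across groups."""
            x = profiles - profiles.mean(axis=0)
            _, _, vt = np.linalg.svd(x, full_matrices=False)
            return x @ vt[0]

        # Toy usage: 2000 simulated profiles of 256 samples each
        profiles = np.random.randn(2000, 256)
        angles, groups = group_profiles_by_angle(len(profiles))
        coords = {g: embed_group(profiles[idx]) for g, idx in enumerate(groups)}

    Profiles whose coordinates are close would then be combined, choosing radial angles spread as evenly as possible, to reconstruct an image at one respiratory position.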

    Temporal Interpolation via Motion Field Prediction

    Navigated 2D multi-slice dynamic Magnetic Resonance (MR) imaging enables high-contrast 4D MR imaging during free breathing and provides in-vivo observations for treatment planning and guidance. Navigator slices are vital for retrospective stacking of 2D data slices in this method. However, they also prolong the acquisition sessions. Temporal interpolation of navigator slices can be used to reduce the number of navigator acquisitions without degrading specificity in stacking. In this work, we propose a convolutional neural network (CNN)-based method for temporal interpolation via motion field prediction. The proposed formulation incorporates the prior knowledge that a motion field underlies changes in the image intensities over time. Previous approaches that interpolate directly in the intensity space are prone to producing blurry images or even removing structures in the images. Our method avoids such problems and faithfully preserves the information in the image. Further, an important advantage of our formulation is that it provides an unsupervised estimation of bi-directional motion fields. We show that these motion fields can be used to halve the number of registrations required during 4D reconstruction, thus substantially reducing the reconstruction time. (Submitted to the 1st Conference on Medical Imaging with Deep Learning (MIDL 2018), Amsterdam, The Netherlands.)
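
    A hedged sketch of the motion-field formulation follows: a small CNN maps a pair of navigator frames to a dense displacement field, and a differentiable warping layer applies it, so an unsupervised image-similarity loss can drive training. The architecture, layer sizes and loss are assumptions for illustration, not the network from the paper.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MotionNet(nn.Module):
            """Toy CNN: two stacked frames in, a 2-channel displacement field (dx, dy) out."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 2, 3, padding=1),
                )

            def forward(self, frame_a, frame_b):
                return self.net(torch.cat([frame_a, frame_b], dim=1))

        def warp(image, flow):
            """Bilinearly warp `image` (N,1,H,W) with a displacement field `flow` (N,2,H,W)."""
            n, _, h, w = image.shape
            ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
            base = torch.stack([xs, ys], dim=0).float().to(image.device)   # (2,H,W) pixel grid
            coords = base.unsqueeze(0) + flow                              # sampling locations
            gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                        # normalise to [-1, 1]
            gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
            grid = torch.stack([gx, gy], dim=-1)                           # (N,H,W,2)
            return F.grid_sample(image, grid, align_corners=True)

        # Unsupervised training step: predict fields in both directions between two
        # navigator frames and penalise the warping residual (a smoothness term on
        # the fields would be added in practice).
        model = MotionNet()
        frame_a, frame_b = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
        flow_ab = model(frame_a, frame_b)   # field used to warp A towards B
        flow_ba = model(frame_b, frame_a)   # field used to warp B towards A
        loss = F.mse_loss(warp(frame_a, flow_ab), frame_b) + \
               F.mse_loss(warp(frame_b, flow_ba), frame_a)
        loss.backward()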

    Autoadaptive motion modelling for MR-based respiratory motion estimation

    Respiratory motion poses significant challenges in image-guided interventions. In emerging treatments such as MR-guided HIFU or MR-guided radiotherapy, it may cause significant misalignments between interventional road maps obtained pre-procedure and the anatomy during the treatment, and may affect intra-procedural imaging such as MR-thermometry. Patient-specific respiratory motion models provide a solution to this problem. They establish a correspondence between the patient motion and simpler surrogate data which can be acquired easily during the treatment. Patient motion can then be estimated during the treatment by acquiring only the simpler surrogate data. In the majority of classical motion modelling approaches, once the correspondence between the surrogate data and the patient motion is established it cannot be changed unless the model is recalibrated. However, breathing patterns are known to change significantly within the time frame of MR-guided interventions. Thus, the classical motion modelling approach may yield inaccurate motion estimates when the relation between the motion and the surrogate data changes over the duration of the treatment, and frequent recalibration may not be feasible. We propose a novel methodology for motion modelling which has the ability to automatically adapt to new breathing patterns. This is achieved by choosing the surrogate data in such a way that it can be used to estimate the current motion in 3D as well as to update the motion model. In particular, in this work, we use 2D MR slices from different slice positions to build as well as to apply the motion model. We implemented such an autoadaptive motion model by extending our previous work on manifold alignment. We demonstrate a proof-of-principle of the proposed technique on cardiac-gated data of the thorax and evaluate its adaptive behaviour on realistic synthetic data containing two breathing types generated from 6 volunteers, and on real data from 4 volunteers. On synthetic data the autoadaptive motion model yielded 21.45% more accurate motion estimations compared to a non-adaptive motion model 10 min after a change in breathing pattern. On real data we demonstrated the method's ability to maintain motion estimation accuracy despite a drift in the respiratory baseline. Due to the cardiac gating of the imaging data, the method is currently limited to one update per heartbeat, and the calibration requires approximately 12 min of scanning. Furthermore, the method has a prediction latency of 800 ms. These limitations may be overcome in future work by altering the acquisition protocol.
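
    The autoadaptive idea, namely that each incoming 2D slice is both the surrogate used for estimation and new calibration data that updates the model, can be sketched as below. The manifold alignment machinery is deliberately simplified to a per-slice-position PCA coordinate, so the window size, function names and nearest-neighbour matching are illustrative assumptions only.

        import collections
        import numpy as np

        WINDOW = 200  # assumed sliding-window length per slice position
        buffers = collections.defaultdict(lambda: collections.deque(maxlen=WINDOW))

        def coordinates(stack):
            """1-D surrogate coordinate for every slice in a stack (first principal component).
            In the paper, per-position coordinates are aligned into one common manifold;
            that alignment is what manifold alignment provides and is omitted here."""
            x = np.asarray(stack, dtype=float).reshape(len(stack), -1)
            mean = x.mean(axis=0)
            _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
            return (x - mean) @ vt[0]

        def on_new_slice(position, image):
            # The incoming slice plays both roles: it updates the model (appended to its
            # position's buffer) and serves as the surrogate for estimating the current state.
            buffers[position].append(image)
            coord = coordinates(buffers[position])[-1]
            volume = {}
            for pos, stack in buffers.items():
                c = coordinates(stack)
                volume[pos] = stack[int(np.argmin(np.abs(c - coord)))]
            return volume  # one stored slice per position at roughly the same respiratory state

        # Toy usage: stream slices cycling over 3 slice positions
        for t in range(30):
            vol = on_new_slice(position=t % 3, image=np.random.randn(64, 64))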

    Surrogate-driven respiratory motion models for MRI-guided lung radiotherapy treatments

    An MR-Linac integrates an MR scanner with a radiotherapy delivery system, providing non-ionizing real-time imaging of the internal anatomy before, during and after radiotherapy treatments. Due to spatio-temporal limitations of MR imaging, only high-resolution 2D cine-MR images can be acquired in real time during MRI-guided radiotherapy (MRIgRT) to monitor the respiratory-induced motion of lung tumours and organs-at-risk. However, temporally resolved 3D anatomical information is essential for accurate MR guidance of beam delivery and for estimation of the actually delivered dose. Surrogate-driven respiratory motion models can estimate the 3D motion of the internal anatomy from surrogate signals, producing the required information. The overall aim of this thesis was to tailor a generalized respiratory motion modelling framework for lung MRIgRT. This framework can fit the model directly to unsorted 2D MR images sampling the 3D motion, and to surrogate signals extracted from the 2D cine-MR images acquired on an MR-Linac. It can model breath-to-breath variability and produce a motion-compensated super-resolution reconstruction (MCSR) 3D image that can be deformed using the estimated motion. In this work, novel MRI-derived surrogate signals were generated from 2D cine-MR images to model respiratory motion for lung cancer patients, by applying principal component analysis to the control-point displacements obtained from the registration of the cine-MR images. An MR multi-slice interleaved acquisition potentially suitable for the MR-Linac was developed to generate MRI-derived surrogate signals and build accurate respiratory motion models with the generalized framework for lung cancer patients. The developed models and the MCSR images were thoroughly evaluated for lung cancer patients scanned on an MR-Linac. The results showed that respiratory motion models built with the generalized framework and minimal training data generally produced median errors within the MCSR voxel size of 2 mm, throughout the whole 3D thoracic field-of-view and over the expected lung MRIgRT treatment times.
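
    The surrogate-signal extraction described above, PCA applied to the control-point displacements obtained from registering the 2D cine-MR frames, can be sketched as follows. The registration step that produces the displacements is assumed to be done elsewhere, and the array shapes and component count are illustrative assumptions.

        import numpy as np

        def surrogate_signals(cp_displacements, n_components=2):
            """cp_displacements: (n_frames, n_control_points * n_dims) displacements from
            the 2D registrations of the cine-MR frames (computed elsewhere).
            Returns (n_frames, n_components) surrogate-signal values (PC scores)."""
            x = cp_displacements - cp_displacements.mean(axis=0)
            _, _, vt = np.linalg.svd(x, full_matrices=False)
            return x @ vt[:n_components].T

        # Toy usage: 300 cine frames, a 20 x 20 B-spline control-point grid, 2D displacements
        cp = np.random.randn(300, 20 * 20 * 2)
        signals = surrogate_signals(cp)   # each column can drive the surrogate-driven motion model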

    Integration of Spatial Distortion Effects in a 4D Computational Phantom for Simulation Studies in Extra-Cranial MRI-guided Radiation Therapy: Initial Results.

    Purpose: Spatial distortions in magnetic resonance imaging (MRI) are mainly caused by inhomogeneities of the static magnetic field, nonlinearities in the applied gradients, and tissue-specific magnetic susceptibility variations. These factors may significantly alter the geometrical accuracy of the reconstructed MR image, thus questioning the reliability of MRI for guidance in image-guided radiation therapy. In this work, we quantified MRI spatial distortions and created a quantitative model where different sources of distortions can be separated. The generated model was then integrated into a four-dimensional (4D) computational phantom for simulation studies in MRI-guided radiation therapy at extra-cranial sites.
    Methods: A geometrical spatial distortion phantom was designed in four modules embedding laser-cut PMMA grids, providing 3520 landmarks in a field of view of (345 × 260 × 480) mm³. The construction accuracy of the phantom was verified experimentally. Two fast MRI sequences for extra-cranial imaging at 1.5 T were investigated, considering axial slices acquired with online distortion correction, in order to mimic practical use in MRI-guided radiotherapy. Distortions were separated into their sources by acquisition of images with gradient polarity reversal and dedicated susceptibility calculations. Such a separation yielded a quantitative spatial distortion model to be used for MR imaging simulations. Finally, the obtained spatial distortion model was embedded into an anthropomorphic 4D computational phantom, providing registered virtual CT/MR images where spatial distortions in MRI acquisition can be simulated.
    Results: The manufacturing accuracy of the geometrical distortion phantom was quantified to be within 0.2 mm in the grid planes and 0.5 mm in depth, including thickness variations and bending effects of individual grids. Residual spatial distortions after MRI distortion correction were strongly influenced by the applied correction mode, with larger effects in the trans-axial direction. In the axial plane, gradient nonlinearities caused the main distortions, with values up to 3 mm in a 1.5 T magnet, whereas static field and susceptibility effects were below 1 mm. The integration in the 4D anthropomorphic computational phantom highlighted that deformations can be severe in the region of the thoracic diaphragm, especially when using axial imaging with 2D distortion correction. Adaptation of the phantom based on patient-specific measurements was also verified, aiming at increased realism in the simulation.
    Conclusions: The implemented framework provides an integrated approach for MRI spatial distortion modeling, where different sources of distortion can be quantified in time-dependent geometries. The computational phantom represents a valuable platform to study motion management strategies in extra-cranial MRI-guided radiotherapy, where the effects of spatial distortions can be modeled on synthetic images in a virtual environment.
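
    The source-separation step based on gradient polarity reversal can be illustrated with a short sketch: displacements caused by gradient nonlinearities keep their sign when the readout polarity is reversed, whereas B0- and susceptibility-induced shifts change sign, so the half-sum and half-difference of the measured landmark displacements separate the two contributions. Variable names and the landmark-based formulation are assumptions for illustration.

        import numpy as np

        def separate_distortion_sources(pos_fwd, pos_rev, pos_true):
            """pos_fwd / pos_rev: (n_landmarks, 3) landmark positions measured with normal
            and reversed readout-gradient polarity; pos_true: phantom ground truth (mm)."""
            d_fwd = pos_fwd - pos_true                 # total distortion, normal polarity
            d_rev = pos_rev - pos_true                 # total distortion, reversed polarity
            d_gradient = 0.5 * (d_fwd + d_rev)         # polarity-independent: gradient nonlinearity
            d_b0_susc = 0.5 * (d_fwd - d_rev)          # polarity-dependent: static field + susceptibility
            return d_gradient, d_b0_susc

        # Toy usage with 3520 landmarks in a (345 x 260 x 480) mm^3 field of view
        true = np.random.rand(3520, 3) * np.array([345.0, 260.0, 480.0])
        fwd = true + np.random.randn(3520, 3) * 0.5
        rev = true + np.random.randn(3520, 3) * 0.5
        grad_nl, b0_susc = separate_distortion_sources(fwd, rev, true)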

    Advances in Groupwise Image Registration

