
    Adversarial Deformation Regularization for Training Image Registration Neural Networks

    We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing the heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize the similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures the divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that requires only an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm, significantly lower than those from several other regularization methods. Comment: Accepted to MICCAI 201
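
    As a rough illustration of the training objective described above, the sketch below combines a label-driven similarity term with an adversarial generator term scored by a discriminator on the predicted displacement field. It is a minimal PyTorch-style sketch under assumed interfaces (soft_dice, discriminator, the weight alpha), not the authors' implementation.

    import torch

    def soft_dice(warped_label, fixed_label, eps=1e-6):
        """Differentiable Dice overlap between the warped moving label and the fixed label."""
        inter = (warped_label * fixed_label).sum()
        return (2.0 * inter + eps) / (warped_label.sum() + fixed_label.sum() + eps)

    def registration_loss(warped_label, fixed_label, pred_ddf, discriminator, alpha=0.1):
        """Label-driven similarity plus adversarial regularization of the displacement field."""
        similarity = soft_dice(warped_label, fixed_label)
        # Generator term: push the discriminator to score the predicted displacement
        # field as if it were an FE-simulated ("real") one.
        adversarial = -torch.log(discriminator(pred_ddf) + 1e-6).mean()
        return -similarity + alpha * adversarial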

    Comparing Regularized Kelvinlet Functions and the Finite Element Method for Registration of Medical Images to Sparse Organ Data

    Image-guided surgery collocates patient-specific data with the physical environment to facilitate surgical decision making in real-time. Unfortunately, these guidance systems commonly become compromised by intraoperative soft-tissue deformations. Nonrigid image-to-physical registration methods have been proposed to compensate for these deformations, but intraoperative clinical utility requires compatibility of these techniques with data sparsity and temporal constraints in the operating room. While linear elastic finite element models are effective in sparse data scenarios, the computation time for finite element simulation remains a limitation to widespread deployment. This paper proposes a registration algorithm that uses regularized Kelvinlets, which are analytical solutions to linear elasticity in an infinite domain, to overcome these barriers. This algorithm is demonstrated and compared to finite element-based registration on two datasets: a phantom dataset representing liver deformations and an in vivo dataset representing breast deformations. The regularized Kelvinlets algorithm resulted in a significant reduction in computation time compared to the finite element method. Accuracy as evaluated by target registration error was comparable between both methods. Average target registration errors were 4.6 +/- 1.0 and 3.2 +/- 0.8 mm on the liver dataset and 5.4 +/- 1.4 and 6.4 +/- 1.5 mm on the breast dataset for the regularized Kelvinlets and finite element method models, respectively. This work demonstrates the generalizability of using a regularized Kelvinlets registration algorithm on multiple soft tissue elastic organs. This method may improve and accelerate registration for image-guided surgery applications, and it shows the potential of using regularized Kelvinlet solutions on medical imaging data. Comment: 17 pages, 9 figures
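
    For reference, the analytical building block named above has a simple closed form: the regularized Kelvinlet of de Goes and James gives the displacement of an infinite linear-elastic medium under a regularized point load. The sketch below is a generic NumPy implementation of that formula, not the paper's registration algorithm; the shear modulus mu, Poisson ratio nu, and radial scale eps are illustrative values.

    import numpy as np

    def regularized_kelvinlet(x, x0, f, mu=1.0, nu=0.45, eps=5.0):
        """Displacement at points x (N, 3) due to a regularized point force f (3,) applied at x0 (3,)."""
        a = 1.0 / (4.0 * np.pi * mu)
        b = a / (4.0 * (1.0 - nu))
        r = x - x0                                           # offsets from the load location
        r_eps = np.sqrt(np.sum(r ** 2, axis=1) + eps ** 2)   # regularized distance
        iso = (a - b) / r_eps + a * eps ** 2 / (2.0 * r_eps ** 3)   # isotropic part
        directional = b * (r @ f) / r_eps ** 3                      # (r r^T) f contribution
        return iso[:, None] * f + directional[:, None] * r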

    Atlas-based Transfer of Boundary Conditions for Biomechanical Simulation

    An environment composed of different types of living tissues (such as the abdominal cavity) reveals a high complexity of boundary conditions, which are the attachments (e.g. connective tissues, ligaments) connecting different anatomical structures. Together with the material properties, the boundary conditions have a significant influence on the mechanical response of the organs; however, correct mechanical modeling of these conditions remains a challenging task, as the connective structures are difficult to identify in certain standard imaging modalities. In this paper, we present a method for automatic modeling of boundary conditions in deformable anatomical structures, which is an important step in patient-specific biomechanical simulations. The method is based on a statistical atlas which gathers data defining the connective structures attached to the organ of interest. In order to transfer the information stored in the atlas to a specific patient, the atlas is registered to the patient data using a physics-based technique and the resulting boundary conditions are defined according to the mean position and variance available in the atlas. The method is evaluated using abdominal scans of ten patients. The results show that the atlas provides sufficient information about the boundary conditions, which can be reliably transferred to a specific patient. The boundary conditions obtained by the atlas-based transfer show a good match both with actual segmented boundary conditions and in terms of the mechanical response of deformable organs.
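
    A much-simplified sketch of the transfer step described above: once the atlas is registered to the patient, each patient surface vertex inherits the atlas attachment statistics, and vertices whose attachment frequency is high enough are fixed as boundary conditions for the simulation. The data layout and the threshold are assumptions for illustration, not the paper's exact procedure.

    import numpy as np

    def transfer_boundary_conditions(atlas_attachment_freq, correspondences, threshold=0.5):
        """atlas_attachment_freq: (M,) per-atlas-vertex attachment frequency in [0, 1];
        correspondences: (N,) index of the matched atlas vertex for each patient vertex.
        Returns a boolean mask of patient vertices to constrain in the biomechanical model."""
        patient_freq = atlas_attachment_freq[correspondences]  # pull statistics onto the patient mesh
        return patient_freq >= threshold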

    Learning correspondences of cardiac motion from images using biomechanics-informed modeling

    Learning spatial-temporal correspondences in cardiac motion from images is important for understanding the underlying dynamics of cardiac anatomical structures. Many methods explicitly impose smoothness constraints such as the L2 norm on the displacement vector field (DVF), while usually ignoring biomechanical feasibility in the transformation. Other geometric constraints either regularize specific regions of interest, such as imposing incompressibility on the myocardium, or introduce additional steps, such as training a separate network-based regularizer on physically simulated datasets. In this work, we propose an explicit biomechanics-informed prior as regularization on the predicted DVF to model a more generic biomechanically plausible transformation within all cardiac structures without introducing additional training complexity. We validate our methods on two publicly available datasets in the context of 2D MRI data and perform extensive experiments to illustrate the effectiveness and robustness of our proposed methods compared to other competing regularization schemes. Our proposed methods better preserve biomechanical properties by visual assessment and show advantages in segmentation performance using quantitative evaluation metrics. The code is publicly available at https://github.com/Voldemort108X/bioinformed_reg. Comment: Accepted by MICCAI-STACOM 2022 as an oral presentation
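
    As one concrete example of a biomechanically motivated penalty on a predicted DVF (not the specific prior proposed in this work), the sketch below penalizes deviation of the local Jacobian determinant from one, i.e. near-incompressibility of the deforming tissue, for a batch of 2D displacement fields.

    import torch

    def incompressibility_penalty(dvf):
        """dvf: (B, 2, H, W) displacement field in pixels; returns mean (det(J) - 1)^2."""
        du_dy, du_dx = torch.gradient(dvf[:, 0], dim=[1, 2])
        dv_dy, dv_dx = torch.gradient(dvf[:, 1], dim=[1, 2])
        # Jacobian of the mapping x -> x + u(x) is J = I + grad(u)
        det_j = (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx
        return ((det_j - 1.0) ** 2).mean()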

    Biomechanics-informed Neural Networks for Myocardial Motion Tracking in MRI

    Image registration is an ill-posed inverse problem which often requires regularisation on the solution space. In contrast to most of the current approaches which impose explicit regularisation terms such as smoothness, in this paper we propose a novel method that can implicitly learn biomechanics-informed regularisation. Such an approach can incorporate application-specific prior knowledge into deep learning based registration. Particularly, the proposed biomechanics-informed regularisation leverages a variational autoencoder (VAE) to learn a manifold for biomechanically plausible deformations and to implicitly capture their underlying properties via reconstructing biomechanical simulations. The learnt VAE regulariser can then be coupled with any deep learning based registration network to regularise the solution space to be biomechanically plausible. The proposed method is validated in the context of myocardial motion tracking on 2D stacks of cardiac MRI data from two different datasets. The results show that it can achieve better performance than other competing methods in terms of motion tracking accuracy and has the ability to learn biomechanical properties such as incompressibility and strains. The method has also been shown to have better generalisability to unseen domains compared with commonly used L2 regularisation schemes. Comment: The paper was early accepted by MICCAI 202
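
    The sketch below illustrates the plug-in nature of such a learnt regulariser: a VAE pre-trained to reconstruct biomechanically simulated deformations reconstructs implausible deformations poorly, so its reconstruction error can be added to any registration loss. The VAE interface and the weighting are assumptions for illustration, not the authors' exact formulation.

    import torch

    def vae_regulariser(pred_deformation, frozen_vae):
        """Penalise predicted deformations that the pre-trained VAE cannot reconstruct."""
        reconstruction = frozen_vae(pred_deformation)  # assumed to return the reconstruction
        return torch.mean((reconstruction - pred_deformation) ** 2)

    # During registration training (sketch):
    #   loss = similarity_loss + lambda_bio * vae_regulariser(pred_deformation, frozen_vae)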

    A comparative evaluation of 3 different free-form deformable image registration and contour propagation methods for head and neck MRI: the case of parotid changes during radiotherapy

    Purpose: To validate and compare the deformable image registration and parotid contour propagation process for head and neck magnetic resonance imaging in patients treated with radiotherapy using 3 different approaches: the commercial MIM, the open-source Elastix software, and an optimized version of it. Materials and Methods: Twelve patients with head and neck cancer previously treated with radiotherapy were considered. Deformable image registration and parotid contour propagation were evaluated by considering the magnetic resonance images acquired before and after the end of the treatment. Deformable image registration, based on the free-form deformation method, and contour propagation available on MIM were compared to Elastix. Two different contour propagation approaches were implemented for the Elastix software: a conventional one (DIR_Trx) and an optimized homemade version based on mesh deformation (DIR_Mesh). The accuracy of these 3 approaches was estimated by comparing propagated to manual contours in terms of average symmetric distance, maximum symmetric distance, Dice similarity coefficient, sensitivity, and inclusiveness. Results: A good agreement was generally found between the manual contours and the propagated ones, without differences among the 3 methods; in a few critical cases with complex deformations, DIR_Mesh proved to be more accurate, having the lowest values of average symmetric distance and maximum symmetric distance and the highest value of Dice similarity coefficient, although the differences were not significant. The average propagation errors with respect to the reference contours are lower than the voxel diagonal (2 mm), and the Dice similarity coefficient is around 0.8 for all 3 methods. Conclusion: The 3 free-form deformation approaches were not significantly different in terms of deformable image registration accuracy and can be safely adopted for the registration and parotid contour propagation during radiotherapy on magnetic resonance imaging. More optimized approaches (such as DIR_Mesh) could be preferable for critical deformations.
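
    For completeness, the overlap and distance metrics used in this comparison can be computed as in the sketch below: Dice on binary masks, and average/maximum symmetric distance between surface point sets. Point extraction and voxel-spacing handling are simplified assumptions.

    import numpy as np
    from scipy.spatial import cKDTree

    def dice_coefficient(mask_a, mask_b):
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def symmetric_surface_distances(pts_a, pts_b):
        """pts_a (N, 3) and pts_b (M, 3) are surface points in mm; returns (average, maximum)."""
        d_ab, _ = cKDTree(pts_b).query(pts_a)  # each point of A to its nearest point of B
        d_ba, _ = cKDTree(pts_a).query(pts_b)  # each point of B to its nearest point of A
        all_d = np.concatenate([d_ab, d_ba])
        return float(all_d.mean()), float(all_d.max())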

    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets closely matching the distribution of the training data. This makes it difficult to determine whether the demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction, and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy relative to the state of the art, with substantially lower computation time, while requiring a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.

    Inverse-Consistent Determination of Young's Modulus of Human Lung

    Human lung undergoes respiration-induced deformation due to sequential inhalation and exhalation. Accurate determination of lung deformation is crucial for tumor localization and targeted radiotherapy in patients with lung cancer. Numerical modeling of human lung dynamics based on underlying physics and physiology enables simulation and virtual visualization of lung deformation. Dynamical modeling is numerically complicated by the lack of information on lung elastic behavior, structural heterogeneity, as well as boundary constraints. This study integrates physics-based modeling and image-based data acquisition to develop a patient-specific biomechanical model and consequently establish the first consistent Young's modulus (YM) of human lung. This dissertation has four major components: (i) develop a biomechanical model for computation of the flow and deformation characteristics that can utilize subject-specific, spatially-dependent lung material property; (ii) develop a fusion algorithm to integrate deformation results from a deformable image registration (DIR) and physics-based modeling using the theory of Tikhonov regularization; (iii) utilize the fusion algorithm to establish a unique and consistent patient-specific Young's modulus; and (iv) validate the biomechanical model utilizing the established patient-specific elastic property with imaging data. The simulation is performed on three-dimensional lung geometry reconstructed from a four-dimensional computed tomography (4DCT) dataset of human subjects. The heterogeneous Young's modulus is estimated from a linear elastic deformation model with the same lung geometry and 4D lung DIR. The biomechanical model adequately predicts the spatio-temporal lung deformation, consistent with data obtained from imaging. The accuracy of the numerical solution is enhanced through fusion with the imaging data beyond the classical comparison of the two sets of data. Finally, the fused displacement results are used to establish a unique and consistent patient-specific elastic property of the lung.
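
    As a toy illustration of the Tikhonov-style fusion idea (component ii), treating the DIR-derived displacements as the data term and the physics-based model prediction as the regulariser, minimizing ||u - u_dir||^2 + lam * ||u - u_model||^2 pointwise yields the closed-form weighted average below. The actual fusion in this work may include additional terms; lam is an assumed weight.

    import numpy as np

    def fuse_displacements(u_dir, u_model, lam=1.0):
        """u_dir, u_model: displacement fields of identical shape (..., 3), in mm."""
        return (u_dir + lam * u_model) / (1.0 + lam)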

    MR to Ultrasound Registration for Image-Guided Prostate Biopsy

    Transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors, it often results in false negatives. Magnetic Resonance Imaging (MRI) has been considered a promising imaging modality for noninvasive identification of PCa, since it can provide high sensitivity and specificity for the detection of early stage PCa. Our main objective is to develop a registration method for 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. We proposed an image-based non-rigid registration approach which employs the modality independent neighborhood descriptor (MIND) as the local similarity feature. An efficient duality-based convex optimization scheme was introduced to extract the deformations. The registration accuracy was evaluated using 20 patient images by calculating the target registration error (TRE) using manually identified corresponding intrinsic fiducials. Additional performance metrics (DSC, MAD, and MAXD) were also calculated by comparing the MR and TRUS manually segmented prostate surfaces in the registered images. Experimental results showed that the proposed method yielded an overall median TRE of 1.76 mm. In addition, we proposed a surface-based registration method, which first makes use of an initial rigid registration of 3D MR to TRUS using 6 manually placed corresponding landmarks in each image. Following the manual initialization, two prostate surfaces are segmented from the 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline algorithm. The registration accuracy was evaluated using 17 patient images by measuring TRE. Experimental results show that the proposed method yielded an overall mean TRE of 2.24 mm, which compares favorably to the clinical requirement of an error of less than 2.5 mm.
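
    A short sketch of the target registration error (TRE) evaluation used in both experiments: corresponding landmarks are identified in the two images, the moving-image landmarks are mapped through the estimated transform, and TRE is their distance to the fixed-image counterparts. The transform interface is an assumption for illustration.

    import numpy as np

    def target_registration_error(fixed_pts, moving_pts, transform):
        """fixed_pts, moving_pts: (N, 3) corresponding landmarks in mm;
        transform: callable mapping moving-space points (N, 3) into fixed-image space."""
        mapped = transform(moving_pts)
        return np.linalg.norm(mapped - fixed_pts, axis=1)  # per-landmark TRE in mm

    # Example: tre = target_registration_error(f_pts, m_pts, lambda p: p @ R.T + t); report tre.mean()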