
    Adversarial Deformation Regularization for Training Image Registration Neural Networks

    We describe an adversarial learning approach to constrain convolutional neural network training for image registration, replacing heuristic smoothness measures of displacement fields often used in these tasks. Using minimally-invasive prostate cancer intervention as an example application, we demonstrate the feasibility of utilizing biomechanical simulations to regularize a weakly-supervised anatomical-label-driven registration network for aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural transrectal ultrasound (TRUS) images. A discriminator network is optimized to distinguish the registration-predicted displacement fields from the motion data simulated by finite element analysis. During training, the registration network simultaneously aims to maximize the similarity between anatomical labels that drives image alignment and to minimize an adversarial generator loss that measures the divergence between the predicted and simulated deformations. The end-to-end trained network enables efficient and fully-automated registration that only requires an MR and TRUS image pair as input, without anatomical labels or simulated data during inference. 108 pairs of labelled MR and TRUS images from 76 prostate cancer patients and 71,500 nonlinear finite-element simulations from 143 different patients were used for this study. We show that, with only gland segmentation as training labels, the proposed method can help predict physically plausible deformation without any other smoothness penalty. Based on cross-validation experiments using 834 pairs of independent validation landmarks, the proposed adversarial-regularized registration achieved a target registration error of 6.3 mm that is significantly lower than those from several other regularization methods.
    Comment: Accepted to MICCAI 201
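    As a rough illustration of the training scheme described above, the sketch below pairs a label-similarity loss with an adversarial generator loss, while a discriminator learns to separate predicted from finite-element-simulated displacement fields. It assumes a PyTorch-style setup; registration_net, discriminator, warp, dice, and the loss weighting are illustrative stand-ins rather than the authors' released code.

# Hedged sketch of adversarial deformation regularization for a registration network.
# registration_net, discriminator, warp and dice are hypothetical callables.
import torch
import torch.nn.functional as F

def train_step(registration_net, discriminator, warp, dice,
               mr, trus, mr_label, trus_label, simulated_ddf,
               opt_reg, opt_disc, adv_weight=0.1):
    # 1) Registration network predicts a dense displacement field (DDF)
    #    from the MR/TRUS image pair.
    ddf = registration_net(mr, trus)

    # 2) Discriminator step: distinguish FEM-simulated motion ("real")
    #    from network-predicted motion ("fake").
    d_real = discriminator(simulated_ddf)
    d_fake = discriminator(ddf.detach())
    disc_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
                 F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 3) Registration step: maximise label overlap after warping while
    #    minimising the adversarial generator loss, which stands in for an
    #    explicit smoothness penalty on the DDF.
    warped_label = warp(mr_label, ddf)
    label_loss = 1.0 - dice(warped_label, trus_label)
    gen_logits = discriminator(ddf)
    gen_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    reg_loss = label_loss + adv_weight * gen_loss
    opt_reg.zero_grad(); reg_loss.backward(); opt_reg.step()
    return disc_loss.item(), reg_loss.item()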

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered and a feature-based method is proposed to estimate the motion of the tissue in real-time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for Ultrasound tissue scanning. Since the framework does not rely on information from the Ultrasound data, it can be easily extended to other probe-based imaging modalities.
    Comment: 7 pages, 5 figures, ICRA 202
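    The trajectory update via projective geometry could be pictured as follows: tracked tissue features between the reference frame and the current frame give a homography, which re-maps the manually defined scanning waypoints onto the moving tissue. This is a simplified 2D OpenCV-based illustration under an assumption of locally planar motion; the paper recovers full 3D structure, and the function below is a hypothetical helper, not the authors' implementation.

# Hedged sketch: re-project a reference-frame scanning trajectory onto the
# current frame using a homography estimated from tracked tissue features.
import numpy as np
import cv2

def update_scanning_trajectory(ref_points, cur_points, ref_trajectory):
    """ref_points/cur_points: (N, 2) tracked features in the reference and
    current frames; ref_trajectory: (M, 2) waypoints drawn on the reference
    frame. Returns the waypoints mapped into the current frame."""
    # Robustly estimate the reference-to-current projective transform.
    H, _ = cv2.findHomography(ref_points.astype(np.float32),
                              cur_points.astype(np.float32), cv2.RANSAC, 3.0)
    # Re-project the desired scanning trajectory so it follows tissue motion;
    # a robot controller would then servo the probe along these waypoints.
    traj = ref_trajectory.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(traj, H).reshape(-1, 2)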

    SearchMorph: Multi-scale Correlation Iterative Network for Deformable Registration

    Deformable image registration can obtain dynamic information about images, which is of great significance in medical image analysis. Unsupervised deep learning registration methods can quickly achieve high registration accuracy without labels. However, these methods generally suffer from uncorrelated features, poor ability to register large deformations and fine details, and unnatural deformation fields. To address the issues above, we propose an unsupervised multi-scale correlation iterative registration network (SearchMorph). In the proposed network, we introduce a correlation layer to strengthen the relevance between features and construct a correlation pyramid to provide multi-scale relevance information for the network. We also design a deformation field iterator, which improves the ability of the model to register details and large deformations through a search module and a GRU while ensuring that the deformation field is realistic. We use single-temporal brain MR images and multi-temporal echocardiographic sequences to evaluate the model's ability to register large deformations and details. The experimental results demonstrate that the proposed method achieves the highest registration accuracy and the lowest folding-point ratio, with a shorter elapsed time than state-of-the-art methods.
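    A rough sketch of the correlation layer and correlation pyramid mentioned above is given below, computing RAFT-style all-pairs correlation between moving and fixed feature maps and pooling it into multiple scales. Shapes, normalization, and pooling choices are assumptions, not the SearchMorph code.

# Hedged sketch of an all-pairs correlation layer and multi-scale correlation pyramid.
import torch
import torch.nn.functional as F

def correlation_pyramid(feat_moving, feat_fixed, num_levels=3):
    """feat_*: (B, C, H, W) feature maps. Returns a list of correlation volumes,
    each of shape (B*H*W, 1, H/2^l, W/2^l), giving multi-scale relevance cues."""
    b, c, h, w = feat_moving.shape
    f1 = feat_moving.reshape(b, c, h * w)                    # (B, C, HW)
    f2 = feat_fixed.reshape(b, c, h * w)
    corr = torch.einsum('bci,bcj->bij', f1, f2) / c ** 0.5   # all-pairs correlation
    corr = corr.reshape(b * h * w, 1, h, w)                  # one map per query pixel
    pyramid = [corr]
    for _ in range(num_levels - 1):
        # Average-pool to coarser scales so the iterator can look up both
        # fine detail and large-deformation context.
        corr = F.avg_pool2d(corr, kernel_size=2, stride=2)
        pyramid.append(corr)
    return pyramid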

    Meta-Learning Initializations for Interactive Medical Image Registration

    We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled transrectal ultrasound (TRUS) images. Our approach obtains comparable registration error (4.26 mm) to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data, and occurring in real-time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
    Comment: 11 pages, 10 figures. Paper accepted to IEEE Transactions on Medical Imaging (October 26, 2022)
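    One way to picture the meta-learned initialization is a Reptile-style outer loop: adapt a copy of the network on a sampled task for a few steps (mimicking interactive refinement at inference), then nudge the shared initialization toward the adapted weights. The sketch below assumes PyTorch; registration_net, registration_loss, and sample_task_batches are hypothetical stand-ins, and the paper's actual meta-learning protocol may differ.

# Hedged sketch of a Reptile-style meta-learning loop for a rapidly adaptable
# registration-network initialization.
import copy
import torch

def meta_train(registration_net, sample_task_batches, registration_loss,
               meta_steps=1000, inner_steps=5, inner_lr=1e-4, meta_lr=0.1):
    for _ in range(meta_steps):
        # Clone the current initialization and adapt it on one patient/task,
        # mimicking the interactive refinement performed at inference time.
        task_net = copy.deepcopy(registration_net)
        opt = torch.optim.Adam(task_net.parameters(), lr=inner_lr)
        for batch in sample_task_batches(inner_steps):
            loss = registration_loss(task_net, batch)
            opt.zero_grad(); loss.backward(); opt.step()
        # Reptile outer update: move the initialization toward the adapted
        # weights, so a few inner steps at inference register new data well.
        with torch.no_grad():
            for p, q in zip(registration_net.parameters(), task_net.parameters()):
                p.add_(meta_lr * (q - p))
    return registration_net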

    Multiscale medial shape-based analysis of image objects

    Medial representation of a three-dimensional (3-D) object or an ensemble of 3-D objects involves capturing the object interior as a locus of medial atoms, each atom being two vectors of equal length joined at the tail at the medial point. Medial representation has a variety of beneficial properties, among the most important of which are: 1) its inherent geometry provides an object-intrinsic coordinate system and thus provides correspondence between instances of the object in and near the object(s); 2) it captures the object interior and is, thus, very suitable for deformation; and 3) it provides the basis for an intuitive object-based multiscale sequence, leading to efficiency of segmentation algorithms and trainability of statistical characterizations with limited training sets. As a result of these properties, medial representation is particularly suitable for the following image analysis tasks; how each operates will be described and illustrated by results: 1) segmentation of objects and object complexes via deformable models; 2) segmentation of tubular trees, e.g., of blood vessels, by following height ridges of measures of fit of medial atoms to target images; 3) object-based image registration via medial loci of such blood vessel trees; and 4) statistical characterization of shape differences between control and pathological classes of structures. These analysis tasks are made possible by a new form of medial representation called m-reps, which is described.
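    As a concrete, if simplified, picture of the medial-atom primitive described above, the sketch below stores a medial point with two equal-length spokes whose tips imply boundary points on either side of the medial sheet. It is a minimal stand-in for the m-reps formulation (which also carries an atom frame and object angle), not the original representation.

# Hedged sketch of a simplified medial atom: a medial point with two
# equal-length spoke vectors whose tips imply the object boundary.
from dataclasses import dataclass
import numpy as np

@dataclass
class MedialAtom:
    position: np.ndarray    # medial (skeletal) point, shape (3,)
    radius: float           # common length of both spokes
    spoke_dir0: np.ndarray  # unit direction of the first spoke, shape (3,)
    spoke_dir1: np.ndarray  # unit direction of the second spoke, shape (3,)

    def boundary_points(self):
        """Boundary points implied by the atom: the spoke tips, which give
        object-intrinsic coordinates in and near the object interior."""
        b0 = self.position + self.radius * self.spoke_dir0
        b1 = self.position + self.radius * self.spoke_dir1
        return b0, b1

# Example: an atom whose spokes point symmetrically about the medial sheet.
atom = MedialAtom(position=np.zeros(3), radius=2.0,
                  spoke_dir0=np.array([0.0, 0.6, 0.8]),
                  spoke_dir1=np.array([0.0, -0.6, 0.8]))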

    Medical image registration using unsupervised deep neural network: A scoping literature review

    In medicine, image registration is vital in image-guided interventions and other clinical applications. However, it is a difficult problem to address; with the advent of machine learning, considerable progress in algorithmic performance has recently been achieved for medical image registration. The implementation of deep neural networks provides an opportunity for some medical applications, such as conducting image registration in less time with high accuracy, playing a key role in treating tumors during the operation. The current study presents a comprehensive scoping review of the state-of-the-art literature on medical image registration based on unsupervised deep neural networks, encompassing all related studies published in this field to date. Here, we summarize the latest developments and applications of unsupervised deep learning-based registration methods in the medical field. Fundamental concepts, techniques, statistical analyses from different viewpoints, novelties, and future directions are discussed in detail in this comprehensive scoping review. In addition, this review aims to help active readers who are interested in this field gain deep insight into it.