
    Deformable Registration through Learning of Context-Specific Metric Aggregation

    We propose a novel weakly supervised discriminative algorithm for learning context-specific registration metrics as a linear combination of conventional similarity measures. Conventional metrics have been used extensively over the past two decades, so both their strengths and limitations are well known. The challenge is to find the optimal relative weighting (or parameters) of the different metrics forming the similarity measure of the registration algorithm. Hand-tuning these parameters yields suboptimal solutions and quickly becomes infeasible as the number of metrics increases. Furthermore, such a hand-crafted combination can only be applied at a global scale (the entire volume) and therefore cannot account for differing tissue properties. We propose a learning algorithm that estimates these parameters locally, conditioned on the semantic classes of the data. The objective function of our formulation is a difference of convex functions, a special case of a non-convex function, which we optimize using the concave-convex procedure. As a proof of concept, we show the impact of our approach on three challenging datasets covering different anatomical structures and modalities.
    Comment: Accepted for publication in the 8th International Workshop on Machine Learning in Medical Imaging (MLMI 2017), held in conjunction with MICCAI 2017
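    The core idea above, aggregating conventional similarity metrics with weights conditioned on each voxel's semantic class, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and array layout are assumptions.

```python
import numpy as np

def aggregate_metrics(metric_maps, class_map, weights):
    """Combine per-voxel similarity metric maps using weights conditioned
    on the semantic class of each voxel.

    metric_maps : (M, ...) array of M conventional similarity maps
    class_map   : (...) integer array of semantic class labels per voxel
    weights     : (C, M) array, one learned weight vector per class
    """
    w = weights[class_map]                  # (..., M) per-voxel weight vectors
    maps = np.moveaxis(metric_maps, 0, -1)  # (..., M) metric axis last
    return np.sum(w * maps, axis=-1)        # (...) aggregated similarity
```

    In the paper the weight vectors themselves are learned discriminatively; here they are simply given as input.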

    iRegNet: Non-rigid Registration of MRI to Interventional US for Brain-Shift Compensation using Convolutional Neural Networks

    Accurate and safe neurosurgical intervention can be compromised by intra-operative tissue deformation, known as brain shift. In this study, we propose an automatic, fast, and accurate deformable method, called iRegNet, for registering pre-operative magnetic resonance images to intra-operative ultrasound volumes to compensate for brain shift. iRegNet is a robust end-to-end deep learning approach for the non-linear registration of MRI-iUS images in the context of image-guided neurosurgery. The pre-operative MRI (as the moving image) and the iUS (as the fixed image) are first fed to our convolutional neural network, which estimates a non-rigid transformation field. The MRI is then warped into the iUS coordinate system using the output displacement field. Extensive experiments were conducted on two multi-location databases, BITE and RESECT. Quantitatively, iRegNet reduced the mean landmark error from pre-registration values of 4.18 ± 1.84 mm and 5.35 ± 4.19 mm to 1.47 ± 0.61 mm and 0.84 ± 0.16 mm on the BITE and RESECT datasets, respectively. The study was additionally validated qualitatively by two expert neurosurgeons, who overlaid MRI-iUS pairs before and after deformable registration. Experimental findings show that the proposed iRegNet is fast and outperforms state-of-the-art approaches in accuracy. Furthermore, iRegNet delivers competitive results even on images it was not trained on, demonstrating its generality, and can therefore be valuable in intra-operative neurosurgical guidance.
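    The final step described above, resampling the moving MRI through the predicted dense displacement field, can be sketched as below. This is a generic illustration of displacement-field warping (not the iRegNet network itself); the function name and the convention that displacements are in voxel units are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_displacement(moving, disp):
    """Warp a 3D moving image (e.g. pre-operative MRI) by a dense
    displacement field.

    moving : (D, H, W) image
    disp   : (3, D, H, W) displacement field in voxel units; for each
             fixed-image voxel it gives the offset of the location to
             sample in the moving image.
    """
    grid = np.indices(moving.shape).astype(np.float64)  # identity sampling grid
    coords = grid + disp                                # displaced sample locations
    # trilinear interpolation at the displaced coordinates
    return map_coordinates(moving, coords, order=1, mode='nearest')
```

    A zero displacement field reproduces the input image; a network producing `disp` would be trained so that the warped MRI aligns with the fixed iUS volume.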

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
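    Among the registration-specific loss functions the survey covers, normalized cross-correlation (NCC) is one of the most common similarity measures; deep registration networks typically minimize its negation. A minimal global-NCC sketch (the function name is an assumption; practical losses often use a local, windowed variant):

```python
import numpy as np

def ncc(fixed, warped, eps=1e-8):
    """Global normalized cross-correlation between a fixed image and a
    warped moving image; +1 for identical images, -1 for inverted ones.
    Used as a registration loss in the form -ncc(fixed, warped)."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    denom = np.sqrt((f ** 2).sum() * (w ** 2).sum()) + eps
    return float((f * w).sum() / denom)
```

    Because NCC is invariant to affine intensity changes, it is better suited than plain mean-squared error when the two images differ in brightness and contrast.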

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as on-screen visualization of procedures in progress. In this way, US images can be employed as a template to which pre-operative images are registered, in order to correct for anatomical changes, provide live-image feedback, and consequently improve confidence in resection-margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for a consistent line-of-sight, by the need to keep tracked rigid bodies clean and rigidly fixed, and by a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric, and the use of this registration algorithm within a sensor- and image-fusion algorithm. The work presented here is a motivating step towards a heterogeneous tracking framework for image-guided interventions, in which knowledge from intra-operative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
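    The LC2 metric mentioned above scores MR-US alignment by how well ultrasound intensities can be explained as a linear combination of MRI intensity and MRI gradient magnitude. The sketch below is a deliberately simplified, global version of that idea (the published metric performs the fit in local patches and aggregates); the function name and interface are assumptions.

```python
import numpy as np

def lc2(us, mri, mri_grad, eps=1e-8):
    """Simplified global LC2-style similarity: fraction of ultrasound
    intensity variance explained by a least-squares linear combination
    of MRI intensity, MRI gradient magnitude, and a constant offset."""
    A = np.stack([mri.ravel(), mri_grad.ravel(),
                  np.ones(mri.size)], axis=1)      # design matrix
    b = us.ravel()
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # fit the combination
    resid = b - A @ coef
    return float(1.0 - resid.var() / (b.var() + eps))
```

    The appeal of this formulation for MR-US registration is that it makes no assumption about a fixed intensity mapping between the two modalities, only a locally linear one.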

    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Several brain-disorder studies involve quantitative interpretation of brain scans and, in particular, require accurate measurement and delineation of tissue volumes in the scans. Automatic segmentation methods have been proposed to provide reliable and accurate labelling through an automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas serving as a reference model, can help simplify the labelling process. Atlas-based segmentation becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure or region. Segmentation accuracy can be improved by utilising a group of atlases. Employing multiple atlases, however, raises considerable issues in segmenting a new subject's brain image. Registering multiple atlases to the target scan and fusing the labels from the registered atlases are challenging tasks when the population is obtained from different modalities: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. This thesis focuses on the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. For multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based on comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed that reduce the multi-modal problem to a mono-modal one by constructing representations that do not rely on image intensities: one builds structural representations from an undecimated complex wavelet representation, and the other uses a modified, entropy-based approach.
    To handle cross-modality label fusion, a method is proposed that weights atlases according to atlas-target similarity, measured by scale-based comparison of structural features captured from undecimated complex wavelet coefficients. The proposed methods are assessed using simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results reflect the superiority of the proposed methods over classical and state-of-the-art methods.
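    The similarity-weighted label fusion described above reduces, at its core, to weighted voting over the registered atlas label maps. A minimal sketch (the function name is an assumption, and the atlas-target similarity weights, computed from wavelet features in the thesis, are taken as given):

```python
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_weights, n_labels):
    """Fuse label maps from multiple registered atlases by weighted voting.

    atlas_labels  : (A, ...) integer label maps from A registered atlases
    atlas_weights : length-A similarity weights (scalar per atlas here;
                    per-voxel weight maps also broadcast correctly)
    n_labels      : number of distinct labels
    """
    votes = np.zeros((n_labels,) + atlas_labels.shape[1:])
    for labels, w in zip(atlas_labels, atlas_weights):
        for c in range(n_labels):
            votes[c] += w * (labels == c)      # accumulate weighted votes
    return np.argmax(votes, axis=0)            # winning label per voxel
```

    With uniform weights this reduces to classical majority voting; similarity-based weights let atlases that better match the target dominate the vote.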

    Modeling the Biological Diversity of Pig Carcasses
