
    To Learn or Not to Learn Features for Deformable Registration?

    Feature-based registration has been popular with a variety of features, ranging from voxel intensity to Self-Similarity Context (SSC). In this paper, we examine the question of how features learnt using various Deep Learning (DL) frameworks can be used for deformable registration, and whether this feature learning is necessary at all. We investigate the use of features learned by different DL methods in the current state-of-the-art discrete registration framework and analyze their performance on two publicly available datasets. We draw insights into the type of DL framework useful for feature learning and the impact, if any, of the complexity of different DL models and brain parcellation methods on the performance of discrete registration. Our results indicate that registration performance with DL features and with SSC is comparable and stable across datasets, whereas this does not hold for low-level features. (Comment: 9 pages, 4 figures)
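A minimal sketch, not the paper's implementation, of how per-voxel feature descriptors (hand-crafted such as SSC, or learned by a DL model) plug into a discrete registration step: each voxel scores a small set of candidate displacements by descriptor dissimilarity and keeps the best one. The descriptor shapes and SSD cost are illustrative assumptions; real discrete frameworks add spatial regularisation (e.g. an MRF solve) on top of this unary cost volume.

```python
import numpy as np

def discrete_displacements(fixed_feat, moving_feat, search_radius=1):
    """fixed_feat, moving_feat: (D, H, W, C) arrays of per-voxel descriptors."""
    D, H, W, C = fixed_feat.shape
    best_cost = np.full((D, H, W), np.inf)
    best_disp = np.zeros((D, H, W, 3), dtype=np.int64)
    r = search_radius
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                # Bring moving_feat[x + d] to position x, then compare descriptors.
                shifted = np.roll(moving_feat, shift=(-dz, -dy, -dx), axis=(0, 1, 2))
                cost = np.sum((fixed_feat - shifted) ** 2, axis=-1)  # SSD in feature space
                better = cost < best_cost
                best_cost = np.where(better, cost, best_cost)
                best_disp[better] = (dz, dy, dx)
    return best_disp, best_cost

# Toy usage: 8-channel descriptors, moving volume = fixed volume shifted by (1, 0, -1).
rng = np.random.default_rng(0)
fixed = rng.normal(size=(16, 16, 16, 8)).astype(np.float32)
moving = np.roll(fixed, shift=(1, 0, -1), axis=(0, 1, 2))
disp, _ = discrete_displacements(fixed, moving, search_radius=1)
print(np.unique(disp.reshape(-1, 3), axis=0))  # recovers the true shift
```

Swapping the descriptor arrays (intensity patches, SSC, or CNN feature maps) while keeping the cost volume fixed is exactly the kind of comparison the abstract describes.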

    Keypoint Transfer for Fast Whole-Body Segmentation

    We introduce an approach for image segmentation based on sparse correspondences between keypoints in testing and training images. Keypoints represent automatically identified distinctive image locations, where each keypoint correspondence suggests a transformation between images. We use these correspondences to transfer label maps of entire organs from the training images to the test image. The keypoint transfer algorithm consists of three steps: (i) keypoint matching, (ii) voting-based keypoint labeling, and (iii) keypoint-based probabilistic transfer of organ segmentations. We report segmentation results for abdominal organs in whole-body CT and MRI, as well as in contrast-enhanced CT and MRI. Our method offers a speed-up of about three orders of magnitude compared to common multi-atlas segmentation, while achieving an accuracy that compares favorably. Moreover, keypoint transfer does not require registration to an atlas or a training phase. Finally, the method allows for the segmentation of scans with a highly variable field of view. (Comment: Accepted for publication in IEEE Transactions on Medical Imaging)
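A minimal sketch, under simplifying assumptions and not the published implementation, of the first two keypoint-transfer steps listed above: matching each test keypoint to its nearest training descriptors, then voting on its organ label. The descriptor dimensionality, k-nearest matching, and toy data are illustrative.

```python
import numpy as np

def match_keypoints(test_desc, train_desc, k=3):
    """Step (i): indices of the k nearest training descriptors per test keypoint."""
    d2 = ((test_desc[:, None, :] - train_desc[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]          # (n_test, k)

def vote_labels(matches, train_labels, n_organs):
    """Step (ii): majority vote over the matched training keypoints' organ labels."""
    votes = np.zeros((matches.shape[0], n_organs), dtype=int)
    for i, idx in enumerate(matches):
        for j in idx:
            votes[i, train_labels[j]] += 1
    return votes.argmax(axis=1)

# Toy data: 5 test keypoints, 20 training keypoints, 4 organ labels.
rng = np.random.default_rng(1)
test_desc = rng.normal(size=(5, 64))
train_desc = rng.normal(size=(20, 64))
train_labels = rng.integers(0, 4, size=20)

matches = match_keypoints(test_desc, train_desc, k=3)
labels = vote_labels(matches, train_labels, n_organs=4)
print(labels)
# Step (iii) would then translate each matched training organ's label map by the
# keypoint displacement and fuse the transferred maps probabilistically.
```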

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This has motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations - data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain - still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, the topology of brain vessels is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join along minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph-matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms across the whole network in the presence of perturbations, using lumped-parameter analog equivalents derived from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations using an isogeometric analysis framework, where both the geometry and the solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future work are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk-prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
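A minimal, illustrative sketch (not the VTrails implementation) of the tree-extraction step described above: once an over-connected geodesic graph of vessel nodes is available, a minimum spanning tree keeps the cheapest connected backbone. The cost matrix below stands in for geodesic path costs between hypothetical vessel nodes.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

# Symmetric cost matrix over 5 hypothetical vessel nodes (0 = no candidate edge).
costs = np.array([
    [0.0, 1.2, 2.5, 0.0, 0.0],
    [1.2, 0.0, 0.9, 3.1, 0.0],
    [2.5, 0.9, 0.0, 1.1, 4.0],
    [0.0, 3.1, 1.1, 0.0, 0.7],
    [0.0, 0.0, 4.0, 0.7, 0.0],
])

tree = minimum_spanning_tree(csr_matrix(costs))
print(tree.toarray())  # retained edges of the spanning tree; pruned edges are zeroed
```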

    Multi-Atlas based Segmentation of Multi-Modal Brain Images

    Brain image analysis plays a fundamental role in clinical and population-based epidemiological studies. Several brain disorder studies involve quantitative interpretation of brain scans and, in particular, require accurate measurement and delineation of tissue volumes in the scans. Automatic segmentation methods have been proposed to provide reliable and accurate labelling within an automated procedure. Taking advantage of prior information about the brain's anatomy, provided by an atlas used as a reference model, can help simplify the labelling process. Segmentation in the atlas-based approach becomes problematic if the atlas and the target image are not accurately aligned, or if the atlas does not appropriately represent the anatomical structure or region. The accuracy of the segmentation can be improved by utilising a group of atlases. Employing multiple atlases, however, brings considerable challenges when segmenting a new subject's brain image. Registering multiple atlases to the target scan and fusing labels from the registered atlases, for a population obtained from different modalities, are challenging tasks: image-intensity comparisons may no longer be valid, since image brightness can have highly differing meanings in different modalities. The focus of this work is on the problem of multi-modality, and methods are designed and developed to deal with this issue specifically in image registration and label fusion. To handle multi-modal image registration, two independent approaches are followed. First, a similarity measure is proposed based on comparing the self-similarity of each of the images to be aligned. Second, two methods are proposed to reduce the multi-modal problem to a mono-modal one by constructing representations that do not rely on the image intensities: one structural representation uses an un-decimated complex wavelet representation, and the other uses a modified approach based on entropy. To handle cross-modality label fusion, a method is proposed to weight atlases based on atlas-target similarity, measured by a scale-based comparison that takes advantage of structural features captured from un-decimated complex wavelet coefficients. The proposed methods are assessed using simulated and real brain data from computed tomography images and different modes of magnetic resonance images. Experimental results reflect the superiority of the proposed methods over classical and state-of-the-art methods.
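A minimal sketch of a generic weighted-voting scheme, not the thesis' exact wavelet-based method, showing how labels propagated from several registered atlases can be fused when each atlas's vote is weighted by an atlas-target similarity score. The label maps, weights, and shapes are toy assumptions.

```python
import numpy as np

def weighted_label_fusion(atlas_labels, atlas_weights, n_labels):
    """atlas_labels: (n_atlases, ...) propagated label maps;
    atlas_weights: (n_atlases,) similarity-derived weights."""
    votes = np.zeros((n_labels,) + atlas_labels.shape[1:])
    for labels, w in zip(atlas_labels, atlas_weights):
        for l in range(n_labels):
            votes[l] += w * (labels == l)   # each atlas votes for its own label, scaled by w
    return votes.argmax(axis=0)             # label with the highest weighted vote per voxel

# Toy example: three 4x4 atlas label maps with weights favouring atlas 0.
rng = np.random.default_rng(2)
atlas_labels = rng.integers(0, 3, size=(3, 4, 4))
fused = weighted_label_fusion(atlas_labels, np.array([0.6, 0.3, 0.1]), n_labels=3)
print(fused)
```

In the cross-modal setting described above, the weights would come from a structural (intensity-independent) similarity between atlas and target rather than from raw intensities.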

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
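A minimal, framework-agnostic sketch (not taken from the survey) of the core objective most deep-learning registration networks optimise: an image-similarity term between the warped moving image and the fixed image plus a smoothness regulariser on the predicted displacement field. MSE similarity, a diffusion-style regulariser, and the weight `lam` are illustrative choices.

```python
import numpy as np

def registration_loss(fixed, warped_moving, disp, lam=0.01):
    """disp: (2, H, W) displacement field predicted by the network."""
    similarity = np.mean((fixed - warped_moving) ** 2)    # MSE; NCC or MI are common alternatives
    dy = np.diff(disp, axis=1)                            # spatial gradients of the field
    dx = np.diff(disp, axis=2)
    smoothness = np.mean(dy ** 2) + np.mean(dx ** 2)      # penalise non-smooth deformations
    return similarity + lam * smoothness

# Toy 2D example with a random field.
rng = np.random.default_rng(3)
fixed = rng.random((32, 32))
warped = fixed + 0.05 * rng.standard_normal((32, 32))
disp = 0.1 * rng.standard_normal((2, 32, 32))
print(registration_loss(fixed, warped, disp))
```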

    Novel Multi-Scale Architecture for Medical Image Registration

    Medical image registration is an integral component of many medical image analysis pipelines. While registration has conventionally been carried out using optimization techniques, there is growing interest in the application of deep learning to medical image registration. Deep learning based image registration (DLIR) methods have shown mixed results; they are competitive with optimization-based methods on some small-displacement datasets, but struggle to match the performance of optimization-based methods in large-displacement settings. This work explores which architectural features can improve network generalization by adopting tried-and-tested approaches from the optical flow literature.
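A minimal sketch, under stated assumptions and not the paper's architecture, of the coarse-to-fine strategy commonly borrowed from optical-flow methods for large displacements: estimate a displacement field at the coarsest scale, upsample and rescale it, and let each finer level predict only a residual correction. `estimate_residual` is a hypothetical stand-in for a learned per-level network and simply returns zeros here.

```python
import numpy as np
from scipy.ndimage import zoom

def estimate_residual(fixed, moving, init_disp):
    # Hypothetical per-level predictor; a real DLIR model would be a CNN here.
    return np.zeros_like(init_disp)

def coarse_to_fine(fixed, moving, n_levels=3):
    # Image pyramid, coarsest level first.
    pyramid = [(zoom(fixed, 1 / 2 ** l), zoom(moving, 1 / 2 ** l))
               for l in reversed(range(n_levels))]
    disp = np.zeros((2,) + pyramid[0][0].shape)
    for i, (f, m) in enumerate(pyramid):
        disp += estimate_residual(f, m, disp)              # refine at the current level
        if i + 1 < len(pyramid):
            next_shape = pyramid[i + 1][0].shape
            scale = next_shape[0] / f.shape[0]
            disp = np.stack([zoom(d, scale) for d in disp]) * scale  # upsample and rescale
    return disp

fixed = np.random.default_rng(4).random((64, 64))
moving = np.roll(fixed, 2, axis=0)
print(coarse_to_fine(fixed, moving).shape)  # (2, 64, 64)
```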