
    Multimodality Biomedical Image Registration using Free Point Transformer Networks

    We describe a point-set registration algorithm based on a novel free point transformer (FPT) network, designed for points extracted from multimodal biomedical images for registration tasks, such as those frequently encountered in ultrasound-guided interventional procedures. FPT is constructed with a global feature extractor which accepts unordered source and target point-sets of variable size. The extracted features are conditioned by a shared multilayer perceptron point transformer module to predict a displacement vector for each source point, transforming it into the target space. The point transformer module assumes no vicinity or smoothness in predicting spatial transformation and, together with the global feature extractor, is trained in a data-driven fashion with an unsupervised loss function. In a multimodal registration task using prostate MR and sparsely acquired ultrasound images, FPT yields comparable or improved results over other rigid and non-rigid registration methods. This demonstrates the versatility of FPT to learn registration directly from real, clinical training data and to generalize to a challenging task, such as the interventional application presented.
    Comment: 10 pages, 4 figures. Accepted for publication at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) workshop on Advances in Simplifying Medical UltraSound (ASMUS) 2020.
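    To make the described architecture concrete, a minimal PyTorch-style sketch of the idea is given below: an order-invariant global feature extractor applied to both point sets, and a shared multilayer perceptron applied independently to every source point to predict its displacement. All module names, layer sizes, and the pooling choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a PointNet-style global feature
# extractor plus a shared per-point MLP that predicts displacement vectors.
import torch
import torch.nn as nn

class GlobalFeatureExtractor(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim), nn.ReLU())

    def forward(self, pts):                      # pts: (B, N, 3), N may vary
        return self.mlp(pts).max(dim=1).values   # order-invariant pooling -> (B, feat_dim)

class FreePointTransformerSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.extract = GlobalFeatureExtractor(feat_dim)
        # Shared MLP applied to every source point, conditioned on the
        # concatenated global features of the source and target point sets.
        self.point_transformer = nn.Sequential(
            nn.Linear(3 + 2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3))                    # one displacement vector per point

    def forward(self, source, target):            # (B, Ns, 3), (B, Nt, 3)
        cond = torch.cat([self.extract(source), self.extract(target)], dim=-1)
        cond = cond.unsqueeze(1).expand(-1, source.shape[1], -1)
        displacement = self.point_transformer(torch.cat([source, cond], dim=-1))
        return source + displacement              # source points moved into target space
```

    Because the displacement head sees only a point's coordinates and the pooled global features, no neighbourhood or smoothness constraint is imposed, which matches the abstract's description of the point transformer module.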

    Learning Generalized Non-Rigid Multimodal Biomedical Image Registration from Generic Point Set Data

    Free Point Transformer (FPT) has been proposed as a data-driven, non-rigid point-set registration approach using deep neural networks. As FPT does not assume constraints based on point vicinity or correspondence, it may be trained simply and flexibly by minimizing an unsupervised loss based on the Chamfer Distance. This makes FPT amenable to real-world medical imaging applications where ground-truth deformations may be infeasible to obtain, or where only a varying degree of completeness in the point sets to be aligned is available. To test the limits of FPT's correspondence-finding ability and its dependence on training data, this work explores the generalizability of FPT from well-curated non-medical datasets to medical imaging datasets. First, we train FPT on the ModelNet40 dataset to demonstrate its effectiveness and its superior registration performance over iterative and learning-based point-set registration methods. Second, we demonstrate superior performance in rigid and non-rigid registration and robustness to missing data. Last, we highlight the interesting generalizability of the ModelNet-trained FPT by registering reconstructed freehand ultrasound scans of the spine to generic spine models without additional training, where the average difference from the ground-truth curvature is 1.3 degrees across 13 patients.
    Comment: Accepted to the ASMUS 2022 Workshop at MICCAI.
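    The unsupervised Chamfer Distance loss mentioned above can be written compactly. The following is a small illustrative sketch (assuming dense, unpadded point tensors), not the authors' training code.

```python
import torch

def chamfer_distance(pred, target):
    """Symmetric Chamfer distance between two point sets.

    pred:   (B, N, 3) transformed source points
    target: (B, M, 3) target points
    Returns the mean of nearest-neighbour squared distances in both directions;
    no point correspondences or smoothness constraints are needed.
    """
    d = torch.cdist(pred, target, p=2) ** 2      # pairwise squared distances: (B, N, M)
    forward = d.min(dim=2).values.mean(dim=1)    # each predicted point -> nearest target
    backward = d.min(dim=1).values.mean(dim=1)   # each target point -> nearest prediction
    return (forward + backward).mean()
```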

    Biomedical Image Registration by means of Bacterial Foraging Paradigm

    Image registration (IR) is the process of geometric overlaying or alignment of two or more 2D/3D images of the same scene (unimodal registration), taken at the same or different times, from different angles, and/or by different image acquisition systems (multimodal registration). Technically, image registration implies a complex optimization of different parameters, performed at a local and/or global level. Local optimization methods often fail because the functions of the involved metrics with respect to the transformation parameters are generally nonconvex and irregular, so global methods are required, at least at the beginning of the procedure. This paper presents a new evolutionary, bio-inspired, and robust approach to IR, the Bacterial Foraging Optimization Algorithm (BFOA), which is adapted for PET-CT multimodal and magnetic resonance rigid image registration. Results of optimizing the normalized mutual information and normalized cross-correlation similarity metrics validated the efficacy and precision of the proposed method using a freely available medical image database.
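    As a point of reference for the similarity metrics being optimized, a minimal NumPy sketch of normalized cross-correlation between two same-sized images is shown below; the paper's exact implementation and the BFOA optimizer itself are not reproduced here.

```python
import numpy as np

def normalized_cross_correlation(fixed, moving):
    """Normalized cross-correlation between two same-sized images.

    Returns a value in [-1, 1]; a registration optimizer such as BFOA
    would search the rigid transformation parameters that maximize it.
    """
    f = fixed.astype(np.float64).ravel()
    m = moving.astype(np.float64).ravel()
    f -= f.mean()
    m -= m.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(m)
    return float(np.dot(f, m) / denom) if denom > 0 else 0.0
```

    Normalized mutual information would be computed analogously from the joint intensity histogram of the two images, and the optimizer would treat either metric as the fitness function to maximize over the transformation parameters.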

    A novel automated approach of multi-modality retinal image registration and fusion

    Biomedical image registration and fusion are usually scene-dependent and require intensive computational effort. A novel automated approach to feature-based control point detection and area-based registration and fusion of retinal images has been successfully designed and developed. The new algorithm, which is reliable and time-efficient, adapts automatically from frame to frame with few tunable threshold parameters. The reference and to-be-registered images are from two different modalities, i.e. angiogram grayscale images and fundus color images. The comparative study of retinal images enhances the information on the fundus image by superimposing information contained in the angiogram image. Through this thesis research, two new contributions have been made to the biomedical image registration and fusion area. The first contribution is the automatic detection of control points at global direction-change pixels using an adaptive exploratory algorithm. Shape similarity criteria are employed to match the control points. The second contribution is a heuristic optimization algorithm that maximizes the Mutual-Pixel-Count (MPC) objective function. The initially selected control points are adjusted during the optimization at the sub-pixel level. A result equivalent to the global maximum is achieved by calculating MPC local maxima at an efficient computational cost. The iteration stops either when MPC reaches its maximum value or when the maximum allowable loop count is reached. To our knowledge, this is the first time that the MPC concept has been introduced into the biomedical image fusion area as the measurement criterion for fusion accuracy. The fusion image is generated from the control point coordinates at the time the iteration stops. A comparative study of the presented automatic registration and fusion scheme against the Centerline Control Point Detection Algorithm, Genetic Algorithm, RMSE objective function, and other existing data fusion approaches has shown the advantage of the new approach in terms of accuracy, efficiency, and novelty.
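    The abstract does not define the Mutual-Pixel-Count objective precisely, so the sketch below only illustrates the described optimization pattern: MPC is assumed, for illustration only, to count pixels that are "on" in both the reference and the warped image (e.g. binarized vessel maps), and the control points are refined in sub-pixel steps until MPC stops improving or a loop limit is reached. Both mutual_pixel_count and evaluate_mpc are hypothetical names, not functions from the thesis.

```python
import numpy as np

def mutual_pixel_count(reference_mask, warped_mask):
    # Assumed reading of MPC: the number of pixels that are set in both the
    # reference and the warped binary image. The thesis may define MPC
    # differently; this is only an illustration.
    return int(np.logical_and(reference_mask, warped_mask).sum())

def refine_control_points(control_points, evaluate_mpc, step=0.25, max_loops=50):
    """Greedy sub-pixel refinement mirroring the stopping rule described above.

    control_points: list of (x, y) tuples.
    evaluate_mpc(points): hypothetical user-supplied function that warps the
        moving image with the given control points and returns its MPC
        (e.g. by calling mutual_pixel_count on the warped result).
    Iterates until MPC stops improving or the loop limit is reached.
    """
    best = evaluate_mpc(control_points)
    for _ in range(max_loops):
        improved = False
        for i in range(len(control_points)):
            for dx, dy in ((step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)):
                candidate = control_points.copy()
                candidate[i] = (candidate[i][0] + dx, candidate[i][1] + dy)
                score = evaluate_mpc(candidate)
                if score > best:
                    best, control_points, improved = score, candidate, True
        if not improved:          # MPC has reached a (local) maximum
            break
    return control_points, best
```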

    Registration of brain tumor images using hyper-elastic regularization

    In this paper, we present a method to estimate a deformation field between two instances of a brain volume containing a tumor. The novelties include assessing disease progression by observing the deformation of healthy tissue, and using the Neo-Hookean strain energy density model as a regularizer in a deformable registration framework. Experiments on synthetic and patient data provide promising results, which might have relevant use in clinical problems.
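    For context, one common compressible Neo-Hookean strain energy density, the kind of hyper-elastic regularizer the abstract refers to (the paper's exact formulation may differ), is:

```latex
% One common compressible Neo-Hookean strain energy density (illustrative;
% the paper's exact energy may differ). F is the deformation gradient,
% J = \det F, I_1 = \mathrm{tr}(F^{T}F), and \mu, \lambda are the Lame parameters.
W(F) = \frac{\mu}{2}\bigl(I_1 - 3\bigr) - \mu \ln J + \frac{\lambda}{2}\,(\ln J)^2
```

    Used as a regularization term on the estimated deformation field, such an energy penalizes physically implausible local volume changes and shape distortions of the healthy tissue.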

    Mitosis domain generalization in histopathology images -- The MIDOG challenge

    The density of mitotic figures within tumor tissue is known to be highly correlated with tumor proliferation and is thus an important marker in tumor grading. Recognition of mitotic figures by pathologists is known to be subject to a strong inter-rater bias, which limits its prognostic value. State-of-the-art deep learning methods can support the expert in this assessment but are known to deteriorate strongly when applied in a clinical environment different from the one used for training. One decisive component of the underlying domain shift has been identified as the variability caused by using different whole-slide scanners. The goal of the MICCAI MIDOG 2021 challenge was to propose and evaluate methods that counter this domain shift and derive scanner-agnostic mitosis detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As a test set, an additional 100 cases, split across four scanning systems including two previously unseen scanners, were given. The best approaches performed at an expert level, with the winning algorithm yielding an F_1 score of 0.748 (CI95: 0.704-0.781). In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance.
    Comment: 19 pages, 9 figures; summary paper of the 2021 MICCAI MIDOG challenge.
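    As an illustration of how a detection F_1 score with a 95% confidence interval of the kind reported above could be computed from per-case true-positive/false-positive/false-negative counts, here is a small sketch using a percentile bootstrap over cases; the challenge's own evaluation code is not reproduced here, and the function names are placeholders.

```python
import numpy as np

def f1_from_counts(tp, fp, fn):
    """Detection F1 from true-positive, false-positive and false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def bootstrap_f1_ci(per_case_counts, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval over test cases.

    per_case_counts: list of (tp, fp, fn) tuples, one per test case.
    Returns (overall_f1, ci_lower, ci_upper).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(per_case_counts)
    overall = f1_from_counts(*counts.sum(axis=0))
    scores = []
    for _ in range(n_boot):
        sample = counts[rng.integers(0, len(counts), len(counts))]  # resample cases
        scores.append(f1_from_counts(*sample.sum(axis=0)))
    lower, upper = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return overall, float(lower), float(upper)
```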