
    CNN-based Lung CT Registration with Multiple Anatomical Constraints

    Deep-learning-based registration methods have emerged as a fast alternative to conventional registration methods. However, these methods often still cannot match the performance of conventional methods, because they are either limited to small deformations or fail to handle a superposition of large and small deformations without producing implausible deformation fields with foldings. In this paper, we identify important strategies of conventional lung registration methods and develop their deep-learning counterparts. We employ a Gaussian-pyramid-based multilevel framework that solves the image registration optimization in a coarse-to-fine fashion. Furthermore, we prevent foldings of the deformation field and restrict the determinant of the Jacobian to physiologically meaningful values by combining a volume change penalty with a curvature regularizer in the loss function. Keypoint correspondences are integrated to focus on the alignment of smaller structures. We perform an extensive evaluation to assess the accuracy, robustness, and plausibility of the estimated deformation fields, as well as the transferability of our registration approach. We show that it achieves state-of-the-art results on the COPDGene dataset compared to conventional registration methods, with much shorter execution time. In our experiments on DIRLab exhale-to-inhale lung registration, we demonstrate substantial improvements (TRE below 1.2 mm) over other deep learning methods. Our algorithm is publicly available at https://grand-challenge.org/algorithms/deep-learning-based-ct-lung-registration/
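
    The abstract above prevents foldings by keeping the determinant of the Jacobian of the deformation near physiologically meaningful values. The following is a minimal NumPy sketch of such a penalty; the function names and the log-determinant form of the penalty are illustrative assumptions, not the paper's actual loss.

    ```python
    import numpy as np

    def jacobian_determinant(disp):
        """Per-voxel det J of phi(x) = x + u(x), for a displacement field
        disp of shape (3, D, H, W) on a unit-spacing grid."""
        # grads[i][j] = du_i / dx_j, each of shape (D, H, W)
        grads = [np.gradient(disp[i]) for i in range(3)]
        J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)  # (3, 3, D, H, W)
        J = J + np.eye(3)[:, :, None, None, None]  # d(phi)/dx = I + du/dx
        # Closed-form 3x3 determinant, evaluated per voxel
        det = (J[0, 0] * (J[1, 1] * J[2, 2] - J[1, 2] * J[2, 1])
             - J[0, 1] * (J[1, 0] * J[2, 2] - J[1, 2] * J[2, 0])
             + J[0, 2] * (J[1, 0] * J[2, 1] - J[1, 1] * J[2, 0]))
        return det

    def volume_change_penalty(disp):
        """Penalize |log det J|: zero for volume-preserving deformations,
        large for strong compression/expansion; foldings (det J <= 0)
        are clipped to a tiny positive value and hence penalized heavily."""
        det = jacobian_determinant(disp)
        return np.mean(np.abs(np.log(np.clip(det, 1e-6, None))))
    ```

    For a plausible deformation, det J stays close to 1 everywhere; det J ≤ 0 indicates a folding, which the clipping maps to a large penalty value.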

    Learn2Reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning

    Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking across competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for the comprehensive characterisation of deformable registration algorithms. Continuous evaluation is possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, and both intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art of medical image registration. This paper describes the datasets, tasks, evaluation methods, and results of the challenge, as well as further analyses of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects were identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.

    The ULS23 Challenge Public Training Dataset Part 5

    This dataset contains part of the imaging data for the Universal Lesion Segmentation Challenge (ULS23) (https://uls23.grand-challenge.org/). It contains lesion volumes-of-interest (VOIs) for part of the weakly annotated DeepLesion data. The annotations are made available through the challenge repository on GitHub (https://github.com/MJJdG/ULS23). The Universal Lesion Segmentation 2023 (ULS23) data is licensed under CC BY-NC-SA 4.0.

    The ULS23 Challenge Public Training Dataset Part 2

    This dataset contains part of the imaging data for the Universal Lesion Segmentation Challenge (ULS23) (https://uls23.grand-challenge.org/). It contains lesion volumes-of-interest (VOIs) for previously released data: 333 kidney lesions from the KiTS21 dataset, 2,246 lung lesions from LIDC-IDRI, and 888 liver lesions from the LiTS challenge. The annotations are made available through the challenge repository on GitHub (https://github.com/MJJdG/ULS23). The Universal Lesion Segmentation 2023 (ULS23) data is licensed under CC BY-NC-SA 4.0.

    The ULS23 Challenge Public Training Dataset Part 3

    This dataset contains part of the imaging data for the Universal Lesion Segmentation Challenge (ULS23) (https://uls23.grand-challenge.org/). It contains lesion volumes-of-interest (VOIs) for previously released data: 76 lung lesions from the MDSC_Task06 dataset, 283 pancreas lesions from MDSC_Task07, 133 colon lesions from MDSC_Task10, and 558 abdominal and 379 mediastinal lymph nodes from the NIH-LN dataset. It also contains the weakly annotated CCC18 data (1,211 lesions) and part of the DeepLesion dataset. The annotations are made available through the challenge repository on GitHub (https://github.com/MJJdG/ULS23). The Universal Lesion Segmentation 2023 (ULS23) data is licensed under CC BY-NC-SA 4.0.

    Lung250M-4B: A Combined 3D Dataset for CT- and Point Cloud-Based Intra-Patient Lung Registration

    Point cloud data from the Lung250M-4B dataset. Visit https://github.com/multimodallearning/Lung250M-4B for the image data and associated code.