
    A comparative evaluation of 3 different free-form deformable image registration and contour propagation methods for head and neck MRI: the case of parotid changes during radiotherapy

    Purpose: To validate and compare the deformable image registration and parotid contour propagation process for head and neck magnetic resonance imaging in patients treated with radiotherapy using 3 different approaches: the commercial MIM, the open-source Elastix software, and an optimized version of it. Materials and Methods: Twelve patients with head and neck cancer previously treated with radiotherapy were considered. Deformable image registration and parotid contour propagation were evaluated by considering the magnetic resonance images acquired before and after the end of the treatment. Deformable image registration, based on the free-form deformation method, and contour propagation available on MIM were compared to Elastix. Two different contour propagation approaches were implemented for the Elastix software: a conventional one (DIR_Trx) and an optimized homemade version based on mesh deformation (DIR_Mesh). The accuracy of these 3 approaches was estimated by comparing propagated to manual contours in terms of average symmetric distance, maximum symmetric distance, Dice similarity coefficient, sensitivity, and inclusiveness. Results: A good agreement was generally found between the manual contours and the propagated ones, without differences among the 3 methods; in a few critical cases with complex deformations, DIR_Mesh proved to be more accurate, having the lowest values of average symmetric distance and maximum symmetric distance and the highest value of Dice similarity coefficient, although the differences were not significant. The average propagation errors with respect to the reference contours are lower than the voxel diagonal (2 mm), and the Dice similarity coefficient is around 0.8 for all 3 methods. Conclusion: The 3 free-form deformation approaches were not significantly different in terms of deformable image registration accuracy and can be safely adopted for registration and parotid contour propagation during radiotherapy on magnetic resonance imaging. More optimized approaches (such as DIR_Mesh) could be preferable for critical deformations.
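    The evaluation metrics named above (Dice similarity coefficient, average and maximum symmetric distance) are standard and easy to reproduce. A minimal sketch, assuming binary masks for Dice and contour point sets given in millimetres for the distance measures (function names are illustrative, not from the paper):

    ```python
    import numpy as np

    def dice_coefficient(a, b):
        # a, b: boolean masks of the same shape
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def symmetric_distances(pts_a, pts_b):
        # pts_a, pts_b: (N, 3) and (M, 3) arrays of contour surface points (mm)
        d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
        a2b = d.min(axis=1)  # distance of each A point to its nearest B point
        b2a = d.min(axis=0)  # and vice versa
        asd = (a2b.sum() + b2a.sum()) / (len(a2b) + len(b2a))  # average symmetric distance
        msd = max(a2b.max(), b2a.max())                        # maximum symmetric distance
        return asd, msd
    ```

    With these definitions, a propagated contour matching the manual one gives Dice = 1 and zero distances, which is the sanity check typically run before comparing methods.
    
    
    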

    Efficient Active Learning for Image Classification and Segmentation using a Sample Selection and Conditional Generative Adversarial Network

    Training robust deep learning (DL) systems for medical image classification or segmentation is challenging due to limited images covering different disease types and severity. We propose an active learning (AL) framework to select the most informative samples and add them to the training data. We use conditional generative adversarial networks (cGANs) to generate realistic chest X-ray images with different disease characteristics by conditioning generation on a real image sample. Informative samples to add to the training set are identified using a Bayesian neural network. Experiments show our proposed AL framework is able to achieve state-of-the-art performance by using about 35% of the full dataset, thus saving significant time and effort over conventional methods.
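    The core of the sample-selection step is ranking unlabeled candidates by the uncertainty of a Bayesian network. A minimal sketch of one common realization, selecting by predictive entropy over Monte Carlo forward passes (the abstract does not specify the acquisition function, so this is an illustrative assumption):

    ```python
    import numpy as np

    def predictive_entropy(mc_probs):
        # mc_probs: (T, N, C) softmax outputs from T stochastic forward passes
        # (e.g. MC dropout) over N candidate samples with C classes
        mean_p = mc_probs.mean(axis=0)                          # (N, C)
        return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)   # (N,)

    def select_informative(mc_probs, k):
        # indices of the k candidates the model is most uncertain about
        ent = predictive_entropy(mc_probs)
        return np.argsort(ent)[::-1][:k]
    ```

    The selected indices would then be labeled (or matched with cGAN-generated variants) and appended to the training set before the next AL round.
    
    
    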

    Applications of a Biomechanical Patient Model for Adaptive Radiation Therapy

    Biomechanical patient modeling incorporates physical knowledge of the human anatomy into the image processing required for tracking anatomical deformations during adaptive radiation therapy, especially particle therapy. In contrast to standard image registration, this enforces bio-fidelic image transformation. In this thesis, the potential of a kinematic skeleton model and soft tissue motion propagation is investigated for crucial image analysis steps in adaptive radiation therapy. The first application is the integration of the kinematic model in a deformable image registration process (KinematicDIR). For monomodal CT scan pairs, the median target registration error, based on skeleton landmarks, is smaller than (1.6 ± 0.2) mm. In addition, this concept is shown to transfer successfully to otherwise challenging multimodal registration between CT and CBCT as well as CT and MRI scan pairs, resulting in median target registration errors on the order of 2 mm. This meets the accuracy requirement for adaptive radiation therapy and is especially interesting for MR-guided approaches. Another aspect, emerging in radiotherapy, is the utilization of deep-learning-based organ segmentation. As radiotherapy-specific labeled data is scarce, the training of such methods relies heavily on augmentation techniques. In this work, the generation of synthetically yet realistically deformed scans used as Bionic Augmentation in the training phase improved the predicted segmentations by up to 15% in the Dice similarity coefficient, depending on the training strategy. Finally, it is shown that the biomechanical model can be built up from automatic segmentations without deterioration of the KinematicDIR application. This is essential for use in a clinical workflow.
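    The target registration error (TRE) quoted above is computed from corresponding landmark pairs after applying the registration. A minimal sketch of that evaluation, assuming landmark coordinates in millimetres (the function name is illustrative):

    ```python
    import numpy as np

    def median_tre(fixed_landmarks, warped_landmarks):
        # fixed_landmarks:  (N, 3) landmark positions in the fixed image (mm)
        # warped_landmarks: (N, 3) corresponding moving-image landmarks after
        #                   applying the estimated deformation
        errors = np.linalg.norm(fixed_landmarks - warped_landmarks, axis=1)
        return np.median(errors)  # median TRE in mm
    ```

    Reporting the median (rather than the mean) makes the figure robust to a few poorly localized landmarks, which matters when landmarks are placed on skeleton structures across modalities.
    
    
    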

    Real-time multimodal image registration with partial intraoperative point-set data

    We present Free Point Transformer (FPT) - a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
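    An "intuitive unsupervised loss" for unordered point-sets of different sizes is typically a correspondence-free distance such as the Chamfer distance, which compares each point only to its nearest neighbour in the other set. A minimal sketch of that idea (an assumption for illustration; the abstract does not name the exact loss):

    ```python
    import numpy as np

    def chamfer_distance(a, b):
        # a: (N, 3), b: (M, 3) unordered point sets; no point correspondences needed
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
        return d.min(axis=1).mean() + d.min(axis=0).mean()  # symmetric nearest-neighbour term
    ```

    Because the loss needs no ground-truth deformation field, a network can be trained directly on transformed/target point-set pairs, which is what makes weak or unsupervised training of such registration models practical.
    
    
    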