3,597 research outputs found

    Automatic Segmentation of the Mandible for Three-Dimensional Virtual Surgical Planning

    Get PDF
    Three-dimensional (3D) medical imaging techniques play a fundamental role in the field of oral and maxillofacial surgery (OMFS). 3D images are used to guide diagnosis, assess the severity of disease, and support pre-operative planning, intra-operative guidance, and virtual surgical planning (VSP). In the field of oral cancer, where surgical resection requiring partial removal of the mandible is a common treatment, resection surgery is often based on 3D VSP to accurately design a resection plan around tumor margins. In orthognathic surgery and dental implant surgery, 3D VSP is also used extensively to precisely guide mandibular surgery. Image segmentation of radiographic images of the head and neck, the process of creating a 3D volume of the target tissue, is a useful tool to visualize the mandible and quantify geometric parameters. Studies have shown that 3D VSP requires accurate segmentation of the mandible, which is currently performed manually by medical technicians, a time-consuming and poorly reproducible process. This thesis presents four algorithms for mandible segmentation from CT and CBCT and contributes novel ideas for the development of automatic mandible segmentation for 3D VSP. We implement the segmentation approaches on head and neck CT/CBCT datasets and then evaluate their performance. Experimental results show that our proposed approaches for mandible segmentation in CT/CBCT datasets achieve high accuracy.
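As a point of reference for the segmentation task described in this abstract, bone in CT can be roughly isolated by simple Hounsfield-unit thresholding followed by connected-component selection. The sketch below is illustrative only, not one of the thesis's four algorithms; the threshold value and function name are assumptions:

```python
import numpy as np
from scipy import ndimage

def bone_threshold_segmentation(ct_hu, threshold=300):
    """Crude bone segmentation of a CT volume given in Hounsfield units.

    Thresholds the volume and keeps the largest connected component as a
    rough candidate mask (a real mandible pipeline would need far more).
    """
    mask = ct_hu > threshold
    labels, n_components = ndimage.label(mask)
    if n_components == 0:
        return mask  # nothing above threshold
    # Voxel count of each labeled component (labels run from 1 to n).
    sizes = ndimage.sum(mask, labels, range(1, n_components + 1))
    return labels == (np.argmax(sizes) + 1)
```

Methods like those evaluated in the thesis replace this heuristic with learned models, but a threshold baseline is a common sanity check when setting up a CT segmentation experiment.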

    An Investigation of Methods for CT Synthesis in MR-only Radiotherapy

    Get PDF

    A comparative evaluation of 3 different free-form deformable image registration and contour propagation methods for head and neck MRI : the case of parotid changes radiotherapy

    Get PDF
    Purpose: To validate and compare the deformable image registration and parotid contour propagation process for head and neck magnetic resonance imaging in patients treated with radiotherapy using 3 different approaches: the commercial MIM, the open-source Elastix software, and an optimized version of it. Materials and Methods: Twelve patients with head and neck cancer previously treated with radiotherapy were considered. Deformable image registration and parotid contour propagation were evaluated by considering the magnetic resonance images acquired before and after the end of the treatment. Deformable image registration, based on the free-form deformation method, and contour propagation available on MIM were compared to Elastix. Two different contour propagation approaches were implemented for the Elastix software, a conventional one (DIR_Trx) and an optimized homemade version based on mesh deformation (DIR_Mesh). The accuracy of these 3 approaches was estimated by comparing propagated to manual contours in terms of average symmetric distance, maximum symmetric distance, Dice similarity coefficient, sensitivity, and inclusiveness. Results: A good agreement was generally found between the manual contours and the propagated ones, without differences among the 3 methods; in a few critical cases with complex deformations, DIR_Mesh proved to be more accurate, having the lowest values of average symmetric distance and maximum symmetric distance and the highest value of Dice similarity coefficient, although the differences were nonsignificant. The average propagation errors with respect to the reference contours are lower than the voxel diagonal (2 mm), and the Dice similarity coefficient is around 0.8 for all 3 methods. Conclusion: The 3 free-form deformation approaches were not significantly different in terms of deformable image registration accuracy and can be safely adopted for the registration and parotid contour propagation during radiotherapy on magnetic resonance imaging. More optimized approaches (such as DIR_Mesh) could be preferable for critical deformations.
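The overlap and surface-distance metrics named in this abstract (Dice similarity coefficient, average symmetric distance) can be computed directly from binary contour masks. A minimal NumPy/SciPy sketch follows; the function names are illustrative and this is not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def average_symmetric_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary masks.

    `spacing` is the voxel size (e.g., in mm) along each axis.
    """
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: the mask minus its binary erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    distances = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return distances.mean()
```

Identical masks give a Dice of 1.0 and an average symmetric distance of 0; the 2 mm voxel-diagonal criterion quoted in the abstract is a threshold applied to the latter metric.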

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Get PDF
    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information coupled to artificial intelligence (AI) approaches could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging which can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. 
We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. First, we quantified interobserver variability for an unprecedentedly large number of observers across various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy relative to clinical experts, and that certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance. Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (ranking 1st place), and then translated these approaches to mpMRI data. We demonstrated that AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed.
In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    Full text link
    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organs and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.

    Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond

    Get PDF
    In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify AI applications in RT over the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow addressed by the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a task that is not manageable for individuals or small groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns have been raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.

    A generative adversarial network approach to synthetic-CT creation for MRI-based radiation therapy

    Get PDF
    Integrated master's thesis, Biomedical Engineering and Biophysics (Radiation in Diagnosis and Therapy), Universidade de Lisboa, Faculdade de Ciências, 2019. This project presents the application of a generative adversarial network (GAN) to the creation of synthetic computed tomography (sCT) scans from volumetric T1-weighted magnetic resonance imaging (MRI), for dose calculation in MRI-based radiotherapy workflows. A 3-dimensional GAN for MRI-to-CT synthesis was developed based on a 2-dimensional architecture for image-content transfer. Co-registered CT and T1-weighted MRI scans of the head region were used for training. Tuning of the network was performed with a 7-fold cross-validation method on 42 patients. A second dataset of 12 patients was used as the hold-out dataset for final validation. The performance of the GAN was assessed with image quality metrics, and dosimetric evaluation was performed for 33 patients by comparing dose distributions calculated on true and synthetic CT, for photon and proton therapy plans. sCT generation time was under 30 s per patient. The mean absolute error (MAE) between sCT and CT on the cross-validation dataset was 69 ± 10 HU, corresponding to a 20% decrease in error compared to training on the original 2D GAN. Quality metric results did not differ statistically for the hold-out dataset (p = 0.09). Higher errors were observed for air and bone voxels, and registration errors between CT and MRI decreased the performance of the algorithm. Dose deviations at the target were within 2% for the photon beams; for the proton plans, 21 patients showed dose deviations under 2%, while 12 had deviations between 2% and 8%. Pass rates (2%/2 mm) between dose distributions were higher than 98% and 94% for photon and proton plans, respectively. The results compare favorably with published algorithms, and the method shows potential for MRI-guided clinical workflows. Special attention should be given when beams cross small structures and airways, and further adjustments to the algorithm should be made to increase performance in these regions.
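The MAE figure reported in this abstract is the mean absolute Hounsfield-unit difference between the synthetic and true CT, typically restricted to a region of interest such as the patient body contour. A minimal sketch, with an assumed function name and masking convention:

```python
import numpy as np

def mae_hu(ct, sct, mask=None):
    """Mean absolute error in HU between a real CT and a synthetic CT.

    If `mask` (a boolean array of the same shape) is given, the error is
    averaged only over the masked voxels, e.g. within the body contour.
    """
    ct = np.asarray(ct, dtype=float)
    sct = np.asarray(sct, dtype=float)
    diff = np.abs(ct - sct)
    return diff[mask].mean() if mask is not None else diff.mean()
```

Restricting the average to a body mask matters in practice: including the air surrounding the patient, where both images are close to -1000 HU, would artificially deflate the reported error.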