3,599 research outputs found

    Deep Boosted Regression for MR to CT Synthesis

    Attenuation correction is an essential requirement of positron emission tomography (PET) image reconstruction to allow for accurate quantification. However, attenuation correction is particularly challenging for PET-MRI, as neither PET nor magnetic resonance imaging (MRI) can directly image tissue attenuation properties. MRI-based computed tomography (CT) synthesis has been proposed as an alternative to physics-based and segmentation-based approaches that assign a population-based tissue density value in order to generate an attenuation map. We propose a novel deep fully convolutional neural network that generates synthetic CTs in a recursive manner by gradually reducing the residuals of the previous network, increasing the overall accuracy and generalisability while keeping the number of trainable parameters within reasonable limits. The model is trained on a database of 20 pre-acquired MRI/CT pairs, and a four-fold random bootstrapped validation with an 80:20 split is performed. Quantitative results show that the proposed framework outperforms a state-of-the-art atlas-based approach, decreasing the Mean Absolute Error (MAE) from 131 HU to 68 HU for the synthetic CTs and reducing the PET reconstruction error from 14.3% to 7.2%. Comment: Accepted at SASHIMI201
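The boosting idea in the abstract, each stage fitting the residual left by the stages before it, can be sketched with a toy numpy example. The weak learners here are ridge regressors on random tanh features standing in for the CNN stages; data, feature counts, and regularisation are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fit_stage(x, target, n_features=32, reg=1e-2, seed=0):
    """One weak regressor: ridge regression on random tanh features,
    a stand-in for one network stage in the boosted cascade."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(x.shape[1], n_features))
    phi = np.tanh(x @ w)
    # Closed-form ridge solution for this stage's output weights.
    beta = np.linalg.solve(phi.T @ phi + reg * np.eye(n_features), phi.T @ target)
    return w, beta

def predict_stage(x, stage):
    w, beta = stage
    return np.tanh(x @ w) @ beta

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 4))            # toy "MR" feature vectors
y = np.sin(x[:, 0]) + 0.5 * x[:, 1]      # toy "CT" target values

stages, pred, errors = [], np.zeros_like(y), []
for k in range(3):
    stage = fit_stage(x, y - pred, seed=k)   # fit what is still unexplained
    stages.append(stage)
    pred = pred + predict_stage(x, stage)
    errors.append(float(np.mean((y - pred) ** 2)))
```

Each stage only has to model the error of its predecessors, which is what keeps the per-stage capacity, and hence the parameter count, small.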

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present 3 illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; Update includes additional applications, updated author list and formatting for journal submission
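The modularity the abstract describes, swappable components for loading, augmentation and losses, can be sketched generically. This is not the actual NiftyNet API; the registry names and functions below are hypothetical stand-ins for the kind of pluggable pipeline the platform provides.

```python
import numpy as np

# Hypothetical registry-style pipeline (NOT NiftyNet's real API): components
# are looked up by name, so applications swap parts without new plumbing.
COMPONENTS = {
    "augment/flip": lambda vol: vol[:, ::-1],
    "augment/noise": lambda vol: vol + np.random.default_rng(0).normal(0, 0.01, vol.shape),
    "loss/l1": lambda pred, ref: float(np.mean(np.abs(pred - ref))),
    "loss/l2": lambda pred, ref: float(np.mean((pred - ref) ** 2)),
}

def run_pipeline(volume, reference, augmentations, loss):
    """Apply the named augmentations in order, then score with the named loss."""
    for name in augmentations:
        volume = COMPONENTS[name](volume)
    return COMPONENTS[loss](volume, reference)

vol = np.ones((4, 4))
ref = np.ones((4, 4))
score = run_pipeline(vol, ref, ["augment/flip"], "loss/l1")
```

Registering a new loss or augmentation is a one-line dictionary entry, which is the kind of extensibility that lets one codebase serve segmentation, regression and generation tasks alike.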

    Performance of deep learning synthetic CTs for MR-only brain radiation therapy

    PURPOSE: To evaluate the dosimetric and image-guided radiation therapy (IGRT) performance of a novel generative adversarial network (GAN) generated synthetic CT (synCT) in the brain and compare its performance for clinical use including conventional brain radiotherapy, cranial stereotactic radiosurgery (SRS), planar, and volumetric IGRT. METHODS AND MATERIALS: SynCT images for 12 brain cancer patients (6 SRS, 6 conventional) were generated from T1-weighted post-gadolinium magnetic resonance (MR) images by applying a GAN model with a residual network (ResNet) generator and a convolutional neural network (CNN) with 5 convolutional layers as the discriminator that classified input images as real or synthetic. Following rigid registration, clinical structures and treatment plans derived from simulation CT (simCT) images were transferred to synCTs. Dose was recalculated for 15 simCT/synCT plan pairs using fixed monitor units. Two-dimensional (2D) gamma analysis (2%/2 mm, 1%/1 mm) was performed to compare dose distributions at isocenter. Dose-volume histogram (DVH) metrics (D(95%), D(99%), D(0.2cc), and D(0.035cc)) were assessed for the targets and organs at risk (OARs). IGRT performance was evaluated via volumetric registration between cone beam CT (CBCT) and synCT/simCT and planar registration between kV images and synCT/simCT digitally reconstructed radiographs (DRRs). RESULTS: Average gamma passing rates at 1%/1 mm and 2%/2 mm were 99.0 ± 1.5% and 99.9 ± 0.2%, respectively. Excellent agreement in DVH metrics was observed (mean difference ≤0.10 ± 0.04 Gy for targets, 0.13 ± 0.04 Gy for OARs). The population-averaged mean difference in CBCT-synCT registrations was <0.2 mm and 0.1 degree different from simCT-based registrations. The mean difference between kV-synCT DRR and kV-simCT DRR registrations was <0.5 mm with no statistically significant differences observed (P > 0.05). An outlier with a large resection cavity exhibited the worst-case scenario. 
CONCLUSION: Brain GAN synCTs demonstrated excellent performance for dosimetric and IGRT endpoints, offering potential use in high precision brain cancer therapy.
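The 2D gamma analysis used above can be sketched as a brute-force numpy computation: for each reference pixel, search nearby evaluated pixels for the minimum combined dose-difference/distance discrepancy, and count the fraction with gamma ≤ 1. The global dose normalisation and 10% low-dose cutoff are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dose_pct=2.0, dist_mm=2.0, cutoff_pct=10.0):
    """Brute-force 2D global gamma passing rate between a reference and an
    evaluated dose plane (criteria like 2%/2 mm; pixels below the dose
    cutoff are excluded from scoring)."""
    dose_tol = dose_pct / 100.0 * ref.max()
    r = int(np.ceil(dist_mm / spacing_mm))          # search radius in pixels
    ny, nx = ref.shape
    passed, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff_pct / 100.0 * ref.max():
                continue                             # skip low-dose region
            best = np.inf
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dd = (ev[ii, jj] - ref[i, j]) / dose_tol
                    dx = spacing_mm * np.hypot(di, dj) / dist_mm
                    best = min(best, dd * dd + dx * dx)
            passed += best <= 1.0
            total += 1
    return 100.0 * passed / total if total else 100.0

ref = np.outer(np.hanning(20), np.hanning(20))       # toy dose plane
```

An identical pair trivially scores 100%; real implementations interpolate the evaluated distribution rather than sampling it on the pixel grid, which makes the metric less pessimistic near steep gradients.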

    Multiparametric Magnetic Resonance Imaging Artificial Intelligence Pipeline for Oropharyngeal Cancer Radiotherapy Treatment Guidance

    Oropharyngeal cancer (OPC) is a widespread disease and one of the few domestic cancers that is rising in incidence. Radiographic images are crucial for assessment of OPC and aid in radiotherapy (RT) treatment. However, RT planning with conventional imaging approaches requires operator-dependent tumor segmentation, which is the primary source of treatment error. Further, OPC expresses differential tumor/node mid-RT response (rapid response) rates, resulting in significant differences between planned and delivered RT dose. Finally, clinical outcomes for OPC patients can also be variable, which warrants the investigation of prognostic models. Multiparametric MRI (mpMRI) techniques that incorporate simultaneous anatomical and functional information coupled to artificial intelligence (AI) approaches could improve clinical decision support for OPC by providing immediately actionable clinical rationale for adaptive RT planning. If tumors could be reproducibly segmented, rapid response could be classified, and prognosis could be reliably determined, overall patient outcomes would be optimized to improve the therapeutic index as a function of more risk-adapted RT volumes. Consequently, there is an unmet need for automated and reproducible imaging which can simultaneously segment tumors and provide predictive value for actionable RT adaptation. This dissertation primarily seeks to explore and optimize image processing, tumor segmentation, and patient outcomes in OPC through a combination of advanced imaging techniques and AI algorithms. In the first specific aim of this dissertation, we develop and evaluate mpMRI pre-processing techniques for use in downstream segmentation, response prediction, and outcome prediction pipelines. Various MRI intensity standardization and registration approaches were systematically compared and benchmarked. Moreover, synthetic image algorithms were developed to decrease MRI scan time in an effort to optimize our AI pipelines. 
We demonstrated that proper intensity standardization and image registration can improve mpMRI quality for use in AI algorithms, and developed a novel method to decrease mpMRI acquisition time. Subsequently, in the second specific aim of this dissertation, we investigated underlying questions regarding the implementation of RT-related auto-segmentation. Firstly, we quantified interobserver variability for an unprecedentedly large number of observers for various radiotherapy structures in several disease sites (with a particular emphasis on OPC) using a novel crowdsourcing platform. We then trained an AI algorithm on a series of extant matched mpMRI datasets to segment OPC primary tumors. Moreover, we validated and compared our best model's performance to clinical expert observers. We demonstrated that AI-based mpMRI OPC tumor auto-segmentation offers decreased variability and comparable accuracy to clinical experts, and certain mpMRI input channel combinations could further improve performance. Finally, in the third specific aim of this dissertation, we predicted OPC primary tumor mid-therapy (rapid) treatment response and prognostic outcomes. Using co-registered pre-therapy and mid-therapy primary tumor manual segmentations of OPC patients, we generated and characterized treatment-sensitive and treatment-resistant pre-RT sub-volumes. These sub-volumes were used to train an AI algorithm to predict individual voxel-wise treatment resistance. Additionally, we developed an AI algorithm to predict OPC patient progression-free survival using pre-therapy imaging from an international data science competition (ranking 1st place), and then translated these approaches to mpMRI data. We demonstrated AI models could be used to predict rapid response and prognostic outcomes using pre-therapy imaging, which could help guide treatment adaptation, though further work is needed. 
    In summary, the completion of these aims facilitates the development of an image-guided, fully automated OPC clinical decision support tool. The resultant deliverables from this project will positively impact patients by enabling optimized therapeutic interventions in OPC. Future work should consider investigating additional imaging timepoints, imaging modalities, uncertainty quantification, perceptual and ethical considerations, and prospective studies for eventual clinical implementation. A dynamic version of this dissertation is publicly available and assigned a digital object identifier through Figshare (doi: 10.6084/m9.figshare.22141871).
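The first aim's intensity standardization step can be illustrated with a minimal sketch. Z-score normalization within a foreground mask is one of the common approaches such pipelines benchmark; the mask convention and toy data below are assumptions, not the dissertation's specific method.

```python
import numpy as np

def zscore_standardize(volume, mask=None):
    """Z-score intensity standardization for MRI: statistics are computed
    inside a foreground mask so background air does not skew the mean and
    spread (MRI intensities have no absolute physical scale)."""
    mask = np.ones(volume.shape, bool) if mask is None else mask
    fg = volume[mask]
    return (volume - fg.mean()) / fg.std()

rng = np.random.default_rng(0)
vol = rng.normal(loc=300.0, scale=40.0, size=(8, 8, 8))   # arbitrary scanner units
std = zscore_standardize(vol)
```

After standardization, intensities from different scanners and sessions live on a comparable scale, which is what makes pooled training of downstream segmentation and outcome models feasible.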

    Region of Interest focused MRI to Synthetic CT Translation using Regression and Classification Multi-task Network

    In this work, we present a method for synthetic CT (sCT) generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracy of the image, with a particular focus on accurate bone density value prediction. We propose a loss function that favors a spatially sparse region in the image. We harness the ability of a multi-task network to produce correlated outputs as a framework to enable localisation of the region of interest (RoI) via classification, emphasize regression of values within the RoI, and still retain overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task. We demonstrate how the multi-task network with the RoI-focused loss offers an advantage over other configurations of the network, achieving higher accuracy. This is relevant to sCT, where failure to accurately estimate high Hounsfield Unit values of bone could lead to impaired accuracy in clinical applications. We compare the dose calculation maps from the proposed sCT and the real CT in a radiation therapy treatment planning setup.
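The composite loss described above, global regression everywhere, extra regression weight inside the sparse RoI, and a classification term that localises it, can be sketched in numpy. The task weights and thresholds below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def composite_loss(pred_ct, true_ct, roi_logit, roi_mask,
                   w_global=1.0, w_roi=2.0, w_cls=1.0):
    """Composite multi-task loss sketch: MAE over the whole image, MAE
    restricted to the RoI (e.g. bone), and binary cross-entropy for the
    RoI classification head. Weights are illustrative."""
    l_global = np.mean(np.abs(pred_ct - true_ct))
    l_roi = np.abs(pred_ct - true_ct)[roi_mask].mean() if roi_mask.any() else 0.0
    p = 1.0 / (1.0 + np.exp(-roi_logit))             # sigmoid of the class logits
    eps = 1e-7
    l_cls = -np.mean(roi_mask * np.log(p + eps) + (~roi_mask) * np.log(1 - p + eps))
    return w_global * l_global + w_roi * l_roi + w_cls * l_cls

ct = np.zeros((16, 16)); ct[6:10, 6:10] = 1500.0      # toy bone patch in HU
mask = ct > 300.0                                     # illustrative bone threshold
logit = np.where(mask, 20.0, -20.0)                   # confident RoI classifier
```

Because the RoI term re-weights the few high-HU bone voxels that the global MAE would otherwise average away, the optimiser cannot trade bone accuracy for soft-tissue accuracy.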

    Feasibility Study of MRI-based Synthetic CT Generation for PET/MRI and MR-IGRT

    Doctoral dissertation, Seoul National University Graduate School, Department of Medical Science, College of Medicine, August 2020 (advisor: Jae Sung Lee). Over the past decade, the application of magnetic resonance imaging (MRI) in diagnosis and treatment has increased. MRI provides higher soft-tissue contrast, especially in the brain, abdominal organs, and bone marrow, without exposure to ionizing radiation. Hence, simultaneous positron emission tomography/MR (PET/MR) systems and MR-image-guided radiation therapy (MR-IGRT) systems have recently emerged and are currently available for clinical study. One major issue in the PET/MR system is attenuation correction from MRI scans for PET quantification, and a similar need arises in the MR-IGRT system, where electron densities must be assigned to MRI scans for dose calculation; this is because MR signals are related to the proton density and relaxation properties of tissue, not to electron density. To overcome this problem, a method called synthetic CT (sCT), a pseudo-CT derived from MR images, has been proposed. In this thesis, studies on generating synthetic CT and investigating the feasibility of using MR-based synthetic CT for diagnostic and radiotherapy applications are presented. Firstly, an MR image-based attenuation correction (MR-AC) method using level-set segmentation for brain PET/MRI was developed. To resolve the inaccuracy of conventional MR-AC, we proposed an improved ultrashort-echo-time MR-AC method based on a multiphase level-set algorithm with main magnetic field inhomogeneity correction. We also assessed the feasibility of the level-set-based MR-AC method, compared with CT-AC and the MR-AC provided by the manufacturer of the PET/MRI scanner. Secondly, we proposed sCT generation from low-field MR images using a 2D convolutional neural network model for the MR-IGRT system. These sCT images were compared to the deformed CT generated using the deformable registration employed in the current system. 
We assessed the feasibility of using sCT for radiation treatment planning in patients with pelvic, thoracic, and abdominal disease through geometric and dosimetric evaluation. Contents: Chapter 1, Introduction (the integration of MRI into other medical devices; challenges in the MRI-integrated system; synthetic CT generation; purpose of research); Chapter 2, MRI-based attenuation correction for PET/MRI (brain PET dataset; MR-based attenuation map using a level-set algorithm; image processing and reconstruction; results; discussion); Chapter 3, MRI-based synthetic CT generation for MR-IGRT (MR-dCT paired dataset; synthetic CT generation using a 2D CNN; data analysis; image comparison; geometric and dosimetric analysis; discussion); Chapter 4, Conclusions; Bibliography; Abstract in Korean.

    A generative adversarial network approach to synthetic-CT creation for MRI-based radiation therapy

    Integrated master's thesis, Biomedical Engineering and Biophysics (Radiation in Diagnosis and Therapy), Universidade de Lisboa, Faculdade de Ciências, 2019. This project presents the application of a generative adversarial network (GAN) to the creation of synthetic computed tomography (sCT) scans from volumetric T1-weighted magnetic resonance imaging (MRI), for dose calculation in MRI-based radiotherapy workflows. A 3-dimensional GAN for MRI-to-CT synthesis was developed based on a 2-dimensional architecture for image-content transfer. Co-registered CT and T1-weighted MRI scans of the head region were used for training. Tuning of the network was performed with a 7-fold cross-validation method on 42 patients. A second data set of 12 patients was used as the hold-out data set for final validation. The performance of the GAN was assessed with image quality metrics, and dosimetric evaluation was performed for 33 patients by comparing dose distributions calculated on true and synthetic CT, for photon and proton therapy plans. sCT generation time was <30 s per patient. The mean absolute error (MAE) between sCT and CT on the cross-validation data set was 69 ± 10 HU, corresponding to a 20% decrease in error when compared to training on the original 2D GAN. Quality metric results did not differ statistically for the hold-out data set (p = 0.09). Higher errors were observed for air and bone voxels, and registration errors between CT and MRI decreased the performance of the algorithm. Dose deviations at the target were within 2% for the photon beams; for the proton plans, 21 patients showed dose deviations under 2%, while 12 had deviations between 2% and 8%. Pass rates (2%/2 mm) between dose distributions were higher than 98% and 94% for photon and proton plans, respectively. The results compare favorably with published algorithms, and the method shows potential for MRI-guided clinical workflows. 
Special attention should be given when beams cross small structures and airways, and further adjustments to the algorithm should be made to increase performance in these regions.
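The per-tissue error pattern reported above, larger MAE at air and bone than in soft tissue, is typically quantified by masking the reference CT with HU thresholds. A minimal sketch, with illustrative thresholds and toy numbers rather than the thesis's data:

```python
import numpy as np

def mae_by_tissue(sct, ct, air_hu=-400.0, bone_hu=300.0):
    """MAE between synthetic and reference CT, overall and within crude
    HU-threshold tissue classes (thresholds are illustrative)."""
    err = np.abs(sct - ct)
    out = {"overall": err.mean()}
    for name, mask in (("air", ct < air_hu),
                       ("soft", (ct >= air_hu) & (ct <= bone_hu)),
                       ("bone", ct > bone_hu)):
        out[name] = err[mask].mean() if mask.any() else float("nan")
    return out

ct = np.array([-1000.0, -1000.0, 40.0, 60.0, 900.0, 1200.0])   # toy HU values
sct = ct + np.array([150.0, -120.0, 10.0, -15.0, 200.0, -180.0])
res = mae_by_tissue(sct, ct)
```

Reporting the breakdown alongside the overall MAE makes clear whether a low average is hiding exactly the bone and airway errors that matter most for dose calculation.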