Machine Learning Models to automate Radiotherapy Structure Name Standardization
Structure name standardization is a critical problem in radiotherapy planning systems to correctly identify the various Organs-at-Risk, Planning Target Volumes, and 'Other' organs for monitoring present and future medications. Physicians often label anatomical structure sets in Digital Imaging and Communications in Medicine (DICOM) images with nonstandard random names. Hence, the standardization of these names for the Organs at Risk (OARs), Planning Target Volumes (PTVs), and 'Other' organs is a vital problem. Prior works considered traditional machine learning approaches on structure sets with moderate success. We compare both traditional methods and deep neural network-based approaches on multimodal vision-language prostate cancer patient data, compiled from the radiotherapy centers of the US Veterans Health Administration (VHA) and Virginia Commonwealth University (VCU), for structure name standardization. These de-identified data comprise 16,290 prostate structures. Our method integrates the multimodal textual and imaging data with deep learning approaches such as the Convolutional Neural Network (CNN), Visual Geometry Group (VGG) network, and Residual Network (ResNet), and shows improved results in prostate radiotherapy structure name standardization. Our proposed deep neural network-based approach on this multimodal data provides state-of-the-art results for structure name standardization. Evaluation with the macro-averaged F1 score shows that our CNN model with single-modal textual data usually performs better than previous studies. We also experimented with various combinations of multimodal data (masked images, masked dose) besides textual data. The models perform well on textual data alone, while the addition of imaging data shows that deep neural networks achieve better performance using information present in other modalities.
Our pipeline can successfully standardize the Organs-at-Risk and the Planning Target Volumes, which are of utmost interest to clinicians, and simultaneously performs very well on the 'Other' organs. We performed comprehensive experiments varying input data modalities to show that using masked images and masked dose data with text outperforms other combinations of input modalities. We also undersampled the majority class, i.e., the 'Other' class, at different degrees and conducted extensive experiments to demonstrate that a small amount of majority class undersampling is essential for superior performance. Overall, our proposed integrated, deep neural network-based architecture for prostate structure name standardization can solve several challenges associated with multimodal data. The VGG network on the masked image-dose data combined with CNNs on the text data performs best and represents the state of the art in this domain.
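The macro-averaged F1 score used for evaluation weights every structure class equally, which matters here because the 'Other' class dominates the dataset. A minimal sketch of the metric follows; the class names and label vectors are illustrative, not taken from the paper:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 per class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical labels for the three groups the pipeline predicts
truth = ["OAR", "OAR", "PTV", "Other", "Other", "Other"]
pred  = ["OAR", "PTV", "PTV", "Other", "Other", "OAR"]
print(round(macro_f1(truth, pred), 3))  # -> 0.656
```

Because each class contributes equally to the mean, a model that scores well only on the majority 'Other' class is penalized, which is why this metric pairs naturally with the undersampling experiments described above.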
Artificial General Intelligence for Radiation Oncology
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts, and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data, to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology, including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement, not replace, human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
EQUIVOX: an example of adaptation using an artificial neural network on a case-based reasoning platform
In the case of a radiological emergency involving accidental human exposure, a dosimetry evaluation must be established as soon as possible. In most cases, this evaluation is based on numerical representations and models of victims. Unfortunately, personalised and realistic human representations are often unavailable for the exposed subjects, yet the accuracy of treatment depends on the similarity of the phantom to the victim. The EquiVox platform (Research of Equivalent Voxel phantom) developed in this study uses Case-Based Reasoning (CBR) principles to retrieve and adapt, from among a set of existing phantoms, the one best suited to represent the victim. This paper introduces the EquiVox platform and the Artificial Neural Network (ANN) developed to interpolate the victim's 3D lung contours. The results obtained for the choice and construction of the contours are presented and discussed.
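The retrieval step of such a CBR system can be pictured as a nearest-neighbour search over a library of stored phantoms. The sketch below is a simplification under assumed features (height and weight); EquiVox's actual similarity measure and phantom library are not described here:

```python
import math

# Hypothetical phantom library: (name, height_cm, weight_kg)
phantoms = [
    ("adult_male_ref", 176.0, 73.0),
    ("adult_female_ref", 163.0, 60.0),
    ("adolescent_ref", 150.0, 45.0),
]

def retrieve(victim, library):
    """Return the stored phantom whose (height, weight) is closest to the victim's."""
    h, w = victim
    return min(library, key=lambda p: math.hypot(p[1] - h, p[2] - w))

best = retrieve((170.0, 68.0), phantoms)
print(best[0])  # -> adult_male_ref
```

The retrieved case is then adapted to the victim, which is the step the paper's ANN performs for the 3D lung contours.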
Incorporating Deep Learning Techniques into Outcome Modeling in Non-Small Cell Lung Cancer Patients after Radiation Therapy
Radiation therapy (radiotherapy), together with surgery, chemotherapy, and immunotherapy, is a common modality in cancer treatment. In radiotherapy, patients are given high doses of ionizing radiation aimed at killing cancer cells and shrinking tumors. Conventional radiotherapy usually gives a standard prescription to all patients; however, as patients are likely to have heterogeneous responses to the treatment due to multiple prognostic factors, personalization of radiotherapy treatment is desirable. Outcome models can serve as clinical decision-making support tools in personalized treatment, helping evaluate patients' treatment options before or during fractionated treatment. They can further provide insights into the design of new clinical protocols. In outcome modeling, two indices are usually investigated: tumor control probability (TCP) and normal tissue complication probability (NTCP).
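For context, a widely used analytical NTCP model of the kind the DL approaches below are compared against is the Lyman-Kutcher-Burman (LKB) model, which passes a normalized dose deviation through the standard normal CDF. The parameter values in this sketch are illustrative only, not fitted values from this work:

```python
import math

def lkb_ntcp(eud, td50, m):
    """LKB model: NTCP = Phi((EUD - TD50) / (m * TD50)),
    where Phi is the standard normal CDF, EUD is the equivalent uniform dose,
    TD50 is the dose giving 50% complication probability, and m sets the slope."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Illustrative parameters (assumed, not from the thesis): TD50 = 30 Gy, m = 0.35
td50, m = 30.0, 0.35
print(lkb_ntcp(30.0, td50, m))  # EUD at TD50 -> 0.5 by construction
print(lkb_ntcp(40.0, td50, m))  # higher dose -> higher complication probability
```

Such closed-form models are compact but, as noted below, cannot capture complex interactions between physical and biological variables, which motivates the deep learning alternatives.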
Current outcome models, e.g., analytical models and data-driven models, either fail to take into account complex interactions between physical and biological variables or require complicated feature selection procedures. Therefore, in our studies, deep learning (DL) techniques are incorporated into outcome modeling for prediction of local control (LC), which is TCP in our case, and radiation pneumonitis (RP), which is NTCP in our case, in non-small-cell lung cancer (NSCLC) patients after radiotherapy. These techniques can improve the prediction performance of outcomes and simplify model development procedures. Additionally, longitudinal data association, actuarial prediction, and multi-endpoints prediction are considered in our models. These were carried out in 3 consecutive studies.
In the first study, a composite architecture consisting of a variational auto-encoder (VAE) and a multi-layer perceptron (MLP) was investigated and applied to RP prediction. The architecture enabled simultaneous dimensionality reduction and prediction. The novel VAE-MLP joint architecture, with an area under the receiver operating characteristic (ROC) curve (AUC) [95% CI] of 0.781 [0.737-0.808], outperformed a strategy involving separate VAEs and classifiers (AUC 0.624 [0.577-0.658]).
In the second study, composite architectures consisting of a 1D convolutional layer or locally-connected layer together with an MLP, which take longitudinal associations into account, were applied to predict LC. The composite convolutional neural network (CNN)-MLP architecture, which can model both longitudinal and non-longitudinal data, yielded an AUC of 0.832 [0.807-0.841], while a plain MLP yielded only an AUC of 0.785 [0.752-0.792] in LC prediction.
In the third study, rather than binary classification, time-to-event information was also incorporated for actuarial prediction. Three DL architectures were investigated: ADNN-DVH, which considers dosimetric information; ADNN-com, which further combines biological and imaging data; and ADNN-com-joint, which realizes multi-endpoint prediction. Analytical models were also developed for comparison purposes. Among all the models, ADNN-com-joint performed best, yielding c-indexes of 0.705 [0.676-0.734] for RP2 and 0.740 [0.714-0.765] for LC, and an AU-FROC of 0.720 [0.671-0.801] for joint prediction. The performance of the proposed models was also tested on a cohort of newly-treated patients and on the multi-institutional RTOG 0617 datasets.
These studies taken together indicate that DL techniques can be utilized to improve the performance of outcome models and potentially provide guidance to physicians during decision making. Specifically, a VAE-MLP joint architecture can realize simultaneous dimensionality reduction and prediction, boosting the performance of conventional outcome models. A 1D CNN-MLP joint architecture can utilize temporally-associated variables generated over the span of radiotherapy. A DL model such as ADNN-com-joint can realize multi-endpoint prediction, which allows considering competing risk factors. All of these contribute a step toward enabling outcome models as real clinical decision support tools.
PhD thesis, Applied Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162923/1/sunan_1.pd
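The c-index reported for the actuarial models measures the fraction of comparable patient pairs that the model ranks correctly by predicted risk, accounting for censoring. A bare-bones version of Harrell's c-index (the thesis's exact implementation is not shown here, and the toy data are invented):

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    A pair (i, j) is comparable when the patient with the shorter follow-up
    time actually experienced the event. The pair is concordant when that
    patient also has the higher predicted risk; risk ties count as half.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: follow-up time (months), event flag (1 = event observed), risk score
times  = [5, 10, 12, 20]
events = [1, 1, 0, 0]
risks  = [0.9, 0.6, 0.6, 0.2]
print(c_index(times, events, risks))  # -> 0.9
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.705-0.740 values in perspective.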
Accelerating Radiation Dose Calculation with High Performance Computing and Machine Learning for Large-scale Radiotherapy Treatment Planning
Radiation therapy is powered by modern techniques in precise planning and execution of radiation delivery, which are being rapidly improved to maximize its benefit to cancer patients. In the last decade, radiotherapy experienced the introduction of advanced methods for automatic beam orientation optimization, real-time tumor tracking, daily plan adaptation, and many others, which improve the radiation delivery precision, planning ease and reproducibility, and treatment efficacy. However, such advanced paradigms necessitate the calculation of orders of magnitude more causal dose deposition data, increasing the time requirement of all pre-planning dose calculation. Principles of high-performance computing and machine learning were applied to address the insufficient speeds of widely-used dose calculation algorithms and to facilitate translation of these advanced treatment paradigms into clinical practice.
To accelerate CT-guided X-ray therapies, Collapsed-Cone Convolution-Superposition (CCCS), a state-of-the-art analytical dose calculation algorithm, was accelerated through its novel implementation on highly parallelized GPUs. This context-based GPU-CCCS approach takes advantage of X-ray dose deposition compactness to parallelize calculation across hundreds of beamlets, reducing hardware-specific overheads and enabling acceleration by two to three orders of magnitude compared to existing GPU-based beamlet-by-beamlet approaches. Near-linear increases in acceleration are achieved with a distributed, multi-GPU implementation of context-based GPU-CCCS.
Dose calculation for MR-guided treatment is complicated by electron return effects (EREs), exhibited by ionizing electrons in the strong magnetic field of the MRI scanner. EREs necessitate the use of much slower Monte Carlo (MC) dose calculation, limiting the clinical application of advanced treatment paradigms due to time restrictions. An automatically distributed framework for very-large-scale MC dose calculation was developed, granting linear scaling of dose calculation speed with the number of utilized computational cores. It was then harnessed to efficiently generate a large dataset of paired high- and low-noise MC doses in a 1.5 tesla magnetic field, which were used to train a novel deep convolutional neural network (CNN), DeepMC, to predict low-noise dose from faster high-noise MC simulation. DeepMC enables 38-fold acceleration of MR-guided X-ray beamlet dose calculation, while remaining synergistic with existing MC acceleration techniques to achieve multiplicative speed improvements. This work redefines the expectation of X-ray dose calculation speed, making it possible to apply new highly-beneficial treatment paradigms to standard clinical practice for the first time.
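The DeepMC speedup rests on a basic property of Monte Carlo estimation: statistical noise in an MC dose estimate falls only as the inverse square root of the number of simulated histories, so simulating far fewer histories is fast but noisy, and a denoising network recovers the low-noise result. A quick sketch of that relationship (the 38-fold figure comes from the abstract; the history counts are illustrative assumptions):

```python
import math

def relative_noise(n_histories, n_reference):
    """Noise of an MC run relative to a reference run: sigma ~ 1/sqrt(N),
    so the ratio of noise levels is sqrt(N_reference / N)."""
    return math.sqrt(n_reference / n_histories)

n_ref = 38_000_000       # hypothetical "low-noise" history count
n_fast = n_ref // 38     # 38x fewer histories -> roughly 38x faster simulation
print(round(relative_noise(n_fast, n_ref), 2))  # noise grows by sqrt(38) ~ 6.16
```

Equivalently, halving the noise by brute force costs 4x the histories, which is why pairing a cheap high-noise simulation with a learned denoiser can multiply with, rather than replace, other MC acceleration techniques.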
Methodology for complex dataflow application development
This thesis addresses problems inherent to the development of complex applications for reconfigurable systems. Many projects fail to complete, or take much longer than originally estimated, by relying on traditional iterative software development processes typically used with conventional computers. Even though designer productivity can be increased by abstract programming and execution models, e.g., dataflow, development methodologies considering the specific properties of reconfigurable systems do not exist.
The first contribution of this thesis is a design methodology to facilitate systematic development of complex applications using reconfigurable hardware in the context of High-Performance Computing (HPC). The proposed methodology is built upon a careful analysis of the original application, a software model of the intended hardware system, an analytical prediction of performance and on-chip area usage, and an iterative architectural refinement to resolve identified bottlenecks before writing a single line of code targeting the reconfigurable hardware. It is successfully validated using two real applications, both of which achieve state-of-the-art performance.
The second contribution extends this methodology to provide portability between devices in two steps. First, additional tool support for contemporary multi-die Field-Programmable Gate Arrays (FPGAs) is developed. An algorithm to automatically map logical memories to heterogeneous physical memories, with special attention to die boundaries, is proposed. As a result, only the proposed algorithm managed to successfully place and route all designs used in the evaluation, while the second-best algorithm failed on one third of all large applications. Second, best practices for performance portability between different FPGA devices are collected and evaluated on a financial use case, showing efficient resource usage on five different platforms.
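The memory-mapping problem can be pictured as constrained bin packing: each logical memory must fit whole into one physical bank. The first-fit-decreasing sketch below is a deliberate simplification (the thesis's algorithm additionally models die boundaries and routing, which this toy version ignores; all bank and memory names are invented):

```python
def map_memories(logical, physical):
    """First-fit decreasing: place each logical memory (name, depth) into the
    first physical bank (name, capacity, die) with room left. A logical
    memory is never split across banks (and hence never across dies)."""
    free = {name: cap for name, cap, _die in physical}
    placement = {}
    for name, depth in sorted(logical, key=lambda m: -m[1]):
        for bank, _cap, _die in physical:
            if free[bank] >= depth:
                free[bank] -= depth
                placement[name] = bank
                break
        else:
            raise ValueError(f"no bank can hold {name}")
    return placement

# Hypothetical FPGA with one URAM-like bank on die 0 and BRAM-like banks on each die
banks = [("uram0", 4096, 0), ("bram0", 1024, 0), ("bram1", 1024, 1)]
mems  = [("fifo_a", 3000), ("lut_b", 900), ("buf_c", 1000)]
print(map_memories(mems, banks))
```

Sorting by decreasing size first is the classic heuristic for reducing fragmentation; the evaluation result quoted above (all designs routed vs. one third failing) shows how much the real, die-aware algorithm improves on simpler placements.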
The third contribution applies the extended methodology to a real, highly demanding emerging application from the radiotherapy domain. A Monte-Carlo based simulation of dose accumulation in human tissue is accelerated using the proposed methodology to meet the real-time requirements of adaptive radiotherapy.
Accelerated respiratory-resolved 4D-MRI with separable spatio-temporal neural networks
Background: Respiratory-resolved four-dimensional magnetic resonance imaging (4D-MRI) provides essential motion information for accurate radiation treatments of mobile tumors. However, obtaining high-quality 4D-MRI suffers from long acquisition and reconstruction times.
Purpose: To develop a deep learning architecture to quickly acquire and reconstruct high-quality 4D-MRI, enabling accurate motion quantification for MRI-guided radiotherapy.
Methods: A small convolutional neural network called MODEST is proposed to reconstruct 4D-MRI by performing a spatial and temporal decomposition, omitting the need for 4D convolutions to use all the spatio-temporal information present in 4D-MRI. This network is trained on undersampled 4D-MRI after respiratory binning to reconstruct high-quality 4D-MRI obtained by compressed sensing reconstruction. The network is trained, validated, and tested on 4D-MRI of 28 lung cancer patients acquired with a T1-weighted golden-angle radial stack-of-stars sequence; the 4D-MRI of 18, 5, and 5 patients were used for training, validation, and testing, respectively. Network performance is evaluated on image quality, measured by the structural similarity index (SSIM), and on motion consistency, by comparing the position of the lung-liver interface on undersampled 4D-MRI before and after respiratory binning. The network is compared to conventional architectures such as a U-Net, which has 30 times more trainable parameters.
Results: MODEST can reconstruct high-quality 4D-MRI with higher image quality than a U-Net, despite a thirty-fold reduction in trainable parameters. High-quality 4D-MRI can be obtained using MODEST in approximately 2.5 minutes, including acquisition, processing, and reconstruction.
Conclusion: High-quality accelerated 4D-MRI can be obtained using MODEST, which is particularly interesting for MRI-guided radiotherapy.
Comment: Code available at https://gitlab.com/computational-imaging-lab/modes
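MODEST's parameter saving comes from the decomposition itself: separate spatial and temporal convolutions need far fewer weights than kernels spanning all four dimensions at once. A rough parameter count under assumed kernel sizes and channel widths (these numbers are illustrative, not MODEST's actual configuration):

```python
def conv_params(c_in, c_out, *kernel_dims):
    """Weight count of a dense convolution layer (bias terms ignored)."""
    n = c_in * c_out
    for k in kernel_dims:
        n *= k
    return n

c = 32  # assumed channel width
# One kernel spanning all four dimensions (3x3x3 spatial x 3 temporal)
full_4d = conv_params(c, c, 3, 3, 3, 3)
# Separable alternative: a 3D spatial convolution followed by a 1D temporal one
separable = conv_params(c, c, 3, 3, 3) + conv_params(c, c, 3)
print(full_4d, separable, round(full_4d / separable, 1))  # -> 82944 30720 2.7
```

The gap widens with larger kernels and more layers, which is consistent with the abstract's report of a thirty-fold parameter reduction relative to a U-Net while preserving image quality.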