Deep-Learning-Based Dose Predictor for Glioblastoma: Assessing the Sensitivity and Robustness for Dose Awareness in Contouring
External beam radiation therapy requires a sophisticated and laborious planning procedure. To improve the efficiency and quality of this procedure, machine-learning models that predict the resulting dose distributions have been introduced. The most recent dose prediction models are based on deep-learning architectures called 3D U-Nets, which give good approximations of the dose in 3D almost instantly. Our purpose was to train such a 3D dose prediction model for glioblastoma VMAT treatment and to test its robustness and sensitivity for the purpose of quality assurance of automatic contouring. From a cohort of 125 glioblastoma (GBM) patients, VMAT plans were created according to a clinical protocol. The initial model, a cascaded 3D U-Net, was trained on 60 cases, with 15 cases used for validation and 20 for testing. The prediction model was tested for sensitivity to dose changes when subjected to realistic contour variations. Additionally, the model was tested for robustness by exposing it to a worst-case test set containing out-of-distribution cases. The initially trained prediction model had a dose score of 0.94 Gy and a mean DVH (dose-volume histogram) score over all structures of 1.95 Gy. In terms of sensitivity, the model was able to predict the dose changes that occurred due to the contour variations with a mean error of 1.38 Gy. We obtained a 3D VMAT dose prediction model for GBM with limited data, providing good sensitivity to realistic contour variations. We tested and improved the model's robustness by targeted updates to the training set, making it a useful technique for introducing dose awareness in the contouring evaluation and quality assurance process.
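The dose score (mean voxel-wise dose error) and the DVH score used in this abstract can be illustrated with a minimal NumPy sketch. This is a generic illustration, not the authors' code; all function names and the toy dose grids are ours, and the exact score definitions in the paper may differ.

```python
import numpy as np

def dose_score(predicted, reference, mask=None):
    """Mean absolute voxel-wise dose difference (Gy) between a predicted
    and a reference 3D dose distribution, optionally restricted to a mask."""
    diff = np.abs(predicted - reference)
    if mask is not None:
        diff = diff[mask]
    return float(diff.mean())

def dvh(dose, structure_mask, bins):
    """Cumulative dose-volume histogram: fraction of the structure's
    volume receiving at least each threshold dose."""
    d = dose[structure_mask]
    return np.array([(d >= t).mean() for t in bins])

# Toy 3D dose grids in Gy (stand-ins for a reference plan and a prediction)
rng = np.random.default_rng(0)
ref = rng.uniform(0, 60, size=(8, 8, 8))
pred = ref + rng.normal(0, 1.0, size=ref.shape)
ptv = ref > 40  # hypothetical target region

print(round(dose_score(pred, ref, ptv), 2))
print(dvh(ref, ptv, bins=[0, 30, 60]))
```

Comparing the DVH of the predicted and reference doses structure by structure is what lets a dose predictor flag contour variations that would change the delivered dose.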
A Cascade Transformer-based Model for 3D Dose Distribution Prediction in Head and Neck Cancer Radiotherapy
Radiation therapy is the primary method used to treat cancer in the clinic.
Its goal is to deliver a precise dose to the planning target volume (PTV) while
protecting the surrounding organs at risk (OARs). However, the traditional
workflow used by dosimetrists to plan the treatment is time-consuming and
subjective, requiring iterative adjustments based on their experience. Deep
learning methods can be used to predict dose distribution maps to address these
limitations. The study proposes a cascade model for organs at risk segmentation
and dose distribution prediction. An encoder-decoder network has been developed
for the segmentation task, in which the encoder consists of transformer blocks,
and the decoder uses multi-scale convolutional blocks. Another cascade
encoder-decoder network has been proposed for dose distribution prediction
using a pyramid architecture. The proposed model has been evaluated using an
in-house head and neck cancer dataset of 96 patients and OpenKBP, a public head
and neck cancer dataset of 340 patients. The segmentation subnet achieved 0.79
and 2.71 for Dice and HD95 scores, respectively. This subnet outperformed the
existing baselines. The dose distribution prediction subnet outperformed the
winner of the OpenKBP2020 competition with 2.77 and 1.79 for dose and DVH
scores, respectively. The predicted dose maps showed good coincidence with
ground truth, with a superiority after linking with the auxiliary segmentation
task. The proposed model outperformed state-of-the-art methods, especially in
regions with low prescribed doses.
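The Dice score reported for the segmentation subnet measures overlap between a predicted and a reference mask. A minimal sketch (our own illustration, not the paper's implementation):

```python
import numpy as np

def dice(pred, target):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / total)

# Toy masks: 4-voxel prediction vs 6-voxel reference, 4 voxels shared
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice(a, b))  # 2*4 / (4 + 6) = 0.8
```

A Dice of 0.79, as reported, thus means roughly 80% volumetric overlap between predicted and reference organ contours.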
Potentials and caveats of AI in Hybrid Imaging
State-of-the-art patient management frequently mandates the investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new challenges arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting a maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Automatic Segmentation of the Mandible from Conventional Methods to Deep Learning: A Review
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize mandible volumes and to quantitatively evaluate particular mandible properties. However, mandible segmentation remains challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth, fillings or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review is to present the available fully automatic and semi-automatic segmentation methods for the mandible published in scientific articles. This review provides a clear description of these scientific advancements to help clinicians and researchers in this field develop novel automatic methods for clinical applications.
A Subabdominal MRI Image Segmentation Algorithm Based on Multi-Scale Feature Pyramid Network and Dual Attention Mechanism
This study aimed to solve the semantic gap and misalignment issues between encoding and decoding caused by multiple convolutional and pooling operations in U-Net when segmenting subabdominal MRI images during rectal cancer treatment. An MRI image segmentation algorithm is proposed based on a multi-scale feature pyramid network and a dual attention mechanism. Our innovation is the design of two modules: 1) a dilated convolution and a multi-scale feature pyramid network are used in the encoder to avoid the semantic gap; 2) a dual attention mechanism is designed to maintain the spatial information of U-Net and reduce misalignment. Experiments on a subabdominal MRI image dataset show that the proposed method achieves better performance than other methods. In conclusion, a multi-scale feature pyramid network can reduce the semantic gap, and the dual attention mechanism can align features between encoding and decoding.
Comment: 19 pages, 9 figures
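The dilated convolutions mentioned in the encoder enlarge the receptive field without adding parameters, by spacing the kernel taps apart. A 1D NumPy sketch of the idea (our own illustration; the paper uses 2D/3D convolutions inside a network):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1D convolution with a dilated kernel: taps are spaced
    `dilation` samples apart, so a k-tap kernel covers a span of
    (k - 1) * dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
avg = np.array([1/3, 1/3, 1/3])  # 3-tap averaging kernel
print(dilated_conv1d(x, avg, dilation=1))  # receptive field of 3
print(dilated_conv1d(x, avg, dilation=2))  # receptive field of 5, same 3 weights
```

Stacking layers with increasing dilation is how such encoders gather wide context cheaply, which is what helps bridge the semantic gap between shallow and deep features.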
Artificial General Intelligence for Radiation Oncology
The emergence of artificial general intelligence (AGI) is transforming
radiation oncology. As prominent vanguards of AGI, large language models (LLMs)
such as GPT-4 and PaLM 2 can process extensive texts and large vision models
(LVMs) such as the Segment Anything Model (SAM) can process extensive imaging
data to enhance the efficiency and precision of radiation therapy. This paper
explores full-spectrum applications of AGI across radiation oncology including
initial consultation, simulation, treatment planning, treatment delivery,
treatment verification, and patient follow-up. The fusion of vision data with
LLMs also creates powerful multimodal models that elucidate nuanced clinical
patterns. Together, AGI promises to catalyze a shift towards data-driven,
personalized radiation therapy. However, these models should complement human
expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Focused Decoding Enables 3D Anatomical Detection by Transformers
Detection Transformers represent end-to-end object detection approaches based
on a Transformer encoder-decoder architecture, exploiting the attention
mechanism for global relation modeling. Although Detection Transformers deliver
results on par with or even superior to their highly optimized CNN-based
counterparts operating on 2D natural images, their success is closely coupled
to access to a vast amount of training data. This, however, restricts the
feasibility of employing Detection Transformers in the medical domain, as
access to annotated data is typically limited. To tackle this issue and
facilitate the advent of medical Detection Transformers, we propose a novel
Detection Transformer for 3D anatomical structure detection, dubbed Focused
Decoder. Focused Decoder leverages information from an anatomical region atlas
to simultaneously deploy query anchors and restrict the cross-attention's field
of view to regions of interest, which allows for a precise focus on relevant
anatomical structures. We evaluate our proposed approach on two publicly
available CT datasets and demonstrate that Focused Decoder not only provides
strong detection results and thus alleviates the need for a vast amount of
annotated data but also exhibits exceptional and highly intuitive
explainability of results via attention weights. Our code is available at https://github.com/bwittmann/transoar.
Comment: Accepted for publication at the Journal of Machine Learning for Biomedical Imaging (MELBA), https://melba-journal.org/2023:00
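The core idea of restricting the cross-attention's field of view to a region of interest can be sketched as masked attention: logits for keys outside each query's allowed region are set to -inf before the softmax. This is a simplified NumPy illustration under our own assumptions, not the released transoar implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focused_cross_attention(q, k, v, roi_mask):
    """Cross-attention with a restricted field of view: logits for keys
    outside each query's region of interest are set to -inf, so every
    query aggregates values only from its allowed region."""
    scale = q.shape[-1] ** -0.5
    logits = (q @ k.T) * scale                  # (n_queries, n_keys)
    logits = np.where(roi_mask, logits, -np.inf)
    weights = softmax(logits)
    return weights @ v, weights

rng = np.random.default_rng(0)
n_q, n_k, d = 2, 6, 4
q = rng.normal(size=(n_q, d))
k = rng.normal(size=(n_k, d))
v = rng.normal(size=(n_k, d))
roi = np.zeros((n_q, n_k), dtype=bool)
roi[0, :3] = True   # query anchor 0 may only attend to the first 3 keys
roi[1, 3:] = True   # query anchor 1 only to the last 3
out, w = focused_cross_attention(q, k, v, roi)
print(np.round(w, 3))  # zero attention weight outside each query's ROI
```

Deploying query anchors per anatomical structure and masking attention to an atlas-derived region is what yields both the data efficiency and the interpretable attention maps described above.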