7 research outputs found

    Fully-automated, CT-only GTV contouring for palliative head and neck radiotherapy.

    Planning for palliative radiotherapy is performed without the benefit of MR or PET imaging in many clinics. Here, we investigated CT-only GTV delineation for palliative treatment of head and neck cancer. Two multi-institutional datasets of palliative-intent treatment plans were retrospectively acquired: a set of 102 non-contrast-enhanced CTs and a set of 96 contrast-enhanced CTs. The nnU-Net auto-segmentation network was chosen for its strength in medical image segmentation, and five approaches were trained separately: (1) heuristic-cropped non-contrast images with a single GTV channel, (2) non-contrast images cropped around a manually placed point at the tumor center, with a single GTV channel, (3) contrast-enhanced images with a single GTV channel, (4) contrast-enhanced images with separate primary and nodal GTV channels, and (5) contrast-enhanced images plus synthetic MR images with separate primary and nodal GTV channels. Across the five approaches, the median Dice similarity coefficient ranged from 0.6 to 0.7, surface Dice from 0.30 to 0.56, and 95th-percentile Hausdorff distance from 14.7 to 19.7 mm. Only surface Dice exhibited a statistically significant difference across the five approaches under a two-tailed Wilcoxon rank-sum test (p ≤ 0.05). Our CT-only results met or exceeded published values for head and neck GTV autocontouring using multi-modality images. However, significant edits would be necessary before clinical use in palliative radiotherapy.
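The abstract reports volumetric Dice, surface Dice, and 95th-percentile Hausdorff distance but, as an abstract, includes no code. As a minimal sketch of how two of these standard contour-agreement metrics are typically computed from binary masks (function names and the erosion-based surface extraction are illustrative, not the study's implementation):

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_points(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Boundary voxel coordinates (in mm): the mask minus its binary erosion."""
    eroded = ndimage.binary_erosion(mask)
    surf = np.logical_and(mask, np.logical_not(eroded))
    return np.argwhere(surf) * np.asarray(spacing)

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two binary masks."""
    pa, pb = surface_points(a, spacing), surface_points(b, spacing)
    # Distance from every surface point of one mask to the nearest
    # surface point of the other, pooled over both directions.
    d_ab = cKDTree(pb).query(pa)[0]
    d_ba = cKDTree(pa).query(pb)[0]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```

Surface Dice additionally requires a tolerance parameter (the fraction of surface points within a given distance of the other surface), which can be built from the same nearest-neighbour distances.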

    Auto-segmentation in Pancreatic and Liver Radiation Therapy

    Background: Gastrointestinal cancers exhibit a high mortality rate compared to other cancer types. Among them, pancreatic cancer ranks as the fourth leading cause of cancer-related deaths worldwide, with a five-year survival rate of only 9%. Hepatocellular carcinoma (HCC), another aggressive cancer, is rapidly becoming the leading cause of cancer-related deaths in the United States. The treatment of both liver and pancreatic cancer relies heavily on a multidisciplinary approach. Dose-escalated treatment strategies such as stereotactic body radiation therapy (SBRT) are emerging as an important pillar of the management of liver and pancreatic cancer. The success of these treatment modalities hinges on the precise and standardized segmentation of organs-at-risk and target volumes to ensure optimal treatment-plan quality.
    Methods: We first developed an automated organs-at-risk segmentation tool for upper abdominal radiation therapy. A dataset of 70 patients was collected and used as the training set and benchmark for our auto-segmentation tool. We employed the adaptive nnU-Net architecture to build a model ensemble capable of contouring the duodenum, small bowel (ileum and jejunum), large bowel, liver, spleen, kidneys, and spinal cord. The segmentation tool was evaluated on 75 patients using both contrast-enhanced and non-contrast-enhanced CT images, with a five-point Likert scale assessment by five experts from three institutions. To capture contours requiring major edits, we developed a distance-based quality assurance (QA) system that identifies CT scans likely to yield suboptimal contours needing time-consuming major edits. The QA system was evaluated on clinical CT scans, with the clinical review score serving as the ground truth.
For target volume segmentation, we employed transformer-based architectures, leveraging self-supervised learning and uncertainty estimation techniques to enhance performance and allow stylistic customization. A total of 3094 unlabeled CT scans from liver cancer patients, along with 5050 publicly available CT scans, were collected for self-supervised pretraining for liver tumor segmentation. The pretrained encoders were then used to optimize downstream liver tumor segmentation models, evaluating the impact of self-supervised learning on tumor segmentation performance. For pancreatic tumor segmentation, we developed an ensemble-based approach incorporating multiple segmentation styles; probability thresholding was employed to generate the final segmentation, enabling customization according to clinicians' preferences.
Results: Our organs-at-risk segmentation tool achieved a clinical acceptance rate of over 90% for all organs except the duodenum, demonstrating its delineation accuracy. Quantitative results were comparable to state-of-the-art methods despite a small but high-quality dataset. The QA system achieved an AUC of 0.89 for capturing contours requiring major edits on randomly sampled clinical CT scans. For liver tumor segmentation, self-supervised learning yielded a 4–5% performance improvement when diverse unlabeled data were used for pretraining, highlighting the importance of incorporating a wide range of data during the pretraining stage. For pancreatic tumor segmentation, our ensemble-based method proved highly effective: it provided pixel-by-pixel uncertainty estimates and allowed customization through probability thresholding. The customized contours surpassed the performance of the state-of-the-art segmentation model, even when using identical training data, pretraining techniques, and hyperparameters.
Conclusion: Our auto-segmentation system for organs-at-risk achieved high clinical acceptance rates in upper-abdominal radiation treatment, and the accompanying QA tool effectively captured contours requiring major edits. Leveraging a wide range of unlabeled data in self-supervised learning improved the performance of our transformer-based segmentation system, and our uncertainty-guided segmentation network allowed customization and identification of low-confidence regions. Our suite of auto-segmentation tools for pancreatic and liver cancer radiation treatment has the potential to streamline clinical workflows while prioritizing patient safety.
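The probability-thresholded ensemble described in the Methods can be sketched as follows. This is a minimal illustration of the general technique, assuming the ensemble's per-model probability maps are already available; `ensemble_segmentation` is a hypothetical name, not the dissertation's code:

```python
import numpy as np

def ensemble_segmentation(prob_maps, threshold: float = 0.5):
    """Combine per-model probability maps and threshold the ensemble mean.

    A lower threshold yields more generous (larger) contours, a higher
    threshold more conservative ones -- the "style" knob a clinician can
    tune. The per-voxel standard deviation across models serves as a
    simple uncertainty estimate, highlighting low-confidence regions.
    """
    probs = np.stack(prob_maps, axis=0)   # shape: (n_models, *volume_shape)
    mean_prob = probs.mean(axis=0)
    uncertainty = probs.std(axis=0)       # high where the models disagree
    segmentation = mean_prob >= threshold
    return segmentation, uncertainty
```

Sweeping `threshold` over a range and presenting the resulting family of contours is one way to expose the stylistic customization the abstract describes.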

    Fully automated deep learning based auto-contouring of liver segments and spleen on contrast-enhanced CT images

    Manual delineation of liver segments on computed tomography (CT) images for primary/secondary liver cancer (LC) patients is time-intensive and prone to inter-/intra-observer variability. Therefore, we developed a deep-learning-based model to auto-contour liver segments and the spleen on contrast-enhanced CT (CECT) images. We trained two models, a 3D patch-based attention U-Net (M_paU-Net) and the 3D full-resolution nnU-Net (M_nnU-Net), to determine the best architecture (BA). The BA was then trained with vessels (M_Vess) and with the spleen (M_seg+spleen) to assess the impact on segment contouring. Models were trained, validated, and tested on 160 (C_RTTrain), 40 (C_RTVal), 33 (C_LS), 25 (C_CH), and 20 (C_PVE) CECT scans of LC patients. M_nnU-Net outperformed M_paU-Net across all segments, with median differences in Dice similarity coefficient (DSC) ranging from 0.03 to 0.05 (p < 0.05); however, both were only slightly better than M_Vess, by DSC up to 0.02. The final model, M_seg+spleen, showed a mean DSC of 0.89, 0.82, 0.88, 0.87, 0.96, and 0.95 for segments 1, 2, 3, 4, 5–8, and the spleen, respectively, on the entire test sets. Qualitatively, more than 85% of cases showed a Likert score ≥ 3 on the test sets. Our final model provides clinically acceptable contours of liver segments and the spleen that are usable in treatment planning.

    Synthetic Megavoltage Cone Beam Computed Tomography Image Generation for Improved Contouring Accuracy of Cardiac Pacemakers

    In this study, we aimed to enhance the contouring accuracy of cardiac pacemakers by improving their visualization, using deep learning models to predict MV CBCT images from kV CT or CBCT images. Ten pacemakers and four thorax phantoms were included, creating a total of 35 combinations. Each combination was imaged on a Varian Halcyon (kV/MV CBCT images) and a Siemens SOMATOM CT scanner (kV CT images). Two generative adversarial network (GAN)-based models, cycleGAN and conditional GAN (cGAN), were trained to generate synthetic MV (sMV) CBCT images from kV CT/CBCT images using twenty-eight datasets (80%). The pacemakers in the sMV CBCT images and the original MV CBCT images were manually delineated and reviewed by three users. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and mean surface distance (MSD) were used to compare contour accuracy. Visual inspection showed improved visualization of pacemakers on sMV CBCT images compared to the original kV CT/CBCT images, and cGAN demonstrated superior performance over cycleGAN in enhancing pacemaker visualization. Using the cGAN model, the mean DSC, HD95, and MSD for contours on sMV CBCT images generated from kV CT/CBCT images were 0.91 ± 0.02/0.92 ± 0.01, 1.38 ± 0.31 mm/1.18 ± 0.20 mm, and 0.42 ± 0.07 mm/0.36 ± 0.06 mm, respectively. Deep learning-based methods, specifically cycleGAN and cGAN, can effectively enhance the visualization of pacemakers in thorax kV CT/CBCT images, thereby improving the contouring precision of these devices.
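Alongside DSC and HD95, this abstract reports the mean surface distance (MSD). A sketch of how the symmetric MSD is commonly computed, assuming surface point clouds (in mm) have already been extracted from the contour masks; the function name is illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_distance(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
    """Symmetric mean surface distance between two (N, 3) point clouds in mm.

    Averages, over both directions, the distance from each point on one
    surface to its nearest neighbour on the other surface.
    """
    d_ab = cKDTree(surf_b).query(surf_a)[0]  # each point of A -> nearest of B
    d_ba = cKDTree(surf_a).query(surf_b)[0]  # each point of B -> nearest of A
    return float((d_ab.mean() + d_ba.mean()) / 2.0)
```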

    Multi-organ segmentation of abdominal structures from non-contrast and contrast enhanced CT images

    Manually delineating upper abdominal organs at risk (OARs) is a time-consuming task. To develop a deep-learning-based tool for accurate and robust auto-segmentation of these OARs, forty pancreatic cancer patients with contrast-enhanced breath-hold computed tomography (CT) images were selected. We trained a three-dimensional (3D) U-Net ensemble that automatically segments all organ contours concurrently using the self-configuring nnU-Net framework. Our tool's performance was quantitatively assessed on a held-out test set of 30 patients. Five radiation oncologists from three institutions assessed the tool's performance on an additional 75 randomly selected test patients using a 5-point Likert scale. The mean (± std. dev.) Dice similarity coefficients between the automatic segmentations and the ground truth on contrast-enhanced CT images were 0.80 ± 0.08, 0.89 ± 0.05, 0.90 ± 0.06, 0.92 ± 0.03, 0.96 ± 0.01, 0.97 ± 0.01, 0.96 ± 0.01, and 0.96 ± 0.01 for the duodenum, small bowel, large bowel, stomach, liver, spleen, right kidney, and left kidney, respectively. 89.3% (contrast-enhanced) and 85.3% (non-contrast-enhanced) of duodenum contours were scored 3 or above, requiring only minor edits, and more than 90% of the other organs' contours were scored 3 or above. Our tool achieved a high level of clinical acceptability with a small training dataset and provides accurate contours for treatment planning.

    Automated Contouring and Planning in Radiation Therapy: What Is ‘Clinically Acceptable’?

    Developers and users of artificial-intelligence-based tools for automatic contouring and treatment planning in radiotherapy are expected to assess the clinical acceptability of these tools. However, what is 'clinical acceptability'? Quantitative and qualitative approaches have been used to assess this ill-defined concept, all of which have advantages and disadvantages or limitations. The approach chosen may depend on the goal of the study as well as on available resources. In this paper, we discuss various aspects of 'clinical acceptability' and how they can move us toward a standard for defining the clinical acceptability of new autocontouring and planning tools.