
    Semi-supervised learning towards automated segmentation of PET images with limited annotations: Application to lymphoma patients

    The time-consuming task of manual segmentation challenges routine systematic quantification of disease burden. Convolutional neural networks (CNNs) hold significant promise for reliably identifying the locations and boundaries of tumors in PET scans. We aimed to reduce the need for annotated data via semi-supervised approaches, with application to PET images of diffuse large B-cell lymphoma (DLBCL) and primary mediastinal large B-cell lymphoma (PMBCL). We analyzed 18F-FDG PET images of 292 patients with PMBCL (n=104) and DLBCL (n=188) (n=232 for training and validation, and n=60 for external testing). We employed fuzzy c-means (FCM) and Mumford-Shah (MS) losses for training a 3D U-Net with different levels of supervision: i) fully supervised methods with labeled FCM (LFCM) as well as Unified focal and Dice loss functions; ii) unsupervised methods with robust FCM (RFCM) and MS loss functions; and iii) semi-supervised methods based on FCM (RFCM+LFCM), as well as MS loss combined with supervised Dice loss (MS+Dice). The Unified focal loss yielded a higher Dice score (mean +/- standard deviation (SD)) (0.73 +/- 0.03; 95% CI, 0.67-0.80) than the Dice loss (p<0.01). The semi-supervised approach (RFCM+alpha*LFCM) with alpha=0.3 showed the best performance, with a Dice score of 0.69 +/- 0.03 (95% CI, 0.45-0.77), outperforming (MS+alpha*Dice) at any supervision level (any alpha) (p<0.01). The best performer among the (MS+alpha*Dice) semi-supervised approaches, with alpha=0.2, showed a Dice score of 0.60 +/- 0.08 (95% CI, 0.44-0.76) compared to the other supervision levels of this approach (p<0.01). Semi-supervised learning via the FCM loss (RFCM+alpha*LFCM) improved performance compared to supervised approaches. Considering the time-consuming nature of expert manual delineation and intra-observer variability, semi-supervised approaches have significant potential for automated segmentation workflows.
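    The semi-supervised losses above are weighted sums of an unsupervised term (computed without labels) and a supervised term (computed on annotated voxels). A minimal sketch of this combination, with a soft Dice loss as the supervised term; the helper names and toy values are illustrative, not the paper's implementation:

    ```python
    def soft_dice_loss(pred, target, eps=1e-6):
        """Soft Dice loss over flattened probability maps: 1 - Dice coefficient."""
        inter = sum(p * t for p, t in zip(pred, target))
        return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

    def semi_supervised_loss(unsup_term, sup_term, alpha=0.3):
        """Weighted combination of an unsupervised loss (e.g. RFCM or MS) and a
        supervised loss (e.g. LFCM or Dice); alpha=0.3 was the best-performing
        weight reported for RFCM + alpha*LFCM in the abstract above."""
        return unsup_term + alpha * sup_term

    # toy example: a perfect prediction drives the supervised Dice term to ~0
    pred = [1.0, 1.0, 0.0, 0.0]
    target = [1, 1, 0, 0]
    dice = soft_dice_loss(pred, target)
    total = semi_supervised_loss(unsup_term=0.5, sup_term=dice, alpha=0.3)
    ```

    In training, the unsupervised term would be evaluated on all voxels and the supervised term only where annotations exist, so alpha controls how strongly the scarce labels steer the network.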

    Image Quality and Activity Optimization in Oncologic F-18-FDG PET Using the Digital Biograph Vision PET/CT System

    The first Biograph Vision PET/CT system (Siemens Healthineers) was installed at the University Medical Center Groningen. The improved performance of this system could allow for a reduction in administered activity or scan duration. This study evaluated the effects of reduced scan duration in oncologic 18F-FDG PET imaging on quantitative and subjective imaging parameters and its influence on clinical image interpretation. Methods: Patients referred for a clinical PET/CT scan were enrolled in this study, received a weight-based 18F-FDG injected activity, and underwent list-mode PET acquisition at 180 s per bed position (s/bp). Acquired PET data were reconstructed using the vendor-recommended clinical reconstruction protocol (hereafter "clinical"), using the clinical protocol with additional 2-mm gaussian filtering (hereafter "clinical+G2"), and, in conformance with European Association of Nuclear Medicine Research Ltd. (EARL) specifications, using different scan durations per bed position (180, 120, 60, 30, and 10 s). Reconstructed images were quantitatively assessed for comparison of SUVs and noise. In addition, clinically reconstructed images were qualitatively evaluated by 3 nuclear medicine physicians. Results: In total, 30 oncologic patients (22 men, 8 women; age: 48-88 y [range], 67 ± 9.6 y [mean ± SD]) received a single weight-based (3 MBq/kg) 18F-FDG injected activity (weight: 45-123 kg [range], 81 ± 15 kg [mean ± SD]; activity: 135-380 MBq [range], 241 ± 47.3 MBq [mean ± SD]). Significant differences in lesion SUVmax were found between the 180-s/bp images and the 30- and 10-s/bp images reconstructed using the clinical protocols, whereas no differences were found in lesion SUVpeak. EARL-compliant images did not show differences in lesion SUVmax or SUVpeak between scan durations. Quantitative parameters showed minimal deviation (∼5%) in the 60-s/bp images.
Therefore, further subjective image quality assessment was conducted using the 60-s/bp images. Qualitative assessment revealed the influence of personal preference on physicians' willingness to adopt the 60-s/bp images in clinical practice. Although quantitative PET parameters differed minimally, an increase in noise was observed. Conclusion: With the Biograph Vision PET/CT system for oncologic 18F-FDG imaging, scan duration or administered activity could be reduced by a factor of 3 or more when using the clinical+G2 or the EARL-compliant reconstruction protocol.
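    Image noise in comparisons like this is commonly quantified as the coefficient of variation (CoV) of voxel values in a homogeneous reference region such as the liver; a generic sketch of that metric, not necessarily the exact one used in the study (the toy ROI values are invented):

    ```python
    def coefficient_of_variation(roi_values):
        """CoV = sample SD / mean of voxel values in a homogeneous ROI;
        a higher CoV at shorter scan durations reflects increased image noise."""
        n = len(roi_values)
        mean = sum(roi_values) / n
        sd = (sum((v - mean) ** 2 for v in roi_values) / (n - 1)) ** 0.5
        return sd / mean

    # e.g. compare liver-ROI noise between 180 s/bp and 60 s/bp reconstructions
    cov_180 = coefficient_of_variation([2.1, 2.0, 1.9, 2.0, 2.1])
    cov_60 = coefficient_of_variation([2.4, 1.6, 1.8, 2.3, 1.9])
    ```

    Because fewer counts are acquired at shorter durations, the 60-s/bp ROI spreads more around the same mean uptake, which is exactly what the CoV captures while SUV-based means stay nearly unchanged.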

    An artificial intelligence method using FDG PET to predict treatment outcome in diffuse large B cell lymphoma patients

    Convolutional neural networks (CNNs) may improve response prediction in diffuse large B-cell lymphoma (DLBCL). The aim of this study was to investigate the feasibility of a CNN using maximum intensity projection (MIP) images from 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) baseline scans to predict the probability of time-to-progression (TTP) within 2 years and compare it with the International Prognostic Index (IPI), i.e. a clinically used score. 296 DLBCL 18F-FDG PET/CT baseline scans collected from a prospective clinical trial (HOVON-84) were analysed. Cross-validation was performed using coronal and sagittal MIPs. An external dataset (340 DLBCL patients) was used to validate the model. Association between the probabilities, metabolic tumour volume and Dmaxbulk was assessed. Probabilities for PET scans with synthetically removed tumors were also assessed. The CNN provided a 2-year TTP prediction with an area under the curve (AUC) of 0.74, outperforming the IPI-based model (AUC = 0.68). Furthermore, high probabilities (> 0.6) of the original MIPs were considerably decreased after removing the tumours (< 0.4, generally). These findings suggest that MIP-based CNNs are able to predict treatment outcome in DLBCL.
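    A maximum intensity projection collapses the 3D PET volume along one axis by keeping the brightest voxel along each projection ray, turning a volume into the 2D images the CNN consumes. A minimal sketch for a volume stored as nested `[z][y][x]` lists; the mapping of axes to coronal/sagittal views is an assumption about the data layout:

    ```python
    def mip(volume, axis):
        """Maximum intensity projection of a [z][y][x] volume.
        axis=1 collapses y (a coronal-style view in this layout);
        axis=2 collapses x (sagittal-style)."""
        nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
        if axis == 1:
            return [[max(volume[z][y][x] for y in range(ny)) for x in range(nx)]
                    for z in range(nz)]
        if axis == 2:
            return [[max(volume[z][y][x] for x in range(nx)) for y in range(ny)]
                    for z in range(nz)]
        raise ValueError("axis must be 1 or 2")

    # 2x2x2 toy volume: each projected pixel keeps the hotter voxel on its ray
    vol = [[[0, 5], [1, 2]],
           [[3, 0], [0, 7]]]
    coronal = mip(vol, axis=1)   # [[1, 5], [3, 7]]
    ```

    Using two orthogonal MIPs, as the study does, preserves most of the lesion layout while reducing a whole-body volume to inputs a standard 2D CNN can process cheaply.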

    Semi-automated 18F-FDG PET segmentation methods for tumor volume determination in Non-Hodgkin lymphoma patients: a literature review, implementation and multi-threshold evaluation

    In the treatment of Non-Hodgkin lymphoma (NHL), multiple therapeutic options are available. Improving outcome prediction is essential to optimize treatment. The metabolically active tumor volume (MATV) has been shown to be a prognostic factor in NHL. It is usually retrieved using semi-automated thresholding methods based on standardized uptake values (SUV), calculated from 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images. However, there is currently no consensus method for NHL. The aim of this study was to review the literature on the different segmentation methods used and to evaluate selected methods using an in-house software tool. The MUltiple SUV Threshold (MUST)-segmenter was developed, in which tumor locations are identified by placing seed points on the PET images, followed by region growing. Based on a literature review, 9 SUV thresholding methods were selected and MATVs were extracted. The MUST-segmenter was applied in a cohort of 68 patients with NHL. Differences in MATVs were assessed with paired t-tests and with correlation and distribution figures. High variability and significant differences between the MATVs of the different segmentation methods (p < 0.05) were observed in the NHL patients. Median MATVs ranged from 35 to 211 cc. No consensus for determining MATV is available in the literature. Using the MUST-segmenter with the 9 selected SUV thresholding methods, we demonstrated large and significant variation in MATVs. Identifying the optimal segmentation method for patients with NHL is essential to further improve predictions of toxicity, response, and treatment outcome, which can be facilitated by the MUST-segmenter.
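    Seed-point segmentation of the kind described above can be sketched as breadth-first region growing that accepts connected voxels above an SUV threshold, here the common 41%-of-SUVmax rule. The 2D grid and helper names are illustrative, not the MUST-segmenter's actual implementation:

    ```python
    from collections import deque

    def region_grow(suv, seed, threshold):
        """Grow a 4-connected region from a seed point, adding pixels whose
        SUV meets the threshold (the seed itself is always included)."""
        rows, cols = len(suv), len(suv[0])
        region, queue = {seed}, deque([seed])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in region and suv[nr][nc] >= threshold):
                    region.add((nr, nc))
                    queue.append((nr, nc))
        return region

    # 41MAX-style threshold: 41% of the lesion's SUVmax at the seed
    suv = [[0.5, 0.6, 0.4],
           [0.7, 8.0, 4.0],
           [0.3, 5.0, 0.9]]
    seed = (1, 1)
    threshold = 0.41 * suv[seed[0]][seed[1]]    # 3.28
    lesion = region_grow(suv, seed, threshold)  # {(1, 1), (1, 2), (2, 1)}
    ```

    Swapping the threshold rule (fixed SUV 2.5, 50% of SUVpeak, background-adapted, etc.) while keeping the same growing step is what lets a single tool compare many MATV definitions.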

    Clinically feasible semi-automatic workflows for measuring metabolically active tumour volume in metastatic melanoma

    PURPOSE: Metabolically active tumour volume (MATV) is a potential quantitative positron emission tomography (PET) imaging biomarker in melanoma. Accumulating data indicate that low MATV may predict an increased chance of response to immunotherapy and longer overall survival. However, metastatic melanoma can present with numerous (small) tumour lesions, making manual tumour segmentation time-consuming. The aim of this study was to evaluate multiple semi-automatic segmentation workflows to determine the reliability and reproducibility of MATV measurements in patients with metastatic melanoma. METHODS: An existing cohort of 64 adult patients with histologically proven metastatic melanoma was used in this study. 18F-FDG PET/CT diagnostic baseline images were acquired using a European Association of Nuclear Medicine Research Ltd (EARL)-accredited Siemens Biograph mCT PET/CT system (Siemens Healthineers, Knoxville, USA). PET data were analysed using manual, gradient-based segmentation and five different semi-automatic methods: three direct PET image-derived delineations (41MAX, A50P and SUV40) and two based on a majority-vote approach (MV2 and MV3), without and with (suffix '+') manual lesion addition. Correlation between the different segmentation methods and their respective associations with overall survival were assessed. RESULTS: Correlation between the MATVs derived by manual segmentation and the semi-automated tumour segmentations ranged from R2 = 0.41 for A50P to R2 = 0.85 for SUV40+ and MV2+. Manual MATV segmentation did not differ significantly from the semi-automatic methods SUV40 (∆MATV mean ± SD 0.08 ± 0.60 mL, P = 0.303), SUV40+ (∆MATV - 0.10 ± 0.51 mL, P = 0.126), MV2+ (∆MATV - 0.09 ± 0.62 mL, P = 0.252) and MV3+ (∆MATV - 0.03 ± 0.55 mL, P = 0.615).
Log-rank tests showed statistically significant overall survival differences between patients with above-median and below-median MATV for all segmentation methods, with areas under the ROC curves of 0.806 for manual segmentation and between 0.756 (41MAX) and 0.807 (MV3+) for the semi-automatic segmentations. CONCLUSIONS: Simple and fast semi-automated FDG PET segmentation workflows yield accurate and reproducible MATV measurements that correlate well with manual segmentation in metastatic melanoma. The most readily applicable and user-friendly method, SUV40, allows feasible MATV measurement in the prospective multicentre studies required for validation of this potential PET imaging biomarker for clinical use.
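    A majority-vote combination such as MV2 keeps a voxel only if at least a given number of the underlying segmentations include it. A minimal sketch over aligned, flattened binary masks; this simplifies the workflow described above, whose exact voting inputs are the three direct delineation methods:

    ```python
    def majority_vote(masks, min_votes=2):
        """Combine aligned binary masks voxel-wise: a voxel is kept when at
        least min_votes of the input segmentations include it (MV2-style
        for min_votes=2 over three masks; MV3-style for min_votes=3)."""
        return [1 if sum(votes) >= min_votes else 0 for votes in zip(*masks)]

    # flattened single-lesion masks from three different delineation methods
    m_41max = [1, 1, 0, 0, 1]
    m_a50p  = [1, 0, 0, 1, 1]
    m_suv40 = [1, 1, 1, 0, 1]
    mv2 = majority_vote([m_41max, m_a50p, m_suv40], min_votes=2)  # [1, 1, 0, 0, 1]
    ```

    Voting suppresses voxels that only one method claims, which is why the MV variants trade a little sensitivity for robustness against any single threshold's failure mode.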

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationships with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of imaging acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging acquisition to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable visualizations necessary for the diagnostic procedure in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real-time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets. We also examined the computational performance of our methods for these scenarios.
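    Visibility, the quantity a visibility histogram accumulates, falls out of standard front-to-back alpha compositing: each sample's contribution to the final pixel is its opacity times the transparency accumulated in front of it. A minimal single-ray sketch; the step-function transfer function is an arbitrary illustration, not the thesis's TF:

    ```python
    def per_sample_visibility(samples, opacity_tf):
        """Front-to-back compositing along one ray: the visibility of sample i
        is alpha_i times the transmittance of everything in front of it."""
        transmittance = 1.0
        visibility = []
        for s in samples:
            alpha = opacity_tf(s)
            visibility.append(transmittance * alpha)
            transmittance *= (1.0 - alpha)
        return visibility

    # toy opacity transfer function: fully opaque above 0.5, transparent below
    tf = lambda v: 1.0 if v > 0.5 else 0.0
    vis = per_sample_visibility([0.2, 0.9, 0.9], tf)  # [0.0, 1.0, 0.0]
    ```

    A visibility histogram then sums these per-sample visibilities into intensity bins over all rays, revealing which intensity ranges the current TF actually shows; adaptive binning reduces the cost of recomputing that histogram as parameters change.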

    Full automation of total metabolic tumor volume from FDG-PET/CT in DLBCL for baseline risk assessments

    BACKGROUND: Current radiological assessments of (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) imaging data in diffuse large B-cell lymphoma (DLBCL) can be time consuming, do not yield real-time information regarding disease burden and organ involvement, and hinder the use of FDG-PET to potentially limit the reliance on invasive procedures (e.g. bone marrow biopsy) for risk assessment. METHODS: Our aim was to enable real-time assessment of imaging-based risk factors at a large scale, and we propose a fully automatic artificial intelligence (AI)-based tool to rapidly extract FDG-PET imaging metrics in DLBCL. On availability of a scan, in combination with clinical data, our approach generates clinically informative risk scores with minimal resource requirements. Overall, 1268 patients with previously untreated DLBCL from the phase III GOYA trial (NCT01287741) were included in the analysis (training: n = 846; hold-out: n = 422). RESULTS: Our AI-based model comprising imaging and clinical variables yielded a tangible prognostic improvement compared to clinical models without imaging metrics. We observed a risk increase for progression-free survival (PFS) with hazard ratios (HR) of 1.87 (95% CI: 1.31–2.67) vs 1.38 (95% CI: 0.98–1.96) (C-index: 0.59 vs 0.55), and a risk increase for overall survival (OS) (HR: 2.16 (95% CI: 1.37–3.40) vs 1.40 (95% CI: 0.90–2.17); C-index: 0.59 vs 0.55). The combined model defined a high-risk population with 35% and 42% increased odds of a 4-year PFS and OS event, respectively, versus the International Prognostic Index (IPI) components alone. The method also identified a subpopulation with a 2-year central nervous system (CNS) relapse probability of 17.1%. CONCLUSION: Our tool enables enhanced risk stratification compared with the IPI, and the results indicate that imaging can be used to improve the prediction of CNS relapse in DLBCL.
These findings support the integration of clinically informative AI-generated imaging metrics into clinical workflows to improve the identification of high-risk DLBCL patients. TRIAL REGISTRATION: clinicaltrials.gov NCT01287741. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s40644-022-00476-0.
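    Once lesions are segmented automatically, the headline imaging metric, total metabolic tumor volume (TMTV), reduces to counting tumor-flagged voxels and scaling by the voxel size. A minimal sketch; the mask layout and voxel size are illustrative, not values from the GOYA analysis:

    ```python
    def total_metabolic_tumor_volume(lesion_mask, voxel_volume_ml):
        """TMTV in mL: the number of voxels flagged as tumor across the whole
        body, multiplied by the volume of a single voxel."""
        n_voxels = sum(sum(row) for slice_ in lesion_mask for row in slice_)
        return n_voxels * voxel_volume_ml

    # toy 3D binary mask (two 2x2 slices) with 4 tumor voxels of 0.064 mL each
    mask = [[[0, 1], [1, 0]],
            [[0, 1], [1, 0]]]
    tmtv = total_metabolic_tumor_volume(mask, voxel_volume_ml=0.064)
    ```

    Because this step is trivial once a whole-body mask exists, the engineering effort (and the clinical value) of tools like the one above lies almost entirely in producing that mask reliably and in real time.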