Evaluating automated longitudinal tumor measurements for glioblastoma response assessment.
Automated tumor segmentation tools for glioblastoma show promising performance. To apply these tools to automated response assessment, consistency in longitudinal segmentation and tumor measurement is critical. This study aimed to determine whether BraTumIA and HD-GLIO are suited for this task. We evaluated the two segmentation tools with respect to automated response assessment on the single-center retrospective LUMIERE dataset, comprising 80 patients and a total of 502 post-operative time points. Volumetry and automated bi-dimensional measurements were compared with expert measurements following the Response Assessment in Neuro-Oncology (RANO) guidelines. The longitudinal trend agreement between the expert and the automated methods was evaluated, and the RANO progression thresholds were tested against the expert-derived time-to-progression (TTP). The correlation between TTP and overall survival (OS) was used to check the progression thresholds. We also evaluated the automated detection and influence of non-measurable lesions. The tumor volume trend agreement between segmentation volumes and the expert bi-dimensional measurements was high (HD-GLIO: 81.1%, BraTumIA: 79.7%). BraTumIA achieved the closest match to the expert TTP when using the recommended RANO progression threshold. HD-GLIO-derived tumor volumes reached the highest correlation between TTP and OS (0.55). Both tools failed to produce an accurate lesion count across time. Manual false-positive removal and restricting the analysis to a maximum number of measurable lesions had no beneficial effect. Expert supervision and manual corrections are still necessary when applying the tested automated segmentation tools for automated response assessment. The longitudinal consistency of current segmentation tools needs further improvement, and validation of volumetric and bi-dimensional progression thresholds in multi-center studies is required to move toward volumetry-based response assessment.
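The bi-dimensional RANO measurement compared against volumetry above can be sketched in a few lines. This is a simplified illustration only: a real RANO read also weighs non-measurable lesions, new lesions, clinical status, and steroid dose, and the function names here are hypothetical. The 25% threshold refers to the increase in the sum of products of perpendicular diameters relative to the reference (nadir or baseline) scan.

```python
def sum_of_products(lesions):
    """Sum of products of perpendicular diameters (cm^2) over measurable lesions.

    `lesions` is a list of (d1, d2) tuples: the two largest perpendicular
    diameters of each enhancing, measurable lesion.
    """
    return sum(d1 * d2 for d1, d2 in lesions)

def rano_progression(reference_spd, followup_spd, threshold=0.25):
    """Flag progression when the sum of products of perpendicular diameters
    increases by >= 25% relative to the reference measurement."""
    return followup_spd >= reference_spd * (1.0 + threshold)

# Hypothetical example: one lesion growing from 2.0 x 2.0 cm to 2.3 x 2.2 cm.
baseline = sum_of_products([(2.0, 2.0)])   # 4.0 cm^2
followup = sum_of_products([(2.3, 2.2)])   # 5.06 cm^2
progressed = rano_progression(baseline, followup)  # 5.06 >= 5.0 -> True
```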
Automated liver segmental volume ratio quantification on non-contrast T1-Vibe Dixon liver MRI using deep learning.
PURPOSE
To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline.
METHOD
A dataset of 200 liver MRIs with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for Couinaud liver segments, while portal and hepatic veins were labeled separately. A convolutional neural network was trained on 170 liver MRIs and evaluated on the remaining 30. Liver segmental volumes without liver vessels were retrieved, and LSVR was calculated as the volume of liver segments I-III divided by the volume of liver segments IV-VIII. The LSVR was compared with the expert manual LSVR calculation and with the LSVR calculated on CT scans in 30 patients who underwent CT and MRI within 6 months.
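The LSVR definition above reduces to a ratio of summed segment volumes. A minimal sketch, assuming vessel voxels have already been excluded; the segment volumes below are hypothetical, and segments IVa/IVb are merged into IV for simplicity:

```python
def lsvr(segment_volumes_ml):
    """Liver segmental volume ratio: volume of Couinaud segments I-III
    divided by the volume of segments IV-VIII (liver vessels excluded).

    `segment_volumes_ml` maps segment labels ('I'..'VIII') to volumes in mL.
    """
    left_lateral = {"I", "II", "III"}
    numerator = sum(v for seg, v in segment_volumes_ml.items() if seg in left_lateral)
    denominator = sum(v for seg, v in segment_volumes_ml.items() if seg not in left_lateral)
    return numerator / denominator

# Hypothetical segment volumes in mL.
volumes = {"I": 30, "II": 80, "III": 90,
           "IV": 150, "V": 200, "VI": 180, "VII": 220, "VIII": 250}
ratio = lsvr(volumes)  # (30+80+90) / (150+200+180+220+250) = 0.2
```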
RESULTS
The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The calculated mean LSVR on liver MRIs unseen by the model was 0.32 ± 0.14, compared with a manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. A comparable LSVR of 0.35 ± 0.14, with an MAE of 0.04, resulted from the LSVR retrieved from the CT scans. The automated LSVR showed a significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and the CT LSVR (Spearman r = 0.95, p < 0.001).
CONCLUSIONS
A convolutional neural network allowed for accurate automated liver segmental volume quantification and calculation of the LSVR based on a non-contrast T1-vibe Dixon sequence.
Convolutional neural network for automated segmentation of the liver and its vessels on non-contrast T1 vibe Dixon acquisitions
We evaluated the effectiveness of automated segmentation of the liver and its vessels with a convolutional neural network on non-contrast T1 vibe Dixon acquisitions. A dataset of non-contrast T1 vibe Dixon liver magnetic resonance images was labelled slice-by-slice for the outer liver border, portal veins, and hepatic veins by an expert. A 3D U-Net convolutional neural network was trained with different combinations of Dixon in-phase, opposed-phase, water, and fat reconstructions. The neural network trained with the single-modal in-phase reconstructions achieved a high performance for liver parenchyma (Dice 0.936 ± 0.02), portal vein (0.634 ± 0.09), and hepatic vein (0.532 ± 0.12) segmentation. No benefit of using multi-modal input, combining in-phase, opposed-phase, fat, and water reconstructions, was observed (p = 1.0 for all experiments). Accuracy for differentiation between portal and hepatic veins was 99% for portal veins and 97% for hepatic veins in the central region, and slightly lower in the peripheral region (91% for portal veins, 80% for hepatic veins). In conclusion, deep learning-based automated segmentation of the liver and its vessels on non-contrast T1 vibe Dixon was highly effective. The single-modal in-phase input achieved the best performance in segmentation and in differentiation between portal and hepatic veins.
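The Dice scores reported in these segmentation studies measure volumetric overlap between a predicted and a reference mask. A minimal NumPy sketch of the metric (the edge-case convention for two empty masks is an assumption):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|), ranging from 0 to 1."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:          # both masks empty: treated here as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / denom

# Toy example: 3 of the foreground voxels agree, 4 foreground voxels per mask.
pred = np.array([1, 1, 1, 1, 0, 0])
ref = np.array([1, 1, 1, 0, 1, 0])
score = dice_coefficient(pred, ref)  # 2*3 / (4+4) = 0.75
```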
Advanced Machine Learning Technologies for Robust Longitudinal Radiomics and Response Assessment in Glioblastoma Multiforme
Glioblastoma multiforme is the most frequent and aggressive primary brain tumor in humans. Because of its fast growth and infiltrative nature, glioblastoma patients have a median survival of only 15 months. The fast disease progression and low overall survival time make close disease monitoring necessary. Currently, a patient’s response to treatment is assessed based on Magnetic Resonance Imaging (MRI), acquired approximately every three months. Because manually segmenting the tumor requires prohibitive effort, two-dimensional surrogate measurements of the tumor burden are currently used to evaluate treatment response.
Advances in radiomics, in conjunction with machine learning, allow the extraction of information from medical images beyond visual assessment and the analysis of subtle changes. However, these radiomic features often lack robustness in multi-center settings with different MRI scanner vendors, models, and acquisition protocols.
This thesis investigates advanced machine learning techniques and radiomics on magnetic resonance imaging for overall survival analysis, longitudinal volumetry, and disease progression biomarkers.
We first present data-driven insights from our single-center glioblastoma patient population, followed by studies on deep learning and machine learning approaches to overall survival prediction from pre-operative MRI. The challenge of finding robust radiomic features is addressed by artificially perturbing single-center data to minimize the loss in machine learning performance when models are transferred to multi-center data.
We then evaluate the applicability of automated tumor volumetry for longitudinal response assessment and present a first study on evaluating and learning radiomic disease progression biomarkers.
Our results show that the performance drop on multi-center data can be effectively reduced with tailored robustness testing. Features showed a high sensitivity to histogram binning and to other perturbations such as changes in voxel size and slice spacing. Longitudinal volumetry and automated two-dimensional measurements simulating the current practice show a high agreement, but close expert monitoring and safeguards are still needed for response assessment. We further present encouraging results on using radiomic features as progression biomarkers, with the most promising candidates stemming from a deep multi-task neural network.
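The binning sensitivity noted above can be illustrated with a first-order histogram-entropy feature, whose value drifts as the bin count changes even though the underlying image is identical. This is a hedged sketch on synthetic data; the feature, image, and bin counts are illustrative and not the thesis's actual pipeline:

```python
import numpy as np

def histogram_entropy(image, n_bins):
    """First-order Shannon entropy of the intensity histogram, in bits.
    Its value depends directly on the binning choice."""
    hist, _ = np.histogram(image, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Synthetic 'image': the same data, re-binned, yields different feature values.
rng = np.random.default_rng(0)
img = rng.normal(loc=100.0, scale=20.0, size=(64, 64))
values = [histogram_entropy(img, b) for b in (16, 32, 64, 128)]
relative_drift = (max(values) - min(values)) / np.mean(values)  # nonzero drift
```

A robustness test in this spirit would perturb inputs (binning, voxel size, slice spacing), recompute each feature, and discard features whose drift exceeds a tolerance.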
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis, and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, and active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is also a factor considered in longitudinal scans when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that, apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.