
    Prospects for Theranostics in Neurosurgical Imaging: Empowering Confocal Laser Endomicroscopy Diagnostics via Deep Learning

    Confocal laser endomicroscopy (CLE) is an advanced optical fluorescence imaging technology that has the potential to increase intraoperative precision, extend resection, and tailor surgery for malignant invasive brain tumors because of its subcellular resolution. Despite its promising diagnostic potential, interpreting the gray-tone fluorescence images can be difficult for untrained users. In this review, we provide a detailed description of bioinformatics analysis methods for CLE images that begin to assist the neurosurgeon and pathologist in rapidly connecting on-the-fly intraoperative imaging, pathology, and surgical observation into a conclusive system within the concept of theranostics. We present an overview of deep learning models for automatic detection of diagnostic CLE images and discuss how various training regimes and ensemble modeling affect the power of deep-learning predictive models. Two major approaches reviewed in this paper include models that automatically classify CLE images into diagnostic/nondiagnostic, glioma/nonglioma, and tumor/injury/normal categories, and models that localize histological features on CLE images using weakly supervised methods. We also briefly review advances in the deep learning approaches used for CLE image analysis in other organs. Significant advances in the speed and precision of automated diagnostic frame selection would augment the diagnostic potential of CLE, improve the operative workflow, and facilitate integration into brain tumor surgery. Such technology and bioinformatics analytics lend themselves to improved precision, personalization, and theranostics in brain tumor treatment.
    Comment: See the final version published in Frontiers in Oncology here: https://www.frontiersin.org/articles/10.3389/fonc.2018.00240/ful
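    The ensemble idea discussed in this review can be illustrated with a minimal sketch: several independently trained classifiers each assign a "diagnostic" probability to a CLE frame, and the averaged probability decides the label. The model outputs and threshold below are invented for illustration, not values from the paper.

```python
# Hypothetical ensemble of per-frame "diagnostic" probabilities from several
# independently trained CLE classifiers; all numbers are illustrative.

def ensemble_diagnostic(frame_probs, threshold=0.5):
    """Average the probability each model assigns to a CLE frame being
    diagnostic, and flag the frame if the mean exceeds the threshold."""
    mean_prob = sum(frame_probs) / len(frame_probs)
    return mean_prob, mean_prob >= threshold

# Three hypothetical model outputs for a single CLE frame:
prob, is_diagnostic = ensemble_diagnostic([0.81, 0.64, 0.72])
```

    Averaging probabilities (rather than hard votes) preserves each model's confidence, which is one reason ensembles tend to be more robust than any single model.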

    Radiomics analyses for outcome prediction in patients with locally advanced rectal cancer and glioblastoma multiforme using multimodal imaging data

    Personalized treatment strategies for oncological patient management can improve outcomes in patient populations with heterogeneous treatment response. Implementing such a concept requires the identification of biomarkers that can precisely predict treatment outcome. In this thesis, we develop and validate biomarkers from multimodal imaging data for predicting post-treatment outcome in patients with locally advanced rectal cancer (LARC) and in patients with newly diagnosed glioblastoma multiforme (GBM), using conventional feature-based radiomics and deep-learning (DL)-based radiomics. For LARC patients, we identify promising radiomics signatures combining computed tomography (CT) and T2-weighted (T2-w) magnetic resonance imaging (MRI) with clinical parameters to predict tumour response to neoadjuvant chemoradiotherapy (nCRT). Further, analyses of externally available radiomics models for LARC reveal a lack of reproducibility and the need for standardization of the radiomics process. For patients with GBM, we use postoperative [11C]-methionine positron emission tomography (MET-PET) and gadolinium-enhanced T1-w MRI to detect residual tumour status and to prognosticate time-to-recurrence (TTR) and overall survival (OS). We show that DL models built on MET-PET have improved diagnostic and prognostic value compared to MRI.
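    A radiomics signature of the kind described above is, at its simplest, a weighted combination of features from several sources. The sketch below is not the thesis pipeline: the CT, MRI, and clinical feature values and the weights are all invented assumptions, shown only to make the "multimodal signature" idea concrete.

```python
# Minimal sketch of a multimodal radiomics signature: standardized features
# from CT, T2-w MRI, and clinical parameters are fused into one linear risk
# score. All feature values and weights are hypothetical.
import numpy as np

def signature_score(ct, mri, clinical, weights):
    """Weighted sum of standardized multimodal features."""
    features = np.concatenate([ct, mri, clinical])
    return float(features @ weights)

ct = np.array([0.4, -1.2])      # e.g. CT texture features (standardized)
mri = np.array([0.9])           # e.g. T2-w MRI intensity feature
clinical = np.array([1.0])      # e.g. standardized age
weights = np.array([0.5, -0.3, 0.8, 0.2])

score = signature_score(ct, mri, clinical, weights)
responder = score > 0.0         # dichotomized response prediction
```

    In practice the weights would be fitted (e.g. by a regularized regression) and the cut-off chosen on a validation cohort, which is also where the reproducibility issues mentioned above arise.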

    Impact of random outliers in auto-segmented targets on radiotherapy treatment plans for glioblastoma.

    AIMS To save time and obtain more consistent contours, fully automatic segmentation of targets and organs at risk (OAR) is a valuable asset in radiotherapy. Though current deep learning (DL)-based models are on par with manual contouring, they are not perfect, and typical errors such as false positives occur frequently and unpredictably. While it is possible to solve this for OARs, it is far from straightforward for target structures. To tackle this problem, we analyzed in this study the occurrence and the possible dose effects of automated delineation outliers. METHODS First, a set of controlled experiments on synthetically generated outliers on the CT of a glioblastoma (GBM) patient was performed. We analyzed the dosimetric impact of outliers with different locations, shapes, absolute sizes, and sizes relative to the main target, resulting in 61 simulated scenarios. Second, multiple segmentation models were trained using a U-Net architecture on 80 training sets consisting of GBM cases with annotated gross tumor volume (GTV) and edema structures. On 20 test cases, 5 different trained models and a majority voting method were used to predict the GTV and edema. The number of outliers in the predictions was determined, as well as their size and distance from the actual target. RESULTS We found that plans containing outliers result in an increased dose to healthy brain tissue. The extent of the dose effect depends on the relative size, the location, and the distance to the main targets and involved OARs. Generally, the larger the absolute outlier volume and the greater its distance to the target, the higher the potential dose effect. For 120 predicted GTV and edema structures, we found 1887 outliers. After construction of the planning treatment volume (PTV), 137 outliers remained, with a mean distance to the target of 38.5 ± 5.0 mm and a mean size of 1010.8 ± 95.6 mm3. We also found that majority voting of DL results is capable of reducing outliers.
CONCLUSIONS This study shows that there is a severe risk of false-positive outliers in current DL predictions of target structures. Additionally, these errors have an evident detrimental impact on the dose and could therefore affect treatment outcome.
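    The two post-processing ideas in this study, majority voting over several models' predictions and discarding small disconnected components, can be sketched on tiny synthetic masks. The 2D arrays and the size threshold below are illustrative stand-ins, not real GTV predictions or clinical parameters.

```python
# Sketch: (1) majority voting across model predictions, (2) removing small
# disconnected components that are likely false-positive outliers.
import numpy as np
from scipy import ndimage

def majority_vote(masks):
    """Keep a voxel if more than half of the models predict it."""
    stacked = np.stack(masks)
    return (stacked.sum(axis=0) > len(masks) / 2).astype(np.uint8)

def drop_small_components(mask, min_voxels=4):
    """Remove connected components smaller than min_voxels."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.isin(labels, [i + 1 for i, s in enumerate(sizes) if s >= min_voxels])
    return (mask * keep).astype(np.uint8)

# Three model predictions sharing a 3x3 target; one model adds a spurious voxel.
m = np.zeros((8, 8), np.uint8)
m[1:4, 1:4] = 1
m1, m2, m3 = m.copy(), m.copy(), m.copy()
m1[6, 6] = 1                          # false-positive outlier in one model only

voted = majority_vote([m1, m2, m3])   # the lone outlier is voted away
cleaned = drop_small_components(m1)   # or removed by component size
```

    In a real pipeline the distance of each component to the main target would also be checked, since the study found that dose impact grows with both outlier size and distance.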

    Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

    Deep learning approaches such as convolutional neural networks have consistently outperformed previous methods on challenging tasks such as dense semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database. EMMA can be seen as an unbiased, generic deep learning model which is shown to yield excellent performance, winning first place in the BRATS 2017 competition among more than 50 participating teams.
    Comment: The method won the 1st place in the Brain Tumour Segmentation (BRATS) 2017 competition (segmentation task
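    A minimal sketch of the aggregation step: per-class probability maps from architecturally different models are averaged, and the argmax over classes gives the final label map. The hard-coded probability maps below are invented stand-ins for real model outputs.

```python
# Averaging class-probability maps from heterogeneous models, then argmax.
import numpy as np

def emma_fuse(prob_maps):
    """prob_maps: list of arrays of shape (classes, H, W), each summing to 1
    per voxel. Averaging treats each model as an equally weighted vote."""
    fused = np.mean(np.stack(prob_maps), axis=0)
    return fused.argmax(axis=0)               # final label map

# Two hypothetical models disagreeing on one voxel of a 1x2 image
# (rows = classes: background, tumour; columns = voxels):
a = np.array([[[0.9, 0.2]], [[0.1, 0.8]]])    # model A
b = np.array([[[0.8, 0.6]], [[0.2, 0.4]]])    # model B
labels = emma_fuse([a, b])
```

    Averaging in probability space rather than taking hard per-model labels lets a confident model outvote an uncertain one, which is part of why ensembling reduces sensitivity to any single configuration.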

    Uncertainty-driven refinement of tumor-core segmentation using 3D-to-2D networks with label uncertainty

    The BraTS dataset contains a mixture of high-grade and low-grade gliomas, which have rather different appearances: previous studies have shown that performance can be improved by separate training on low-grade gliomas (LGGs) and high-grade gliomas (HGGs), but in practice this information is not available at test time to decide which model to use. In contrast to HGGs, LGGs often present no sharp boundary between the tumor core and the surrounding edema, but rather a gradual reduction of tumor-cell density. Using our 3D-to-2D fully convolutional architecture, DeepSCAN, which ranked highly in the 2019 BraTS challenge and was trained with an uncertainty-aware loss, we separate cases into those with a confidently segmented core and those with a vaguely segmented or missing core. Since by assumption every tumor has a core, we reduce the threshold for classification of core tissue in those cases where the core, as segmented by the classifier, is vaguely defined or missing. We then predict survival of high-grade glioma patients using a fusion of linear regression and random forest classification, based on age, number of distinct tumor components, and number of distinct tumor cores. We present results on the validation dataset of the Multimodal Brain Tumor Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on the testing set, where the method achieved 4th place in segmentation, 1st place in uncertainty estimation, and 1st place in survival prediction.
    Comment: Presented (virtually) in the MICCAI BrainLes workshop 2020. Accepted for publication in the BrainLes proceeding
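    The uncertainty-driven refinement step can be sketched as follows: since every tumor is assumed to contain a core, a vaguely segmented or missing core triggers a lower classification threshold instead of an empty prediction. The threshold values and probability arrays below are illustrative, not those used by DeepSCAN.

```python
# Lowering the core-classification threshold when no voxel is confidently core.
import numpy as np

def refine_core(core_prob, confident_thr=0.5, fallback_thr=0.25):
    """Return a boolean core mask; relax the threshold if the core is vague."""
    if (core_prob >= confident_thr).any():    # confidently segmented core
        return core_prob >= confident_thr
    return core_prob >= fallback_thr          # vague/missing: relax threshold

hgg_like = np.array([0.1, 0.7, 0.9, 0.2])    # sharp core: normal threshold
lgg_like = np.array([0.1, 0.3, 0.4, 0.2])    # diffuse core: relaxed threshold
```

    This mirrors the HGG/LGG distinction above: diffuse LGG cores rarely produce high-confidence core probabilities, so the fallback threshold recovers a non-empty core without needing the grade label at test time.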

    'A net for everyone': fully personalized and unsupervised neural networks trained with longitudinal data from a single patient

    With the growing importance of personalized medicine, we trained personalized neural networks to detect tumor progression in longitudinal datasets. The model was evaluated on two datasets with a total of 64 scans from 32 patients diagnosed with glioblastoma multiforme (GBM). Contrast-enhanced T1-weighted sequences of brain magnetic resonance imaging (MRI) were used in this study. For each patient, we trained a dedicated neural network using just two images from different timepoints. Our approach uses a Wasserstein GAN (generative adversarial network), an unsupervised network architecture, to map the differences between the two images. Using this map, the change in tumor volume can be evaluated. Owing to the combination of data augmentation and the network architecture, co-registration of the two images is not needed. Furthermore, we do not rely on any additional training data, (manual) annotations, or pre-trained neural networks. The model achieved an AUC score of 0.87 for tumor change. We also introduced a modified RANO criterion, for which an accuracy of 66% can be achieved. We show that data from just one patient is sufficient to train deep neural networks to monitor tumor change.
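    Once a difference map between the two timepoints is available, the change in tumor volume can be read off by thresholding, as in this toy sketch. The map, threshold, and voxel size are invented; in the paper, the Wasserstein GAN produces the map itself.

```python
# Toy evaluation of a longitudinal difference map: positive values indicate
# growth, negative values shrinkage; thresholding gives a signed volume change.
import numpy as np

def volume_change_mm3(diff_map, voxel_mm3=1.0, thr=0.5):
    """Signed volume change: growth voxels minus shrinkage voxels."""
    growth = (diff_map > thr).sum()
    shrinkage = (diff_map < -thr).sum()
    return (growth - shrinkage) * voxel_mm3

diff = np.array([[0.9, 0.1], [-0.8, 0.7]])   # hypothetical difference map
change = volume_change_mm3(diff, voxel_mm3=2.0)
```

    A classification of "progression vs. no progression" can then be made by comparing the signed change against a clinically motivated cut-off, which is where a RANO-style criterion would enter.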