
    Dilated Inception U-Net (DIU-Net) for Brain Tumor Segmentation

    Magnetic resonance imaging (MRI) is routinely used for brain tumor diagnosis, treatment planning, and post-treatment surveillance. Recently, various models based on deep neural networks have been proposed for the pixel-level segmentation of tumors in brain MRIs. However, the structural variations, spatial dissimilarities, and intensity inhomogeneity in MRIs make segmentation a challenging task. We propose a new end-to-end brain tumor segmentation architecture based on U-Net that integrates Inception modules and dilated convolutions into its contracting and expanding paths. This allows us to extract local structural as well as global contextual information. We performed segmentation of glioma sub-regions, including tumor core, enhancing tumor, and whole tumor, using the Brain Tumor Segmentation (BraTS) 2018 dataset. Our proposed model performed significantly better than the state-of-the-art U-Net-based model (p < 0.05) for tumor core and whole tumor segmentation.
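
    As a rough illustration of the kind of building block described above, the following is a minimal sketch, assuming PyTorch, of an Inception-style block whose parallel branches use different dilation rates; the branch widths, kernel sizes, and dilation rates are illustrative choices, not the authors' exact DIU-Net configuration.

        import torch
        import torch.nn as nn

        class DilatedInceptionBlock(nn.Module):
            """Parallel convolution branches with different dilation rates,
            concatenated along the channel axis (illustrative configuration)."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                branch_ch = out_ch // 4
                self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
                self.b2 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1, dilation=1)
                self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=2, dilation=2)
                self.b4 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=4, dilation=4)
                self.bn = nn.BatchNorm2d(branch_ch * 4)
                self.act = nn.ReLU(inplace=True)

            def forward(self, x):
                # Wider dilations see more context; the 1x1 branch keeps local detail.
                y = torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)
                return self.act(self.bn(y))

        # Example: one stage of a U-Net-style contracting path
        block = DilatedInceptionBlock(in_ch=64, out_ch=128)
        features = block(torch.randn(1, 64, 96, 96))   # -> shape (1, 128, 96, 96)

    Stacking such blocks in the contracting and expanding paths is one way to combine local structural and global contextual features, as the abstract describes.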

    Inception Modules Enhance Brain Tumor Segmentation.

    Magnetic resonance images of brain tumors are routinely used in neuro-oncology clinics for diagnosis, treatment planning, and post-treatment tumor surveillance. Currently, physicians spend considerable time manually delineating different structures of the brain. Spatial and structural variations, as well as intensity inhomogeneity across images, make the problem of computer-assisted segmentation very challenging. We propose a new image segmentation framework for tumor delineation that benefits from two state-of-the-art machine learning architectures in computer vision, i.e., Inception modules and the U-Net image segmentation architecture. Furthermore, our framework includes two learning regimes, i.e., learning to segment intra-tumoral structures (necrotic and non-enhancing tumor core, peritumoral edema, and enhancing tumor) or learning to segment glioma sub-regions (whole tumor, tumor core, and enhancing tumor). These learning regimes are incorporated into a newly proposed loss function based on the Dice similarity coefficient (DSC). In our experiments, we quantified the impact of introducing the Inception modules in the U-Net architecture, as well as changing the objective function for the learning algorithm from segmenting the intra-tumoral structures to the glioma sub-regions. We found that incorporating Inception modules significantly improved the segmentation performance (p < 0.001) for all glioma sub-regions. Moreover, in architectures with Inception modules, the models trained with the learning objective of segmenting the intra-tumoral structures outperformed the models trained with the objective of segmenting the glioma sub-regions for the whole tumor (p < 0.001). The improved performance is linked to the multiscale features extracted by the newly introduced Inception modules and the modified loss function based on the DSC.
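
    Since the modified objective is based on the Dice similarity coefficient, a minimal sketch of a soft multi-class Dice loss is given below, assuming PyTorch; the smoothing constant and the way classes are grouped into channels (intra-tumoral structures versus glioma sub-regions) are illustrative assumptions, not the paper's exact formulation.

        import torch

        def soft_dice_loss(probs, targets, eps=1e-6):
            """probs, targets: tensors of shape (batch, classes, H, W).
            Returns 1 minus the mean per-class soft Dice coefficient.
            Which structures the class channels encode is what the two
            learning regimes described above would change."""
            dims = (0, 2, 3)                                  # sum over batch and space
            intersection = (probs * targets).sum(dims)
            denom = probs.sum(dims) + targets.sum(dims)
            dsc = (2 * intersection + eps) / (denom + eps)    # per-class Dice
            return 1.0 - dsc.mean()

        # Toy example with 3 class channels (e.g., one per glioma sub-region)
        p = torch.softmax(torch.randn(2, 3, 64, 64), dim=1)
        t = torch.zeros(2, 3, 64, 64)
        t[:, 0] = 1.0                                         # one-hot toy target
        loss = soft_dice_loss(p, t)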

    PremiUm-CNN: Propagating Uncertainty Towards Robust Convolutional Neural Networks

    Deep neural networks (DNNs) have surpassed human-level accuracy in various learning tasks. However, unlike humans, who have a natural cognitive intuition for probabilities, DNNs cannot express their uncertainty in the output decisions. This limits the deployment of DNNs in mission-critical domains, such as warfighter decision-making or medical diagnosis. Bayesian inference provides a principled approach to reason about the model's uncertainty by estimating the posterior distribution of the unknown parameters. The challenge in DNNs remains the multi-layer stages of non-linearities, which make the propagation of high-dimensional distributions mathematically intractable. This paper establishes the theoretical and algorithmic foundations of uncertainty or belief propagation by developing new deep learning models named PremiUm-CNNs (Propagating Uncertainty in Convolutional Neural Networks). We introduce a tensor normal distribution as a prior over convolutional kernels and estimate the variational posterior by maximizing the evidence lower bound (ELBO). We start by deriving the first-order mean-covariance propagation framework. Later, we develop a framework based on the unscented transformation (correct at least up to the second order) that propagates sigma points of the variational distribution through layers of a CNN. The propagated covariance of the predictive distribution captures uncertainty in the output decision. Comprehensive experiments conducted on diverse benchmark datasets demonstrate: 1) superior robustness against noise and adversarial attacks, 2) self-assessment through predictive uncertainty that increases quickly with increasing levels of noise or attacks, and 3) an ability to detect a targeted attack from ambient noise.
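
    To make the sigma-point idea concrete, here is a minimal sketch, assuming NumPy, of the unscented transformation applied to a low-dimensional Gaussian pushed through a nonlinearity; the scaling parameters and the ReLU example are generic textbook choices, not the PremiUm-CNN layer-wise derivation.

        import numpy as np

        def unscented_propagate(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
            """Propagate a Gaussian (mean, cov) through a nonlinearity f by pushing
            2n+1 deterministically chosen sigma points through f and recombining
            them into an output mean and covariance (scaled unscented transform)."""
            n = mean.size
            lam = alpha**2 * (n + kappa) - n
            S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
            sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
            wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
            wc = wm.copy()
            wm[0] = lam / (n + lam)
            wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
            Y = np.array([f(s) for s in sigma])                # transformed points
            mean_out = wm @ Y
            diff = Y - mean_out
            cov_out = (wc[:, None] * diff).T @ diff
            return mean_out, cov_out

        # Example: approximate moments of ReLU(x) for a 2-D Gaussian activation
        m = np.array([0.5, -0.2])
        C = np.array([[0.30, 0.05],
                      [0.05, 0.20]])
        m_out, C_out = unscented_propagate(m, C, lambda x: np.maximum(x, 0.0))

    In the paper's setting, it is the propagated covariance of the final predictive distribution that quantifies uncertainty in the output decision.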

    SUPER-Net: Trustworthy Medical Image Segmentation with Uncertainty Propagation in Encoder-Decoder Networks

    Deep Learning (DL) holds great promise in reshaping the healthcare industry owing to its precision, efficiency, and objectivity. However, the brittleness of DL models to noisy and out-of-distribution inputs is ailing their deployment in the clinic. Most models produce point estimates without further information about model uncertainty or confidence. This paper introduces a new Bayesian DL framework for uncertainty quantification in segmentation neural networks: SUPER-Net: trustworthy medical image Segmentation with Uncertainty Propagation in Encoder-decodeR Networks. SUPER-Net analytically propagates, using Taylor series approximations, the first two moments (mean and covariance) of the posterior distribution of the model parameters across the nonlinear layers. In particular, SUPER-Net simultaneously learns the mean and covariance without expensive post-hoc Monte Carlo sampling or model ensembling. The output consists of two simultaneous maps: the segmented image and its pixelwise uncertainty map, which corresponds to the covariance matrix of the predictive distribution. We conduct an extensive evaluation of SUPER-Net on medical image segmentation of Magnetic Resonance Imaging and Computed Tomography scans under various noisy and adversarial conditions. Our experiments on multiple benchmark datasets demonstrate that SUPER-Net is more robust to noise and adversarial attacks than state-of-the-art segmentation models. Moreover, the uncertainty map of the proposed SUPER-Net associates low confidence (or, equivalently, high uncertainty) with patches in the test input images that are corrupted with noise, artifacts, or adversarial attacks. Perhaps more importantly, the model exhibits the ability to self-assess its segmentation decisions, notably when making erroneous predictions due to noise or adversarial examples.
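
    The core operation, propagating the first two moments through a nonlinear layer with a Taylor series approximation, can be sketched generically as follows, assuming NumPy; this illustrates the first-order linearization idea only, not SUPER-Net's exact encoder-decoder derivation.

        import numpy as np

        def taylor_moments_through_relu(mean, cov):
            """First-order Taylor (linearization) approximation of the moments of
            ReLU(x) for x ~ N(mean, cov): mean_out ~= f(mean) and
            cov_out ~= J cov J^T with J = diag(f'(mean))."""
            mean_out = np.maximum(mean, 0.0)
            grad = (mean > 0.0).astype(float)     # ReLU derivative at the mean
            J = np.diag(grad)
            cov_out = J @ cov @ J.T
            return mean_out, cov_out

        # Example: a 3-unit activation with a dense covariance
        m = np.array([0.8, -0.3, 0.1])
        C = np.array([[0.20, 0.02, 0.00],
                      [0.02, 0.15, 0.01],
                      [0.00, 0.01, 0.10]])
        m_out, C_out = taylor_moments_through_relu(m, C)   # dead units get zero rows/columns

    Applying such a step layer by layer is what allows the mean (the segmentation) and the covariance (the pixelwise uncertainty map) to be produced in a single forward pass, without Monte Carlo sampling or ensembling.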

    Trustworthy Medical Segmentation with Uncertainty Estimation

    Deep Learning (DL) holds great promise in reshaping healthcare systems given its precision, efficiency, and objectivity. However, the brittleness of DL models to noisy and out-of-distribution inputs is ailing their deployment in the clinic. Most systems produce point estimates without further information about model uncertainty or confidence. This paper introduces a new Bayesian deep learning framework for uncertainty quantification in segmentation neural networks, specifically encoder-decoder architectures. The proposed framework uses the first-order Taylor series approximation to propagate and learn the first two moments (mean and covariance) of the distribution of the model parameters, given the training data, by maximizing the evidence lower bound. The output consists of two maps: the segmented image and the uncertainty map of the segmentation. The uncertainty in the segmentation decisions is captured by the covariance matrix of the predictive distribution. We evaluate the proposed framework on medical image segmentation data from Magnetic Resonance Imaging and Computed Tomography scans. Our experiments on multiple benchmark datasets demonstrate that the proposed framework is more robust to noise and adversarial attacks than state-of-the-art segmentation models. Moreover, the uncertainty map of the proposed framework associates low confidence (or, equivalently, high uncertainty) with patches in the test input images that are corrupted with noise, artifacts, or adversarial attacks. Thus, the model can self-assess its segmentation decisions when it makes an erroneous prediction or misses part of the segmentation structures, e.g., a tumor, by presenting higher values in the uncertainty map.
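
    As a small illustration of how the two output maps relate, here is a sketch, assuming NumPy and a per-pixel predictive mean and covariance, of reducing the covariance to a scalar uncertainty map; taking the variance of the winning class is one illustrative choice (the trace of the covariance is another), not necessarily the paper's exact definition.

        import numpy as np

        def segmentation_and_uncertainty(pred_mean, pred_cov):
            """pred_mean: (H, W, C) per-pixel class probabilities (predictive mean).
            pred_cov:  (H, W, C, C) per-pixel predictive covariance.
            Returns the segmentation map (argmax class) and a scalar uncertainty
            map, here the predictive variance of the winning class."""
            seg = pred_mean.argmax(axis=-1)                   # (H, W) class labels
            rows, cols = np.indices(seg.shape)
            uncertainty = pred_cov[rows, cols, seg, seg]      # variance of chosen class
            return seg, uncertainty

        # Toy example: 2 classes on a 4x4 image
        mean = np.random.dirichlet([1.0, 1.0], size=(4, 4))           # (4, 4, 2)
        cov = np.broadcast_to(0.01 * np.eye(2), (4, 4, 2, 2)).copy()  # (4, 4, 2, 2)
        seg, unc = segmentation_and_uncertainty(mean, cov)

    Pixels with a large predictive covariance, e.g., in noisy or corrupted patches, would then show up as high values in the uncertainty map, which is the self-assessment behavior described above.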

    Dynamics of the Drosophila Circadian Clock: Theoretical Anti-Jitter Network and Controlled Chaos

    Background: Electronic clocks exhibit undesirable jitter, or time variations, in periodic signals. The circadian clocks of humans, some animals, and plants consist of oscillating molecular networks with a peak-to-peak time of approximately 24 hours. Clockwork orange (CWO) is a transcriptional repressor of direct target genes of the Drosophila clock. Methodology/Principal Findings: Theory and data from a model of the Drosophila circadian clock support the idea that CWO controls anti-jitter negative circuits that stabilize peak-to-peak time in light-dark (LD) cycles. The orbit is confined to chaotic attractors in both LD and constant darkness and is almost periodic in LD; furthermore, CWO diminishes the Euclidean dimension of the chaotic attractor in LD. Light resets the clock each day by restricting each molecular peak to the proximity of a prescribed time. Conclusions/Significance: The theoretical results suggest that chaos plays a central role in the dynamics of the Drosophila circadian clock and that a single molecule, CWO, may sense jitter and repress it through its negative loops.
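
    To make the notion of jitter concrete, the following sketch, assuming NumPy and SciPy, measures peak-to-peak period variability in a simulated roughly 24-hour oscillation; the sinusoid-plus-noise signal is purely illustrative and is not the Drosophila clock model analyzed in the paper.

        import numpy as np
        from scipy.signal import find_peaks

        # Illustrative oscillation with a ~24 h period plus noise (not the clock model)
        t = np.arange(0.0, 24.0 * 20, 0.1)                    # 20 days, 0.1 h steps
        x = np.sin(2.0 * np.pi * t / 24.0) + 0.1 * np.random.randn(t.size)

        # Peak-to-peak times and their spread: a simple operational measure of jitter
        peaks, _ = find_peaks(x, distance=int(18.0 / 0.1))    # peaks at least ~18 h apart
        periods = np.diff(t[peaks])                           # successive peak-to-peak times
        print(f"mean period {periods.mean():.2f} h, jitter (std) {periods.std():.2f} h")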

    Diagnosing growth in low-grade gliomas with and without longitudinal volume measurements: A retrospective observational study.

    BACKGROUND: Low-grade gliomas cause significant neurological morbidity by brain invasion. There is no universally accepted objective technique available for detection of enlargement of low-grade gliomas in the clinical setting; subjective evaluation by clinicians using visual comparison of longitudinal radiological studies is the gold standard. The aim of this study is to determine whether a computer-assisted diagnosis (CAD) method helps physicians detect earlier growth of low-grade gliomas. METHODS AND FINDINGS: We reviewed 165 patients diagnosed with grade 2 gliomas, seen at the University of Alabama at Birmingham clinics from 1 July 2017 to 14 May 2018. MRI scans were collected during the spring and summer of 2018. Fifty-six gliomas met the inclusion criteria, including 19 oligodendrogliomas, 26 astrocytomas, and 11 mixed gliomas in 30 males and 26 females with a mean age of 48 years and a follow-up range of 150.2 months (difference between the longest and shortest follow-up). None received radiation therapy. We also studied 7 patients with an imaging abnormality without pathological diagnosis, who were clinically stable at the time of retrospective review (14 May 2018). This study compared growth detection by 7 physicians aided by the CAD method with retrospective clinical reports. The tumors of 63 patients (56 + 7) in 627 MRI scans were digitized, including 34 grade 2 gliomas with radiological progression and 22 radiologically stable grade 2 gliomas. The CAD method consisted of tumor segmentation, computing volumes, and pointing to growth by the online abrupt change-of-point method, which considers only past measurements. Independent scientists have evaluated the segmentation method. In 29 of the 34 patients with progression, the median time to growth detection was only 14 months for CAD compared to 44 months for current standard-of-care radiological evaluation (p < 0.001). Using CAD, accurate detection of tumor enlargement was possible with a median of only 57% change in tumor volume, as compared to a median of 174% change in volume necessary to diagnose tumor growth using standard-of-care clinical methods (p < 0.001). In the radiologically stable group, CAD facilitated growth detection in 13 out of 22 patients. CAD did not detect growth in the imaging abnormality group. The main limitation of this study was its retrospective design; nevertheless, the results depict the current state of a gold standard in clinical practice that allowed a significant increase in tumor volumes from baseline before detection. Such large increases in tumor volume would not be permitted in a prospective design. The number of glioma patients (n = 56) is a limitation; however, it is equivalent to the number of patients in phase II clinical trials. CONCLUSIONS: The current practice of visual comparison of longitudinal MRI scans is associated with significant delays in detecting growth of low-grade gliomas. Our findings support the idea that physicians aided by CAD detect growth at significantly smaller volumes than physicians using visual comparison alone. This study does not answer the questions of whether to treat and which treatment modality is optimal. Nonetheless, early growth detection sets the stage for future clinical studies that address these questions and whether early therapeutic interventions prolong survival and improve quality of life.
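
    The CAD pipeline described above flags growth with an online abrupt change-of-point method over longitudinal volume measurements. As a rough illustration only, here is a sketch, assuming NumPy, of a one-sided online CUSUM-style detector on a tumor-volume series; the statistic, the baseline estimate, and the threshold are placeholders, not the method actually used in the study.

        import numpy as np

        def online_growth_alarm(volumes, drift=0.0, threshold=3.0):
            """One-sided CUSUM over relative volume changes, using only past
            measurements at each step (placeholder statistic, not the study's method).
            Returns the index of the first scan that raises a growth alarm, or None."""
            volumes = np.asarray(volumes, dtype=float)
            rel = np.diff(volumes) / volumes[:-1]          # relative change between scans
            scale = max(np.std(rel[:3]), 1e-6)             # crude baseline variability
            s = 0.0
            for i, d in enumerate(rel, start=1):
                s = max(0.0, s + d / scale - drift)        # accumulate upward changes only
                if s > threshold:
                    return i                               # first scan flagged as growth
            return None

        # Toy longitudinal volume series (mL): stable at first, then enlarging
        vols = [12.1, 12.0, 12.3, 12.2, 13.5, 15.0, 17.2]
        alarm_index = online_growth_alarm(vols)            # flags growth at an early scan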