
    Training deep segmentation networks on texture-encoded input: application to neuroimaging of the developing neonatal brain

    No full text
    Standard practice for using convolutional neural networks (CNNs) in semantic segmentation tasks assumes that the image intensities are directly used for training and inference. In natural images this is performed using RGB pixel intensities, whereas in medical imaging, e.g. magnetic resonance imaging (MRI), gray-level pixel intensities are typically used. In this work, we explore the idea of encoding the image data as local binary textural maps prior to feeding them to CNNs, and show that accurate segmentation models can be developed using such maps alone, without learning any representations from the images themselves. This questions the common consensus that CNNs recognize objects in images by learning increasingly complex representations of shape, and suggests a more important role for image texture, in line with recent findings on natural images. We illustrate this for the first time on neuroimaging data of the developing neonatal brain in a tissue segmentation task, by analyzing large, publicly available T2-weighted MRI scans (n=558, range of postmenstrual ages at scan: 24.3 - 42.2 weeks) obtained retrospectively from the Developing Human Connectome Project cohort. Rapid changes in visual characteristics that take place during early brain development make it important to establish a clear understanding of the role of visual texture when training CNN models on neuroimaging data of the neonatal brain; this remains a largely understudied but important area of research. From a deep learning perspective, the results suggest that CNNs could simply be capable of learning representations from structured spatial information, and may not necessarily require conventional images as input.
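The "local binary textural map" encoding described above can be illustrated with a classic local binary pattern (LBP): each pixel is replaced by an 8-bit code recording whether each of its eight neighbours is at least as bright as the centre. The sketch below is a minimal NumPy illustration of that idea, not the authors' actual preprocessing pipeline; the function name and border handling are assumptions.

```python
import numpy as np

def lbp_map(img):
    """Encode a 2D image as an 8-bit local binary pattern (LBP) map.

    Each interior pixel is replaced by a code whose bits indicate
    whether each of its 8 neighbours is >= the centre intensity.
    Border pixels, which lack a full neighbourhood, are left as 0.
    """
    img = np.asarray(img, dtype=np.float64)
    codes = np.zeros(img.shape, dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes[1:-1, 1:-1] |= (neighbour >= centre).astype(np.uint8) << bit
    return codes
```

A texture-encoded input for a CNN would then be `lbp_map(slice_2d)` in place of the raw gray-level slice; the code values depend only on local intensity ordering, discarding absolute intensity.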

    A deep learning approach to segmentation of the developing cortex in fetal brain MRI with minimal manual labeling.

    No full text
    We developed an automated system based on deep neural networks for fast and sensitive 3D image segmentation of cortical gray matter from fetal brain MRI. The lack of extensive, publicly available annotations presented a key challenge, as large amounts of labeled data are typically required for training sensitive models with deep learning. To address this, we: (i) generated preliminary tissue labels using the Draw-EM algorithm, which uses Expectation-Maximization and was originally designed for tissue segmentation in the neonatal domain; and (ii) employed a human-in-the-loop approach, whereby an expert fetal imaging annotator assessed and refined the performance of the model. By using a hybrid approach that combined automatically generated labels with manual refinements by an expert, we amplified the utility of ground truth annotations while greatly reducing their cost (283 slices). The deep learning system was developed, refined, and validated on 249 3D T2-weighted scans obtained from the Developing Human Connectome Project's fetal cohort, acquired at 3T. Analysis of the system showed that it is invariant to gestational age at scan, as it generalized well to a wide age range (21–38 weeks) despite variations in cortical morphology and intensity across the fetal distribution. It was also found to be invariant to intensities in regions surrounding the brain (amniotic fluid), which often present a major obstacle to the processing of neuroimaging data in the fetal domain.

    Differentiating Enhancing Multiple Sclerosis Lesions, Glioblastoma, and Lymphoma with Dynamic Texture Parameters Analysis (DTPA) - a Feasibility Study.

    No full text
    PURPOSE A shared MR-imaging hallmark of glioblastoma (GB), cerebral lymphoma (CL), and demyelinating lesions is gadolinium (Gd) uptake due to blood-brain barrier disruption. Thus, initial diagnosis may be difficult based on conventional Gd-enhanced MRI alone. Here, the added value of a dynamic texture parameter analysis (DTPA) in the differentiation between these three entities is examined. DTPA is an in-house software tool that incorporates the analysis of quantitative texture parameters extracted from dynamic susceptibility contrast enhanced (DSCE) images. METHODS Twelve patients with multiple sclerosis (MS), fifteen patients with GB, and five patients with CL were included. The image analysis method focuses on the DSCE-image time series during bolus passage. Three time intervals were examined: inflow, outflow, and reperfusion. Texture maps were computed. From the DSCE image series, mean, difference, standard deviation, and variance texture parameters were calculated, statistically analyzed, and compared between the pathologies. RESULTS The texture parameters of the original DSCE-image series for mean, standard deviation, and variance showed the most significant differences between pathologies (p-values between <0.00 and 0.05). Further, the texture parameters related to the standard deviation or variance (both associated with tissue heterogeneity) revealed the strongest discrimination between the pathologies. CONCLUSION We conclude that dynamic perfusion texture parameters as assessed by the DTPA method allow discrimination of MS, GB, and CL lesions during the first passage of contrast. DTPA used in combination with classification algorithms has the potential to find the most likely diagnosis given a postulated differential diagnosis.
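The per-interval statistics the abstract names (mean, standard deviation, variance over the inflow, outflow, and reperfusion windows of a DSCE time series) can be sketched as simple temporal reductions per voxel. The function below is an assumption-laden illustration of that idea in NumPy, not the actual DTPA software; the function name, interval representation, and omission of the "difference" parameter are all choices made here for brevity.

```python
import numpy as np

def interval_texture_stats(series, intervals):
    """Compute per-voxel temporal statistics from a dynamic
    contrast-enhanced image series of shape (time, H, W).

    `intervals` maps an interval name (e.g. 'inflow') to a
    (start, stop) frame range; for each interval the temporal
    mean, standard deviation, and variance maps are returned.
    """
    series = np.asarray(series, dtype=np.float64)
    stats = {}
    for name, (start, stop) in intervals.items():
        window = series[start:stop]  # frames within this interval
        stats[name] = {
            "mean": window.mean(axis=0),
            "std": window.std(axis=0),
            "var": window.var(axis=0),
        }
    return stats
```

The resulting maps (e.g. the inflow-interval variance map) are the kind of heterogeneity-sensitive features that would then be compared between pathologies or fed to a classifier.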

    Machine learning studies on major brain diseases: 5-year trends of 2014–2018

    No full text

    Diagnostic value of alternative techniques to gadolinium-based contrast agents in MR neuroimaging—a comprehensive overview

    No full text