Simultaneous lesion and neuroanatomy segmentation in Multiple Sclerosis using deep neural networks
Segmentation of both white matter lesions and deep grey matter structures is
an important task in the quantification of magnetic resonance imaging in
multiple sclerosis. Typically, these tasks are performed separately; in this
paper we present a single segmentation solution based on convolutional neural
networks (CNNs) for providing fast, reliable segmentations of multimodal
magnetic resonance images into lesion classes and normal-appearing grey- and
white-matter structures. We show substantial, statistically significant
improvements both in Dice coefficient and in lesion-wise specificity and
sensitivity, compared to previous approaches, and agreement with individual
human raters in the range of human inter-rater variability. The method is
trained on data gathered from a single centre; nonetheless, it performs well on
data from centres, scanners and field-strengths not represented in the training
dataset. A retrospective study found that the classifier successfully
identified lesions missed by the human raters.
Lesion labels were provided by human raters, while weak labels for other
brain structures (including CSF, cortical grey matter, cortical white matter,
cerebellum, amygdala, hippocampus, subcortical GM structures and choroid
plexus) were provided by FreeSurfer 5.3. The segmentations of these structures
compared well not only with FreeSurfer 5.3 but also with FSL-FIRST and
FreeSurfer 6.0.
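The Dice-coefficient comparisons reported above can be sketched as follows. This is a minimal, illustrative snippet (the function name, label values, and the both-empty convention are our own assumptions), not the paper's evaluation code:

```python
import numpy as np

def dice_per_class(pred, target, labels):
    """Dice coefficient for each label in a multi-class segmentation.

    pred, target: integer label arrays of the same shape.
    Returns a dict mapping each label to its Dice score; by convention
    a label absent from both arrays scores 1.0 (assumption).
    """
    scores = {}
    for c in labels:
        p, t = (pred == c), (target == c)
        denom = p.sum() + t.sum()
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0
    return scores
```

The same per-label masks can be reused to count lesion-wise true/false positives for the sensitivity and specificity comparisons.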
Triplanar 3D-to-2D networks with dense connections and dilated convolutions: application to the KITS 2019 challenge
We describe a method for the segmentation of kidneys and kidney tumors in computed tomography images, developed on the KITS 2019 challenge dataset.
Uncertainty-driven refinement of tumor-core segmentation using 3D-to-2D networks with label uncertainty
The BraTS dataset contains a mixture of high-grade and low-grade gliomas,
which have a rather different appearance: previous studies have shown that
performance can be improved by separate training on low-grade gliomas (LGGs)
and high-grade gliomas (HGGs), but in practice this information is not
available at test time to decide which model to use. By contrast with HGGs,
LGGs often present no sharp boundary between the tumor core and the surrounding
edema, but rather a gradual reduction of tumor-cell density.
Utilizing our 3D-to-2D fully convolutional architecture, DeepSCAN, which
ranked highly in the 2019 BraTS challenge and was trained using an
uncertainty-aware loss, we separate cases into those with a confidently
segmented core, and those with a vaguely segmented or missing core. Since by
assumption every tumor has a core, we reduce the threshold for classification
of core tissue in those cases where the core, as segmented by the classifier,
is vaguely defined or missing.
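The threshold-reduction step can be sketched roughly as below; the specific thresholds and the minimum-volume criterion for a "confidently segmented" core are illustrative assumptions, not the values used by DeepSCAN:

```python
import numpy as np

def refine_core_mask(prob_core, t_default=0.5, t_low=0.2, min_voxels=50):
    """Keep the default-threshold core mask if the core is confidently
    present; otherwise lower the threshold, since every tumor is assumed
    to contain a core.  (All threshold values here are assumptions.)

    prob_core: voxel-wise probability map for the tumor-core class.
    Returns (binary mask, threshold actually used).
    """
    core = prob_core > t_default
    if core.sum() >= min_voxels:
        return core, t_default          # confidently segmented core
    return prob_core > t_low, t_low     # vague or missing: relax threshold
```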
We then predict survival of high-grade glioma patients using a fusion of
linear regression and random forest classification, based on age, number of
distinct tumor components, and number of distinct tumor cores.
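One way such a fusion can be realized is sketched below; the weighting scheme, the one-hot conversion, and the day cut-offs for the three BraTS survival classes (short < 10 months, mid 10-15 months, long > 15 months) are our illustrative assumptions rather than the authors' exact method:

```python
import numpy as np

def days_to_class(days, short=300, long=450):
    """Map a predicted survival time in days to a BraTS survival class:
    0 = short (< 10 months), 1 = mid, 2 = long (> 15 months).
    Cut-off values are assumptions."""
    return 0 if days < short else (1 if days <= long else 2)

def fuse_prediction(reg_days, forest_probs, w=0.5):
    """Blend a regression estimate (converted to a one-hot class vote)
    with per-class probabilities from a classifier (here standing in for
    the random forest), then pick the highest-scoring class."""
    onehot = np.eye(3)[days_to_class(reg_days)]
    return int(np.argmax(w * onehot + (1 - w) * np.asarray(forest_probs)))
```

In practice both models would be fit on the features named above: age, the number of distinct tumor components, and the number of distinct tumor cores.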
We present results on the validation dataset of the Multimodal Brain Tumor
Segmentation Challenge 2020 (segmentation and uncertainty challenge), and on
the testing set, where the method achieved 4th place in Segmentation, 1st place
in uncertainty estimation, and 1st place in survival prediction.
Comment: Presented (virtually) at the MICCAI BrainLes workshop 2020. Accepted
for publication in the BrainLes proceedings.
Identification of morphological fingerprint in perinatal brains using quasi-conformal mapping and contrastive learning
The morphological fingerprint in the brain is capable of identifying the
uniqueness of an individual. However, whether such individual patterns are
present in perinatal brains, and which morphological attributes or cortical
regions better characterize the individual differences of neonates remain
unclear. In this study, we proposed a deep learning framework that projected
three-dimensional spherical meshes of three morphological features (i.e.,
cortical thickness, mean curvature, and sulcal depth) onto two-dimensional
planes through quasi-conformal mapping, and employed ResNet18 and
contrastive learning for individual identification. We used the cross-sectional
structural MRI data of 682 infants, combined with data augmentation, to
train the model, and fine-tuned the parameters on 60 infants who had
longitudinal scans. The model was validated on 30 further infants with
longitudinal scans, achieving Top-1 and Top-5 accuracies of 71.37% and
84.10%, respectively. The sensorimotor and visual cortices were recognized
as the regions contributing most to individual identification. Moreover,
folding morphology demonstrated greater discriminative capability than
cortical thickness and could therefore serve as the morphological fingerprint in
perinatal brains. These findings provided evidence for the emergence of
morphological fingerprints in the brain at the beginning of the third
trimester, which may hold promising implications for understanding the
formation of individual uniqueness in the brain during early development.
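Top-1/Top-5 identification accuracy of this kind is typically computed by matching each subject's embedding from one scan session against all embeddings from another session. A minimal sketch, in which the cosine-similarity matching and the function name are our assumptions rather than the paper's implementation:

```python
import numpy as np

def topk_identification_accuracy(emb_a, emb_b, k=1):
    """Fraction of subjects whose session-A embedding ranks their own
    session-B embedding among its k most similar matches (cosine).

    emb_a, emb_b: (n_subjects, dim) arrays, row i = subject i.
    """
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sim = a @ b.T                        # sim[i, j]: subject i (A) vs j (B)
    ranks = np.argsort(-sim, axis=1)     # best match first
    hits = [i in ranks[i, :k] for i in range(len(emb_a))]
    return float(np.mean(hits))
```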
Applications of Deep Learning Techniques for Automated Multiple Sclerosis Detection Using Magnetic Resonance Imaging: A Review
Multiple Sclerosis (MS) is a brain disease which causes visual, sensory, and motor problems, with a detrimental effect on the functioning of the nervous system. Multiple screening methods have been proposed to diagnose MS; among them, magnetic resonance imaging (MRI) has received considerable attention from physicians. MRI modalities provide physicians with fundamental information about the structure and function of the brain, which is crucial for the rapid diagnosis of MS lesions. Diagnosing MS using MRI is nonetheless time-consuming, tedious, and prone to manual errors. Research on computer-aided diagnosis systems (CADS) based on artificial intelligence (AI) for MS diagnosis involves both conventional machine learning and deep learning (DL) methods. In conventional machine learning, the feature extraction, feature selection, and classification steps are carried out by trial and error; in DL, by contrast, these steps are performed by deep layers whose parameters are learned automatically. This paper provides a complete review of automated MS diagnosis methods based on DL techniques and MRI neuroimaging modalities. First, the steps involved in the various CADS proposed for MS diagnosis using MRI modalities and DL techniques are investigated, and the important preprocessing techniques employed in these works are analyzed. Most of the published papers on MS diagnosis using MRI modalities and DL are then presented. Finally, the most significant challenges and future directions of automated MS diagnosis using MRI modalities and DL techniques are discussed.