Multi-branch Convolutional Neural Network for Multiple Sclerosis Lesion Segmentation
In this paper, we present an automated approach for segmenting multiple
sclerosis (MS) lesions from multi-modal brain magnetic resonance images. Our
method is based on a deep end-to-end 2D convolutional neural network (CNN) for
slice-based segmentation of 3D volumetric data. The proposed CNN includes a
multi-branch downsampling path, which enables the network to encode information
from multiple modalities separately. Multi-scale feature fusion blocks are
proposed to combine feature maps from different modalities at different stages
of the network. Then, multi-scale feature upsampling blocks are introduced to
upsize combined feature maps to leverage information from lesion shape and
location. We trained and tested the proposed model using orthogonal plane
orientations of each 3D modality to exploit the contextual information in all
directions. The proposed pipeline is evaluated on two different datasets: a
private dataset including 37 MS patients and a publicly available dataset known
as the ISBI 2015 longitudinal MS lesion segmentation challenge dataset,
consisting of 14 MS patients. Considering the ISBI challenge, at the time of
submission, our method was amongst the top-performing solutions. On the private
dataset, using the same array of performance metrics as in the ISBI challenge,
the proposed approach shows marked improvements in MS lesion segmentation
compared with other publicly available tools.
Comment: This paper has been accepted for publication in NeuroImage.
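The abstract above mentions training and testing on orthogonal plane orientations of each 3D modality. As a rough illustration of that idea (not the authors' code), the sketch below extracts 2D slices along the three orthogonal axes of a small 3D volume, using plain nested lists in place of a real MRI array; all shapes and values are made up for the example.

```python
def orthogonal_slices(volume):
    """Return lists of 2D slices along each axis of a 3D volume.

    volume: nested list with shape (D, H, W).
    Axial slices keep (H, W), coronal keep (D, W), sagittal keep (D, H).
    """
    D = len(volume)
    H = len(volume[0])
    W = len(volume[0][0])
    axial = [volume[d] for d in range(D)]
    coronal = [[volume[d][h] for d in range(D)] for h in range(H)]
    sagittal = [[[volume[d][h][w] for h in range(H)] for d in range(D)]
                for w in range(W)]
    return axial, coronal, sagittal

# Tiny 2x3x4 toy volume filled with a running index (illustrative only).
vol = [[[d * 12 + h * 4 + w for w in range(4)] for h in range(3)]
       for d in range(2)]
ax, co, sa = orthogonal_slices(vol)
```

A real pipeline would apply the same slicing to every modality and feed each orientation's slices to the 2D network separately.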
Soft Null Hypotheses: A Case Study of Image Enhancement Detection in Brain Lesions
This work is motivated by a study of a population of multiple sclerosis (MS)
patients using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI)
to identify active brain lesions. At each visit, a contrast agent is
administered intravenously to a subject and a series of images is acquired to
reveal the location and activity of MS lesions within the brain. Our goal is to
identify and quantify lesion enhancement location at the subject level and
lesion enhancement patterns at the population level. With this example, we aim
to address the difficult problem of transforming a qualitative scientific null
hypothesis, such as "this voxel does not enhance", to a well-defined and
numerically testable null hypothesis based on existing data. We call the
procedure "soft null hypothesis" testing as opposed to the standard "hard null
hypothesis" testing. This problem is fundamentally different from: 1) testing
when a quantitative null hypothesis is given; 2) clustering using a mixture
distribution; or 3) identifying a reasonable threshold with a parametric null
assumption. We analyze a total of 20 subjects scanned at 63 visits (~30 GB of
data), the largest population of such clinical brain images.
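The abstract does not spell out the numeric procedure, but one generic way to turn a qualitative null such as "this voxel does not enhance" into a testable one is to build an empirical null distribution from reference voxels assumed non-enhancing and score each candidate voxel against it. The sketch below shows that pattern with hypothetical scores; it is an illustration, not the paper's actual method.

```python
def empirical_p_value(candidate, null_scores):
    """One-sided p-value: fraction of null scores >= the candidate.

    The +1 in numerator and denominator avoids a p-value of exactly 0
    when the candidate exceeds every null score.
    """
    exceed = sum(1 for s in null_scores if s >= candidate)
    return (exceed + 1) / (len(null_scores) + 1)

# Enhancement scores from voxels assumed not to enhance (made-up data).
null_scores = [0.1, 0.2, 0.15, 0.05, 0.3, 0.25, 0.12, 0.18, 0.22, 0.08]
p = empirical_p_value(0.9, null_scores)  # clearly enhancing: small p
```

Voxels with small p-values under the empirical null would then be flagged as enhancing, with multiple-comparison correction applied across the brain.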
Deep Neural Network with l2-norm Unit for Brain Lesions Detection
Automated brain lesions detection is an important and very challenging
clinical diagnostic task because the lesions have different sizes, shapes,
contrasts, and locations. Deep learning has recently shown promising progress
in many application fields, which motivates us to apply this technology to
such an important problem. In this paper, we propose a novel, end-to-end
trainable approach for brain lesion classification and detection using a deep
convolutional neural network (CNN). To investigate its applicability,
we applied our approach to several brain diseases, including high- and
low-grade glioma tumors, ischemic stroke, and Alzheimer's disease, for which
brain magnetic resonance images (MRI) served as the input for the analysis. We
propose a new operating unit which receives features from several projections
of a subset of units in the bottom layer and computes a normalized l2-norm for
the next layer. We evaluated the proposed approach on two different CNN
architectures and a number of popular benchmark datasets. The experimental
results demonstrate the superior ability of the proposed approach.
Comment: Accepted for presentation in ICONIP-201
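As a rough sketch of the operating unit described above: it takes several projections of lower-layer units and emits a normalized l2-norm. The exact normalization is not given in the abstract, so dividing by the number of projections below is an assumption made for illustration.

```python
import math

def l2_norm_unit(projections):
    """Compute a normalized l2-norm over projection outputs.

    projections: list of real-valued projections of lower-layer units.
    Normalizing by len(projections) is an illustrative choice, not
    necessarily the paper's.
    """
    return math.sqrt(sum(x * x for x in projections) / len(projections))

out = l2_norm_unit([3.0, 4.0])  # sqrt((9 + 16) / 2) = sqrt(12.5)
```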
Simultaneous lesion and neuroanatomy segmentation in Multiple Sclerosis using deep neural networks
Segmentation of both white matter lesions and deep grey matter structures is
an important task in the quantification of magnetic resonance imaging in
multiple sclerosis. Typically these tasks are performed separately: in this
paper we present a single segmentation solution based on convolutional neural
networks (CNNs) for providing fast, reliable segmentations of multimodal
magnetic resonance images into lesion classes and normal-appearing grey- and
white-matter structures. We show substantial, statistically significant
improvements in both Dice coefficient and in lesion-wise specificity and
sensitivity, compared to previous approaches, and agreement with individual
human raters in the range of human inter-rater variability. The method is
trained on data gathered from a single centre: nonetheless, it performs well on
data from centres, scanners and field-strengths not represented in the training
dataset. A retrospective study found that the classifier successfully
identified lesions missed by the human raters.
Lesion labels were provided by human raters, while weak labels for other
brain structures (including CSF, cortical grey matter, cortical white matter,
cerebellum, amygdala, hippocampus, subcortical GM structures and choroid
plexus) were provided by Freesurfer 5.3. The segmentations of these structures
compared well, not only with Freesurfer 5.3, but also with FSL-First and
Freesurfer 6.0.
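The evaluation above reports Dice coefficients. For reference, a minimal Dice implementation over binary label maps (flattened to plain Python lists here for illustration; real pipelines operate on full 3D arrays):

```python
def dice(pred, truth):
    """Dice coefficient between two equal-length binary masks.

    Returns 1.0 by convention when both masks are empty.
    """
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

score = dice([1, 1, 0, 1], [1, 0, 0, 1])  # 2*2 / (3 + 2) = 0.8
```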
Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.
Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level context with low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the typical location of mTBI. The visual model uses texture features in MRI along with a probabilistic support vector machine to maximize discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesions. The models are tested on a novel dataset based on a rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual textural features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit both clinicians, by speeding diagnosis, and patients, by improving clinical care.
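The abstract above does not specify the fusion rule for the contextual and visual models; one plausible instance, shown as a hedged sketch below, is a naive-Bayes-style product fusion of the two per-voxel lesion probabilities with renormalization.

```python
def fuse(p_context, p_visual):
    """Combine two lesion probabilities for the same voxel.

    Product fusion under an independence assumption: multiply the
    lesion probabilities and the non-lesion probabilities, then
    renormalize. This rule is an illustrative assumption.
    """
    pos = p_context * p_visual
    neg = (1 - p_context) * (1 - p_visual)
    return pos / (pos + neg)

p = fuse(0.8, 0.7)  # both models agree: fused probability rises
```

Note that a non-informative model (probability 0.5) leaves the other model's estimate unchanged under this rule.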
Boosting multiple sclerosis lesion segmentation through attention mechanism
Magnetic resonance imaging is a fundamental tool for diagnosing multiple
sclerosis and monitoring its progression. Although several attempts
have been made to segment multiple sclerosis lesions using artificial
intelligence, fully automated analysis is not yet available. State-of-the-art
methods rely on slight variations in segmentation architectures (e.g. U-Net,
etc.). However, recent research has demonstrated how exploiting temporal-aware
features and attention mechanisms can provide a significant boost to
traditional architectures. This paper proposes a framework that exploits an
augmented U-Net architecture with a convolutional long short-term memory layer
and attention mechanism which is able to segment and quantify multiple
sclerosis lesions detected in magnetic resonance images. Quantitative and
qualitative evaluation on challenging examples demonstrated how the method
outperforms previous state-of-the-art approaches, reporting an overall Dice
score of 89%, while also demonstrating robustness and generalization ability on
previously unseen test samples from a new dedicated dataset that is currently
under construction.
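The attention mechanism mentioned above is not detailed in the abstract; an additive attention gate of the kind commonly used to augment U-Net skip connections is sketched below with scalars standing in for feature maps. All weights and the gating form are illustrative assumptions, not the paper's values.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(skip, gate, w_x=1.0, w_g=1.0, psi=1.0):
    """Rescale a skip-connection feature by an attention coefficient.

    The coefficient alpha is computed additively from the skip feature
    and a coarser gating signal, then squashed to (0, 1).
    """
    alpha = sigmoid(psi * math.tanh(w_x * skip + w_g * gate))
    return alpha * skip, alpha

out, alpha = attention_gate(2.0, 1.0)
```

In a full network, alpha would be a per-voxel map that suppresses irrelevant skip features before they are concatenated in the decoder.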
Uncovering convolutional neural network decisions for diagnosing multiple sclerosis on conventional MRI using layer-wise relevance propagation
Machine learning-based imaging diagnostics has recently reached, or even
surpassed, the level of clinical experts in several clinical domains. However,
classification decisions of a trained machine learning system are typically
non-transparent, a major hindrance for clinical integration, error tracking or
knowledge discovery. In this study, we present a transparent deep learning
framework relying on convolutional neural networks (CNNs) and layer-wise
relevance propagation (LRP) for diagnosing multiple sclerosis (MS). MS is
commonly diagnosed utilizing a combination of clinical presentation and
conventional magnetic resonance imaging (MRI), specifically the occurrence and
presentation of white matter lesions in T2-weighted images. We hypothesized
that using LRP in a naive predictive model would enable us to uncover relevant
image features that a trained CNN uses for decision-making. Since imaging
markers in MS are well-established this would enable us to validate the
respective CNN model. First, we pre-trained a CNN on MRI data from the
Alzheimer's Disease Neuroimaging Initiative (n = 921), afterwards specializing
the CNN to discriminate between MS patients and healthy controls (n = 147).
Using LRP, we then produced a heatmap for each subject in the holdout set
depicting the voxel-wise relevance for a particular classification decision.
The CNN model achieved a balanced accuracy of 87.04% and an area under the
receiver operating characteristic curve of 96.08%. The subsequent LRP
visualization revealed that the CNN model indeed focuses on individual
lesions, but also incorporates additional information such as lesion
location, non-lesional white matter, and gray matter areas such as the
thalamus, which are established conventional and advanced MRI markers in MS.
We conclude that LRP and the proposed framework have the capability to make
diagnostic decisions of…
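To make the LRP idea concrete, the sketch below applies the epsilon rule to a single hand-built linear layer: output relevance is redistributed to inputs in proportion to each input's contribution z_ij = x_i * w_ij. The tiny weights are made up; the point is the propagation rule and its conservation property.

```python
def lrp_epsilon(x, w, relevance_out, eps=1e-9):
    """LRP epsilon rule for one linear layer.

    x: input activations; w: weight matrix w[i][j] (input i -> output j);
    relevance_out: relevance assigned to each output unit.
    Returns the relevance redistributed to each input unit.
    """
    n_in, n_out = len(x), len(w[0])
    z = [[x[i] * w[i][j] for j in range(n_out)] for i in range(n_in)]
    zj = [sum(z[i][j] for i in range(n_in)) for j in range(n_out)]
    return [sum(z[i][j] * relevance_out[j] / (zj[j] + eps)
                for j in range(n_out))
            for i in range(n_in)]

x = [1.0, 2.0]
w = [[0.5, -0.2], [0.1, 0.4]]
r = lrp_epsilon(x, w, relevance_out=[1.0, 1.0])
# Conservation: input relevances sum (approximately) to the output total.
```

Stacking this rule layer by layer from the classifier output back to the image yields the voxel-wise heatmaps described in the abstract.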