Deep Neural Network with l2-norm Unit for Brain Lesions Detection
Automated brain lesion detection is an important and very challenging
clinical diagnostic task because lesions vary in size, shape, contrast,
and location. Deep learning has recently shown promising progress in many
application fields, which motivates us to apply this technology to such an
important problem. In this paper, we propose a novel, end-to-end trainable
approach for brain lesion classification and detection using a deep
Convolutional Neural Network (CNN). To investigate its applicability, we
applied our approach to several brain diseases, including high- and low-grade
glioma, ischemic stroke, and Alzheimer's disease, using brain Magnetic
Resonance Images (MRI) as input for the analysis. We propose a new operating
unit that receives features from several projections of a subset of units in
the bottom layer and computes a normalized l2-norm for the next layer. We
evaluated the proposed approach on two different CNN architectures and a
number of popular benchmark datasets. The experimental results demonstrate
the superior ability of the proposed approach.
Comment: Accepted for presentation in ICONIP-201
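One possible reading of such an l2-norm unit, as a minimal PyTorch sketch: the number of projections and the normalization by the square root of that number are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class L2NormUnit(nn.Module):
    """Toy sketch of an l2-norm unit: several learned projections of a
    subset of bottom-layer units are combined via a normalised l2-norm
    that is passed to the next layer. Projection count and normalisation
    are assumptions made for illustration."""
    def __init__(self, in_features, num_projections=4):
        super().__init__()
        # one learned scalar projection per "view" of the input subset
        self.projections = nn.ModuleList(
            [nn.Linear(in_features, 1, bias=False) for _ in range(num_projections)]
        )
        self.num_projections = num_projections

    def forward(self, x):
        # stack the scalar projections: (batch, num_projections)
        z = torch.cat([p(x) for p in self.projections], dim=1)
        # normalised l2-norm for the next layer: (batch, 1)
        return z.norm(p=2, dim=1, keepdim=True) / self.num_projections ** 0.5
```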
Adversarial training and dilated convolutions for brain MRI segmentation
Convolutional neural networks (CNNs) have been applied to various automatic
image segmentation tasks in medical image analysis, including brain MRI
segmentation. Generative adversarial networks have recently gained popularity
because of their power in generating images that are difficult to distinguish
from real images.
In this study we use an adversarial training approach to improve CNN-based
brain MRI segmentation. To this end, we include an additional loss function
that motivates the network to generate segmentations that are difficult to
distinguish from manual segmentations. During training, this loss function is
optimised together with the conventional average per-voxel cross entropy loss.
The results show improved segmentation performance using this adversarial
training procedure for segmentation of two different sets of images and using
two different network architectures, both visually and in terms of Dice
coefficients.
Comment: MICCAI 2017 Workshop on Deep Learning in Medical Image Analysis
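A hedged sketch of how such a combined objective could look in PyTorch; the discriminator interface, its inputs, and the adversarial weight are assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def segmenter_loss(logits, target, discriminator, image, adv_weight=0.1):
    """Sketch of the combined objective: average per-voxel cross-entropy
    plus an adversarial term rewarding segmentations the discriminator
    cannot tell apart from manual ones. The discriminator signature and
    the weighting factor are assumptions."""
    # conventional average per-voxel cross-entropy
    ce = F.cross_entropy(logits, target)
    # softmax probabilities play the role of the generated segmentation
    probs = F.softmax(logits, dim=1)
    # discriminator scores the (image, segmentation) pair; 1 = looks manual
    d_fake = discriminator(image, probs)
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return ce + adv_weight * adv
```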
Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease
We propose an automatic method using dilated convolutional neural networks
(CNNs) for segmentation of the myocardium and blood pool in cardiovascular MR
(CMR) of patients with congenital heart disease (CHD).
Ten training and ten test CMR scans cropped to an ROI around the heart were
provided in the MICCAI 2016 HVSMR challenge. A dilated CNN with a receptive
field of 131x131 voxels was trained for myocardium and blood pool segmentation
in axial, sagittal and coronal image slices. Performance was evaluated within
the HVSMR challenge.
Automatic segmentation of the test scans resulted in Dice indices of
0.80 ± 0.06 and 0.93 ± 0.02, average distances to boundaries of
0.96 ± 0.31 and 0.89 ± 0.24 mm, and Hausdorff distances of 6.13 ± 3.76
and 7.07 ± 3.01 mm for the myocardium and blood pool, respectively.
Segmentation took 41.5 ± 14.7 s per scan.
In conclusion, dilated CNNs trained on a small set of CMR images of CHD
patients showing large anatomical variability provide accurate myocardium and
blood pool segmentations.
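The 131x131 receptive field follows directly from stacking 3x3 convolutions with increasing dilation. The helper below computes the field for a stack of stride-1 layers; the dilation schedule shown is one plausible choice that reproduces 131, not necessarily the exact architecture used in the paper.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions.
    Each k x k layer with dilation d enlarges the field by (k - 1) * d."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# One dilation schedule that yields the 131 x 131 field mentioned above
# (an illustrative assumption, not necessarily the exact network):
dilations = [1, 1, 2, 4, 8, 16, 32, 1]
print(receptive_field([3] * len(dilations), dilations))  # -> 131
```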
Spectral Graph Convolutions for Population-based Disease Prediction
Exploiting the wealth of imaging and non-imaging information for disease
prediction tasks requires models capable of representing, at the same time,
individual features as well as data associations between subjects from
potentially large populations. Graphs provide a natural framework for such
tasks, yet previous graph-based approaches focus on pairwise similarities
without modelling the subjects' individual characteristics and features. On the
other hand, relying solely on subject-specific imaging feature vectors fails to
model the interaction and similarity between subjects, which can reduce
performance. In this paper, we introduce the novel concept of Graph
Convolutional Networks (GCN) for brain analysis in populations, combining
imaging and non-imaging data. We represent populations as a sparse graph where
its vertices are associated with image-based feature vectors and the edges
encode phenotypic information. This structure was used to train a GCN model on
partially labelled graphs, aiming to infer the classes of unlabelled nodes from
the node features and pairwise associations between subjects. We demonstrate
the potential of the method on the challenging ADNI and ABIDE databases, as a
proof of concept of the benefit from integrating contextual information in
classification tasks. This has a clear impact on the quality of the
predictions, leading to 69.5% accuracy for ABIDE (outperforming the current
state of the art of 66.8%) and 77% for ADNI for prediction of MCI conversion,
significantly outperforming standard linear classifiers where only individual
features are considered.
Comment: International Conference on Medical Image Computing and Computer-Assisted Interventions (MICCAI) 201
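As an illustration of the propagation step on such a population graph, here is a minimal NumPy sketch using the simplified first-order rule; the paper itself uses spectral (Chebyshev polynomial) filters, so this is only meant to convey how node features and phenotypic edges interact.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution propagation step on a population graph:
    nodes are subjects with imaging feature vectors X, edges in A encode
    phenotypic similarity. Simplified first-order rule for illustration."""
    A_hat = A + np.eye(A.shape[0])              # add self-connections
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalisation
    return np.maximum(A_norm @ X @ W, 0)        # propagate + ReLU

# toy usage: 4 subjects, 5 imaging features, 2 output classes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 5)
W = np.random.rand(5, 2)
print(gcn_layer(A, X, W).shape)  # (4, 2)
```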
Tversky loss function for image segmentation using 3D fully convolutional deep networks
Fully convolutional deep neural networks have excellent potential for
fast and accurate image segmentation. One of the main challenges in training
these networks is data imbalance, which is particularly problematic in medical
imaging applications such as lesion segmentation where the number of lesion
voxels is often much lower than the number of non-lesion voxels. Training with
unbalanced data can lead to predictions that are severely biased towards high
precision but low recall (sensitivity), which is undesired especially in
medical applications where false negatives are much less tolerable than false
positives. Several methods have been proposed to deal with this problem
including balanced sampling, two step training, sample re-weighting, and
similarity loss functions. In this paper, we propose a generalized loss
function based on the Tversky index to address the issue of data imbalance and
achieve much better trade-off between precision and recall in training 3D fully
convolutional deep neural networks. Experimental results in multiple sclerosis
lesion segmentation on magnetic resonance images show improved F2 score, Dice
coefficient, and the area under the precision-recall curve in test data. Based
on these results, we suggest the Tversky loss function as a generalized
framework to effectively train deep neural networks.
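For reference, a minimal PyTorch sketch of a binary Tversky loss as described above; the default alpha and beta values shown are common choices and are assumptions here, and setting both to 0.5 recovers the Dice loss.

```python
import torch

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Sketch of a binary Tversky loss. probs: predicted foreground
    probabilities, target: binary ground truth, both (batch, D, H, W).
    alpha weights false positives, beta weights false negatives;
    beta > alpha trades precision for recall."""
    p = probs.reshape(probs.shape[0], -1)
    g = target.reshape(target.shape[0], -1).float()
    tp = (p * g).sum(dim=1)
    fp = (p * (1 - g)).sum(dim=1)
    fn = ((1 - p) * g).sum(dim=1)
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()
```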
Max-Fusion U-Net for Multi-Modal Pathology Segmentation with Attention and Dynamic Resampling
Automatic segmentation of multi-sequence (multi-modal) cardiac MR (CMR)
images plays a significant role in the diagnosis and management of a variety of
cardiac diseases. However, the performance of relevant algorithms is
significantly affected by the proper fusion of the multi-modal information.
Furthermore, particular diseases, such as myocardial infarction, display
irregular shapes on images and occupy small regions at random locations. These
facts make pathology segmentation of multi-modal CMR images a challenging task.
In this paper, we present the Max-Fusion U-Net that achieves improved pathology
segmentation performance given aligned multi-modal images of LGE, T2-weighted,
and bSSFP modalities. Specifically, modality-specific features are extracted by
dedicated encoders. Then they are fused with the pixel-wise maximum operator.
Together with the corresponding encoding features, these representations are
propagated to decoding layers with U-Net skip-connections. Furthermore, a
spatial-attention module is applied in the last decoding layer to encourage the
network to focus on those small semantically meaningful pathological regions
that trigger relatively high responses by the network neurons. We also use a
simple image patch extraction strategy to dynamically resample training
examples with varying spatial and batch sizes. With limited GPU memory, this
strategy reduces the imbalance of classes and forces the model to focus on
regions around the pathology of interest. This further improves segmentation
accuracy and reduces mis-classification of pathology. We evaluate our method
on the Myocardial Pathology Segmentation combining multi-sequence CMR (MyoPS)
dataset, which involves three modalities. Extensive
experiments demonstrate the effectiveness of the proposed model which
outperforms the related baselines.
Comment: 13 pages, 7 figures, conference paper
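The fusion step itself is simple to sketch; below is a hedged PyTorch illustration of pixel-wise maximum fusion over modality-specific feature maps (the shapes and modality names are assumptions for the example).

```python
import torch

def max_fuse(feature_maps):
    """Pixel-wise maximum fusion: modality-specific encoder features of the
    same shape (one tensor per modality, e.g. LGE, T2-weighted, bSSFP) are
    combined by taking the element-wise maximum."""
    return torch.stack(feature_maps, dim=0).max(dim=0).values

# toy usage: three modalities, batch 2, 16 channels, 32x32 feature maps
lge, t2, bssfp = (torch.rand(2, 16, 32, 32) for _ in range(3))
fused = max_fuse([lge, t2, bssfp])
print(fused.shape)  # torch.Size([2, 16, 32, 32])
```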
Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation
We propose a new deep learning method for tumour segmentation when dealing
with missing imaging modalities. Instead of producing one network for each
possible subset of observed modalities or using arithmetic operations to
combine feature maps, our hetero-modal variational 3D encoder-decoder
independently embeds all observed modalities into a shared latent
representation. Missing data and the tumour segmentation can then be generated from
this embedding. In our scenario, the input is a random subset of modalities. We
demonstrate that the optimisation problem can be seen as a mixture sampling. In
addition to this, we introduce a new network architecture building upon both
the 3D U-Net and the Multi-Modal Variational Auto-Encoder (MVAE). Finally, we
evaluate our method on BraTS2018 using subsets of the imaging modalities as
input. Our model outperforms the current state-of-the-art method for dealing
with missing modalities and achieves similar performance to the subset-specific
equivalent networks.
Comment: Accepted at MICCAI 201
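A rough sketch of how observed-modality embeddings might be fused into a shared Gaussian latent, in the product-of-experts spirit of the MVAE that the architecture builds on; the exact fusion rule used in the paper may differ, so treat this purely as an illustration.

```python
import torch

def fuse_observed(mus, logvars, observed):
    """Fuse only the observed modalities into one shared Gaussian latent
    via a product of Gaussian experts (an assumption inspired by the MVAE).
    mus, logvars: (num_modalities, batch, latent_dim);
    observed: boolean mask of shape (num_modalities,)."""
    mask = observed.float().view(-1, 1, 1)
    # a standard-normal prior expert keeps the product well defined even
    # when only a few modalities are present
    precisions = torch.cat([torch.ones_like(mus[:1]),
                            torch.exp(-logvars) * mask], dim=0)
    means = torch.cat([torch.zeros_like(mus[:1]), mus], dim=0)
    fused_var = 1.0 / precisions.sum(dim=0)
    fused_mu = fused_var * (precisions * means).sum(dim=0)
    return fused_mu, fused_var
```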
Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation
Automatic brain tumor segmentation plays an important role in diagnosis,
surgical planning and treatment assessment of brain tumors. Deep convolutional
neural networks (CNNs) have been widely used for this task. Due to the
relatively small data set for training, data augmentation at training time has
been commonly used for better performance of CNNs. Recent works also
demonstrated the usefulness of using augmentation at test time, in addition to
training time, for achieving more robust predictions. We investigate how
test-time augmentation can improve CNNs' performance for brain tumor
segmentation. We used different underpinning network structures and augmented
the image by 3D rotation, flipping, scaling and adding random noise at both
training and test time. Experiments with BraTS 2018 training and validation set
show that test-time augmentation helps to improve the brain tumor segmentation
accuracy and to obtain uncertainty estimates of the segmentation results.
Comment: 12 pages, 3 figures, MICCAI BrainLes 201
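A minimal sketch of flip-only test-time augmentation with a simple uncertainty estimate; the paper also uses 3D rotation, scaling and added noise, and the model interface here is assumed.

```python
import torch

def predict_with_tta(model, volume):
    """Test-time augmentation with flips only: run the network on flipped
    copies of the input, undo each flip on the softmax output and average.
    Flipping is used here because it is exactly invertible.
    volume: (batch, channels, D, H, W); model is assumed to return logits."""
    model.eval()
    axes = [None, 2, 3, 4]   # identity plus a flip along each spatial axis
    preds = []
    with torch.no_grad():
        for ax in axes:
            x = volume if ax is None else torch.flip(volume, dims=[ax])
            p = torch.softmax(model(x), dim=1)
            if ax is not None:
                p = torch.flip(p, dims=[ax])   # map the prediction back
            preds.append(p)
    preds = torch.stack(preds, dim=0)
    # the mean is the augmented prediction; the variance across augmentations
    # can serve as a simple uncertainty estimate
    return preds.mean(dim=0), preds.var(dim=0)
```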