Deep Neural Network with l2-norm Unit for Brain Lesions Detection
Automated brain lesion detection is an important and very challenging
clinical diagnostic task because lesions differ in size, shape, contrast,
and location. Deep learning has recently shown promising progress in many
application fields, which motivates us to apply this technology to such an
important problem. In this paper, we propose a novel, end-to-end trainable
approach for brain lesion classification and detection using a deep
Convolutional Neural Network (CNN). To investigate its applicability, we
applied our approach to several brain diseases, including high- and low-grade
glioma tumors, ischemic stroke, and Alzheimer's disease, with brain Magnetic
Resonance Images (MRI) serving as input for the analysis. We propose a new
operating unit that receives features from several projections of a subset of
units in the bottom layer and computes a normalized l2-norm for the next
layer. We evaluated the proposed approach on two different CNN architectures
and a number of popular benchmark datasets. The experimental results
demonstrate the superior ability of the proposed approach.
Comment: Accepted for presentation in ICONIP-201
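The operating unit described above can be illustrated with a minimal sketch. The semantics here are assumptions, not the authors' exact formulation: the unit pools several scalar projections of a subset of bottom-layer units into one normalized l2-norm activation, with the normalization constant chosen so outputs remain comparable across different fan-ins.

```python
import math

def l2_norm_unit(projections, eps=1e-8):
    # Hypothetical sketch: pool several linear projections of a subset
    # of bottom-layer units into a single l2-norm activation.
    norm = math.sqrt(sum(p * p for p in projections))
    # Normalizing by the square root of the fan-in keeps the output
    # scale comparable across units with different numbers of projections.
    return norm / (math.sqrt(len(projections)) + eps)
```

For example, `l2_norm_unit([3.0, 4.0])` yields 5 / sqrt(2), i.e. the Euclidean norm of the projections scaled down by the fan-in.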
Hypothesis Disparity Regularized Mutual Information Maximization
We propose a hypothesis disparity regularized mutual information
maximization~(HDMI) approach to tackle unsupervised hypothesis transfer -- as
an effort towards unifying hypothesis transfer learning (HTL) and unsupervised
domain adaptation (UDA) -- where the knowledge from a source domain is
transferred solely through hypotheses and adapted to the target domain in an
unsupervised manner. In contrast to the prevalent HTL and UDA approaches that
typically use a single hypothesis, HDMI employs multiple hypotheses to leverage
the underlying distributions of the source and target hypotheses. To better
utilize the crucial relationship among different hypotheses -- as opposed to
unconstrained optimization of each hypothesis independently -- while adapting
to the unlabeled target domain through mutual information maximization, HDMI
incorporates a hypothesis disparity regularization that coordinates the target
hypotheses to jointly learn better target representations while preserving more
transferable source knowledge with better-calibrated prediction uncertainty.
HDMI achieves state-of-the-art adaptation performance on benchmark datasets for
UDA in the context of HTL, without the need to access the source data during
the adaptation.
Comment: Accepted to AAAI 202
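A per-sample sketch of the kind of objective the abstract describes, with assumed details: mutual information is approximated as the entropy of the mean prediction minus the mean per-hypothesis entropy, disparity as the mean squared deviation of each hypothesis from the ensemble mean, and `lam` is an assumed trade-off weight. This is an illustration of the general shape of the loss, not the paper's exact formulation.

```python
import math

def entropy(p, eps=1e-12):
    # Shannon entropy of a probability vector (natural log).
    return -sum(pi * math.log(pi + eps) for pi in p)

def hdmi_loss(hyp_probs, lam=1.0):
    # Hypothetical per-sample sketch of an HDMI-style objective.
    # hyp_probs: one class-probability vector per hypothesis.
    k = len(hyp_probs)
    n_classes = len(hyp_probs[0])
    mean_p = [sum(h[c] for h in hyp_probs) / k for c in range(n_classes)]
    # Mutual-information surrogate: entropy of the ensemble-mean
    # prediction minus the mean per-hypothesis entropy.
    mi = entropy(mean_p) - sum(entropy(h) for h in hyp_probs) / k
    # Hypothesis disparity: mean squared deviation from the ensemble mean.
    disparity = sum(
        sum((h[c] - mean_p[c]) ** 2 for c in range(n_classes))
        for h in hyp_probs
    ) / k
    return -mi + lam * disparity  # loss to minimize
```

When all hypotheses agree exactly, both the disparity term and the per-sample MI surrogate vanish; disagreement increases the penalty.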
Adversarial training and dilated convolutions for brain MRI segmentation
Convolutional neural networks (CNNs) have been applied to various automatic
image segmentation tasks in medical image analysis, including brain MRI
segmentation. Generative adversarial networks have recently gained popularity
because of their power in generating images that are difficult to distinguish
from real images.
In this study we use an adversarial training approach to improve CNN-based
brain MRI segmentation. To this end, we include an additional loss function
that motivates the network to generate segmentations that are difficult to
distinguish from manual segmentations. During training, this loss function is
optimised together with the conventional average per-voxel cross entropy loss.
The results show improved segmentation performance using this adversarial
training procedure for segmentation of two different sets of images and using
two different network architectures, both visually and in terms of Dice
coefficients.
Comment: MICCAI 2017 Workshop on Deep Learning in Medical Image Analysis
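The combined training objective can be illustrated as follows; the weighting scheme, the scalar discriminator interface, and the weight `lam` are assumptions for the sketch, not the paper's exact loss:

```python
import math

def adversarial_seg_loss(ce_per_voxel, disc_prob_real, lam=0.1):
    # Hypothetical sketch of the combined objective: average per-voxel
    # cross entropy plus an adversarial term that rewards segmentations
    # the discriminator judges as real (i.e., manual).
    ce = sum(ce_per_voxel) / len(ce_per_voxel)
    # Low when the discriminator is fooled (disc_prob_real near 1).
    adv = -math.log(disc_prob_real + 1e-12)
    return ce + lam * adv
```

The adversarial term grows as the discriminator becomes more confident the segmentation is automatic, pushing the segmentation network toward outputs indistinguishable from manual ones.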
Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease
We propose an automatic method using dilated convolutional neural networks
(CNNs) for segmentation of the myocardium and blood pool in cardiovascular MR
(CMR) of patients with congenital heart disease (CHD).
Ten training and ten test CMR scans cropped to an ROI around the heart were
provided in the MICCAI 2016 HVSMR challenge. A dilated CNN with a receptive
field of 131x131 voxels was trained for myocardium and blood pool segmentation
in axial, sagittal and coronal image slices. Performance was evaluated within
the HVSMR challenge.
Automatic segmentation of the test scans resulted in Dice indices of
0.80±0.06 and 0.93±0.02, average distances to boundaries of
0.96±0.31 and 0.89±0.24 mm, and Hausdorff distances of 6.13±3.76
and 7.07±3.01 mm for the myocardium and blood pool, respectively.
Segmentation took 41.5±14.7 s per scan.
In conclusion, dilated CNNs trained on a small set of CMR images of CHD
patients showing large anatomical variability provide accurate myocardium and
blood pool segmentations.
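The 131x131 receptive field quoted above is consistent with a stack of dilated 3x3 convolutions, whose receptive field grows by (kernel - 1) x dilation per layer. The dilation schedule below is an assumption chosen to reach that size, not necessarily the paper's exact architecture:

```python
def receptive_field(kernel=3, dilations=(1, 1, 2, 4, 8, 16, 32, 1)):
    # Receptive field of a stack of dilated convolutions:
    # each layer adds (kernel - 1) * dilation to the field.
    # The dilation schedule here is a hypothetical example.
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf
```

With this schedule, `receptive_field()` returns 131, matching the stated 131x131 voxel field while using only eight layers, which is the appeal of dilated convolutions over plain stacking.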
Conditional Generation of Medical Images via Disentangled Adversarial Inference
Synthetic medical image generation has a huge potential for improving
healthcare through many applications, from data augmentation for training
machine learning systems to preserving patient privacy. Conditional
Generative Adversarial Networks (cGANs) use a conditioning factor to generate images and
have shown great success in recent years. Intuitively, the information in an
image can be divided into two parts: 1) content which is presented through the
conditioning vector and 2) style which is the undiscovered information missing
from the conditioning vector. Current practices in using cGANs for medical
image generation only use a single variable for image generation (i.e.,
content) and therefore do not provide much flexibility or control over the
generated image. In this work we propose a methodology to learn disentangled
representations of style and content from the image itself, and to use this
information to impose control over the generation process. In this framework,
style is learned in a fully unsupervised manner, while content is learned
through both supervised learning (using the conditioning vector) and
unsupervised learning (with the inference mechanism). We apply two novel
regularization steps to ensure content-style disentanglement. First, we
minimize the shared information between content and style by introducing a
novel application of the gradient reverse layer (GRL); second, we introduce a
self-supervised regularization method to further separate information in the
content and style variables. We show that, in general, models with two latent
variables achieve better performance and give more control over the generated
image. We also show that our proposed model (DRAI) achieves the best
disentanglement score and has the best overall performance.
Comment: Published in Medical Image Analysis
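The gradient reverse layer (GRL) mentioned in the abstract acts as the identity in the forward pass and negates (and optionally scales) gradients in the backward pass, so the encoder learns to remove the information a downstream predictor could extract. A minimal hand-rolled sketch, with `alpha` as an assumed scaling factor and lists standing in for tensors:

```python
class GradientReversal:
    # Hypothetical sketch of a gradient reversal layer (GRL):
    # identity on the forward pass, gradient scaled by -alpha on
    # the backward pass.
    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def forward(self, x):
        return x  # identity: activations pass through unchanged

    def backward(self, grad_output):
        # Reverse (and scale) the incoming gradient.
        return [-self.alpha * g for g in grad_output]
```

In an autograd framework this would be implemented as a custom function with these two rules; placing it between the shared content/style encoders drives the minimized shared-information objective described above.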