Receptive field atlas and related CNN models
In this paper we demonstrate the potential of the cellular nonlinear/neural network paradigm (CNN) and of the analogic cellular computer architecture (the CNN Universal Machine, CNN-UM) in modeling different parts and aspects of the nervous system. Living sensory systems and the CNN share many structural features: local interconnections ("receptive field architecture"), nonlinear and delayed synapses for processing tasks, the possibility of feedback, and the combined advantages of analog and logic signal-processing modes. The results of more than ten years of cooperative work by many engineers and neurobiologists have been collected in an atlas; what we present here is a selection from these studies emphasizing the flexibility of CNN computing across the visual, tactile, and auditory modalities.
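The local "receptive field" interconnection described above can be sketched as a discrete-time cellular-network update in which every cell integrates only its 3x3 neighborhood. This is an illustrative toy, not the paper's CNN-UM model; the template values, tanh nonlinearity, and zero boundary condition are assumptions:

```python
import numpy as np

def cnn_step(state, template, bias):
    """One synchronous update of a cellular network on a 2D grid.

    Each cell sees only its 3x3 neighborhood (its "receptive field"),
    weighted by a feedback template plus a bias, through a saturating
    nonlinearity.
    """
    h, w = state.shape
    padded = np.pad(state, 1)               # zero boundary condition
    new_state = np.empty_like(state)
    for i in range(h):
        for j in range(w):
            field = padded[i:i + 3, j:j + 3]
            new_state[i, j] = np.tanh((template * field).sum() + bias)
    return new_state

# A simple averaging template: each cell moves toward its neighborhood mean.
template = np.full((3, 3), 1.0 / 9.0)
state = np.zeros((5, 5))
state[2, 2] = 1.0                           # a single excited cell
next_state = cnn_step(state, template, bias=0.0)
```

Because every cell depends only on its immediate neighbors, the update is fully local, which is what makes the architecture a natural fit for the sensory systems the atlas covers.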
3D Convolutional Neural Networks for Tumor Segmentation using Long-range 2D Context
We present an efficient deep learning approach for the challenging task of
tumor segmentation in multisequence MR images. In recent years, Convolutional
Neural Networks (CNN) have achieved state-of-the-art performances in a large
variety of recognition tasks in medical imaging. Because of the considerable
computational cost of CNNs, large volumes such as MRI are typically processed
by subvolumes, for instance slices (axial, coronal, sagittal) or small 3D
patches. In this paper we introduce a CNN-based model which efficiently
combines the advantages of the short-range 3D context and the long-range 2D
context. To overcome the limitations of specific choices of neural network
architectures, we also propose to merge outputs of several cascaded 2D-3D
models by a voxelwise voting strategy. Furthermore, we propose a network
architecture in which the different MR sequences are processed by separate
subnetworks in order to be more robust to the problem of missing MR sequences.
Finally, a simple and efficient algorithm for training large CNN models is
introduced. We evaluate our method on the public benchmark of the BRATS 2017
challenge on the task of multiclass segmentation of malignant brain tumors. Our
method achieves good performances and produces accurate segmentations with
median Dice scores of 0.918 (whole tumor), 0.883 (tumor core) and 0.854
(enhancing core). Our approach can be naturally applied to various tasks
involving segmentation of lesions or organs.
Comment: Submitted to the journal Computerized Medical Imaging and Graphics
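The voxelwise voting strategy for merging several cascaded models can be sketched as a per-voxel majority vote over their label maps. This is a minimal illustration; the function name and the argmax tie-breaking rule are assumptions, not the paper's exact scheme:

```python
import numpy as np

def voxelwise_vote(label_maps):
    """Merge segmentation label maps from several models by per-voxel majority vote.

    label_maps: list of integer label arrays of identical shape, one per model.
    Ties are broken in favor of the smallest label (argmax behavior).
    """
    stacked = np.stack(label_maps)                 # (n_models, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count the votes for each label at every voxel, then take the winner.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 "segmentations" from three models.
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
c = np.array([[1, 1], [1, 2]])
merged = voxelwise_vote([a, b, c])
```

The same call works unchanged on 3D volumes, since the vote is computed independently at every voxel.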
Brain Tumor Segmentation with Deep Neural Networks
In this paper, we present a fully automatic brain tumor segmentation method
based on Deep Neural Networks (DNNs). The proposed networks are tailored to
glioblastomas (both low and high grade) pictured in MR images. By their very
nature, these tumors can appear anywhere in the brain and have almost any kind
of shape, size, and contrast. These reasons motivate our exploration of a
machine learning solution that exploits a flexible, high capacity DNN while
being extremely efficient. Here, we give a description of different model
choices that we've found to be necessary for obtaining competitive performance.
We explore in particular different architectures based on Convolutional Neural
Networks (CNN), i.e. DNNs specifically adapted to image data.
We present a novel CNN architecture which differs from those traditionally
used in computer vision. Our CNN exploits both local features as well as more
global contextual features simultaneously. Also, different from most
traditional uses of CNNs, our networks use a final layer that is a
convolutional implementation of a fully connected layer, which allows a 40-fold
speed-up. We also describe a two-phase training procedure that allows us to
tackle difficulties related to the imbalance of tumor labels. Finally, we
explore a cascade architecture in which the output of a basic CNN is treated as
an additional source of information for a subsequent CNN. Results reported on
the 2013 BRATS test dataset reveal that our architecture improves over the
currently published state of the art while being over 30 times faster.
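The convolutional implementation of a fully connected layer rests on a simple equivalence: an FC layer trained on k x k patches is a convolution with a k x k kernel, so the same weights can slide over a whole image in one pass instead of re-running the network patch by patch. A minimal numpy sketch of that equivalence (names and sizes are illustrative, not the paper's network):

```python
import numpy as np

def fc_on_patch(patch, weights):
    """Fully connected layer: flatten a patch and multiply by the weight matrix."""
    return weights @ patch.ravel()

def conv_with_fc_weights(image, weights, k):
    """The same weights applied convolutionally: one output vector per k x k window."""
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1, weights.shape[0]))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            out[i, j] = weights @ image[i:i + k, j:j + k].ravel()
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))          # a "whole image" to segment
weights = rng.standard_normal((5, 3 * 3))    # 5 classes, trained on 3x3 patches

dense_output = conv_with_fc_weights(image, weights, 3)
# dense_output[i, j] equals the FC layer run on the patch at (i, j),
# but every window is produced in a single pass over the image.
```

The speed-up comes from the fact that overlapping windows share computation in a real convolution implementation, whereas patch-by-patch inference recomputes everything for each window.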
Automatic calcium scoring in low-dose chest CT using deep neural networks with dilated convolutions
Heavy smokers undergoing screening with low-dose chest CT are affected by
cardiovascular disease as much as by lung cancer. Low-dose chest CT scans
acquired in screening enable quantification of atherosclerotic calcifications
and thus enable identification of subjects at increased cardiovascular risk.
This paper presents a method for automatic detection of coronary artery,
thoracic aorta and cardiac valve calcifications in low-dose chest CT using two
consecutive convolutional neural networks. The first network identifies and
labels potential calcifications according to their anatomical location and the
second network identifies true calcifications among the detected candidates.
This method was trained and evaluated on a set of 1744 CT scans from the
National Lung Screening Trial. To determine whether any reconstruction or only
images reconstructed with soft tissue filters can be used for calcification
detection, we evaluated the method on soft and medium/sharp filter
reconstructions separately. On soft filter reconstructions, the method achieved
F1 scores of 0.89, 0.89, 0.67, and 0.55 for coronary artery, thoracic aorta,
aortic valve and mitral valve calcifications, respectively. On sharp filter
reconstructions, the F1 scores were 0.84, 0.81, 0.64, and 0.66, respectively.
Linearly weighted kappa coefficients for risk category assignment based on per
subject coronary artery calcium were 0.91 and 0.90 for soft and sharp filter
reconstructions, respectively. These results demonstrate that the presented
method enables reliable automatic cardiovascular risk assessment in all
low-dose chest CT scans acquired for lung cancer screening.
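The two-network cascade described above, a first model that proposes and labels candidate calcifications and a second that keeps only the true ones, can be sketched generically. The function names, toy models, and the 0.5 threshold are assumptions for illustration, not the paper's networks:

```python
def cascade(candidates, label_model, verify_model, threshold=0.5):
    """Two-stage cascade: label potential findings, then keep confirmed ones.

    Stage 1 assigns an anatomical label to each candidate (or None to
    discard it); stage 2 scores the remaining candidates and keeps those
    at or above the decision threshold.
    """
    kept = []
    for patch in candidates:
        label = label_model(patch)            # stage 1: potential calcification?
        if label is None:
            continue
        if verify_model(patch) >= threshold:  # stage 2: true calcification?
            kept.append((patch, label))
    return kept

# Toy stand-ins for the two networks, operating on scalar "patches".
label_model = lambda p: "coronary" if p > 1 else None
verify_model = lambda p: p / 10
findings = cascade([1, 2, 3, 10], label_model, verify_model)
```

Splitting detection and verification this way lets the first, cheaper model run over the whole scan while the second concentrates capacity on the much smaller candidate set.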
Layer-Wise Relevance Propagation for Explaining Deep Neural Network Decisions in MRI-Based Alzheimer's Disease Classification
Deep neural networks have led to state-of-the-art results in many medical imaging tasks including Alzheimer's disease (AD) detection based on structural magnetic resonance imaging (MRI) data. However, the network decisions are often perceived as being highly non-transparent, making it difficult to apply these algorithms in clinical routine. In this study, we propose using layer-wise relevance propagation (LRP) to visualize convolutional neural network decisions for AD based on MRI data. Similarly to other visualization methods, LRP produces a heatmap in the input space indicating the importance/relevance of each voxel contributing to the final classification outcome. In contrast to susceptibility maps produced by guided backpropagation ("Which change in voxels would change the outcome most?"), the LRP method is able to directly highlight positive contributions to the network classification in the input space. In particular, we show that (1) the LRP method is very specific for individuals ("Why does this person have AD?") with high inter-patient variability, (2) there is very little relevance for AD in healthy controls and (3) areas that exhibit a lot of relevance correlate well with what is known from literature. To quantify the latter, we compute size-corrected metrics of the summed relevance per brain area, e.g., relevance density or relevance gain. Although these metrics produce very individual "fingerprints" of relevance patterns for AD patients, a lot of importance is put on areas in the temporal lobe including the hippocampus. After discussing several limitations such as sensitivity toward the underlying model and computation parameters, we conclude that LRP might have a high potential to assist clinicians in explaining neural network decisions for diagnosing AD (and potentially other diseases) based on structural MRI data.
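For a single linear layer, the core of LRP can be written down in a few lines: relevance flowing into an output unit is redistributed to the inputs in proportion to their contributions, with a small epsilon stabilizing the denominator. This is a hedged sketch of the epsilon-rule for one layer only, not the full CNN pipeline used in the paper:

```python
import numpy as np

def lrp_linear(x, w, b, relevance, eps=1e-6):
    """Epsilon-rule LRP for one linear layer y = x @ w + b.

    Relevance of each output unit is redistributed to the inputs in
    proportion to their contributions z_ij = x_i * w_ij; eps stabilizes
    the division when a pre-activation is close to zero.
    """
    z = x[:, None] * w                       # contributions z_ij, shape (in, out)
    zj = z.sum(axis=0) + b                   # output pre-activations
    zj = zj + eps * np.sign(zj)              # epsilon stabilizer
    return (z * (relevance / zj)).sum(axis=1)

x = np.array([1.0, 2.0])
w = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.zeros(2)
r_in = lrp_linear(x, w, b, relevance=np.array([3.0, 6.0]))
# Up to the epsilon term, relevance is conserved: the 9 units of output
# relevance reappear distributed over the inputs.
```

Applying such a rule layer by layer from the classifier output back to the image yields the voxel-level heatmaps the abstract describes.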