DeepACSON automated segmentation of white matter in 3D electron microscopy
Tracing the entirety of ultrastructures in large three-dimensional electron microscopy (3D-EM) images of brain tissue requires automated segmentation techniques. Current segmentation techniques use deep convolutional neural networks (DCNNs) and rely on high-contrast cellular membranes and high-resolution EM volumes. Segmenting low-resolution, large EM volumes, on the other hand, requires methods that account for the severe membrane discontinuities that are inescapable at such resolutions. Therefore, we developed DeepACSON, which performs DCNN-based semantic segmentation and shape-decomposition-based instance segmentation. DeepACSON instance segmentation exploits the tubularity of myelinated axons to decompose under-segmented myelinated axons into their constituent axons. We applied DeepACSON to ten EM volumes of rats after sham operation or traumatic brain injury, segmenting hundreds of thousands of long-span myelinated axons, thousands of cell nuclei, and millions of mitochondria with excellent evaluation scores. DeepACSON quantified the morphology and spatial aspects of white matter ultrastructures, capturing nanoscopic morphological alterations five months after the injury. With DeepACSON, Abdollahzadeh et al. combine existing deep learning-based methods for semantic segmentation with a novel shape decomposition technique for instance segmentation. The pipeline is used to segment low-resolution 3D-EM datasets, allowing quantification of white matter morphology in large fields of view.
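The pipeline's instance-segmentation step can be illustrated with a minimal sketch: after semantic segmentation produces a binary mask, connected-component labeling yields candidate instances. Note this is only the generic baseline step; DeepACSON's novel contribution, the shape-decomposition pass that splits merged tubular axons, is not reproduced here, and the function name is illustrative.

```python
from collections import deque

def label_components(mask):
    """Label 4-connected foreground components of a 2D binary mask.

    Returns (labels, n_components); labels[i][j] == 0 marks background.
    """
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and labels[i][j] == 0:
                current += 1            # start a new instance
                labels[i][j] = current
                q = deque([(i, j)])
                while q:                # flood-fill via BFS
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```

In the actual pipeline, components whose shape violates the expected tubularity of a myelinated axon would then be flagged as under-segmented and decomposed further.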
Fast ConvNets Using Group-wise Brain Damage
We revisit the idea of brain damage, i.e. the pruning of the coefficients of
a neural network, and suggest how brain damage can be modified and used to
speedup convolutional layers. The approach uses the fact that many efficient
implementations reduce generalized convolutions to matrix multiplications. The
suggested brain damage process prunes the convolutional kernel tensor in a
group-wise fashion by adding group-sparsity regularization to the standard
training process. After such group-wise pruning, convolutions can be reduced to
multiplications of thinned dense matrices, which leads to speedup. In the
comparison on AlexNet, the method achieves very competitive performance.
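The group-wise pruning idea can be sketched as follows. The functions and the hard-threshold criterion are illustrative: the groups are the kernel entries sharing one (input channel, spatial offset) position, which correspond to whole columns of the im2col-lowered matrix, so zeroing a group lets the convolution run as a multiply of thinned dense matrices. In the paper, sparsity over these groups is induced during training by a group-sparsity penalty (the sum of per-group L2 norms) rather than by post-hoc thresholding alone.

```python
import math
import random

def group_norms(kernel):
    """L2 norm of each (in_channel, y, x) group, taken across output channels.

    kernel is a nested list of shape (out_channels, in_channels, kh, kw).
    """
    out_c, in_c = len(kernel), len(kernel[0])
    kh, kw = len(kernel[0][0]), len(kernel[0][0][0])
    norms = {}
    for c in range(in_c):
        for y in range(kh):
            for x in range(kw):
                norms[(c, y, x)] = math.sqrt(
                    sum(kernel[o][c][y][x] ** 2 for o in range(out_c)))
    return norms

def prune_groups(kernel, threshold):
    """Zero every group whose norm is below threshold; return the kept groups.

    Each pruned group removes one column from the im2col matrix multiply.
    """
    kept = []
    for (c, y, x), n in group_norms(kernel).items():
        if n < threshold:
            for o in range(len(kernel)):
                kernel[o][c][y][x] = 0.0   # "brain damage": kill whole group
        else:
            kept.append((c, y, x))
    return kept
```

The speedup comes from the kept-group list: only those columns of the lowered input matrix need to be materialized and multiplied.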
Visual pathways from the perspective of cost functions and multi-task deep neural networks
Vision research has been shaped by the seminal insight that we can understand
the higher-tier visual cortex from the perspective of multiple functional
pathways with different goals. In this paper, we try to give a computational
account of the functional organization of this system by reasoning from the
perspective of multi-task deep neural networks. Machine learning has shown that
tasks become easier to solve when they are decomposed into subtasks with their
own cost function. We hypothesize that the visual system optimizes multiple
cost functions of unrelated tasks and this causes the emergence of a ventral
pathway dedicated to vision for perception, and a dorsal pathway dedicated to
vision for action. To evaluate the functional organization in multi-task deep
neural networks, we propose a method that measures the contribution of a unit
towards each task, applying it to two networks that have been trained on either
two related or two unrelated tasks, using an identical stimulus set. Results
show that the network trained on the unrelated tasks shows a decreasing degree
of feature representation sharing towards higher-tier layers while the network
trained on related tasks uniformly shows high degree of sharing. We conjecture
that the method we propose can be used to analyze the anatomical and functional
organization of the visual system and beyond. We predict that the degree to
which tasks are related is a good descriptor of the degree to which they share
downstream cortical units.
Comment: 16 pages, 5 figures
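One plausible instantiation of a per-unit, per-task contribution measure (the abstract does not specify the paper's exact formula, so this is a hedged sketch with illustrative names) is lesioning: zero a hidden unit's activation and record how much each task head's output changes. Units with large contributions to both tasks count as shared features.

```python
def forward(x, w_hidden, w_taskA, w_taskB, lesion=None):
    """Toy one-hidden-layer, two-head linear network; optionally lesion a unit."""
    hidden = [sum(wi * xi for wi, xi in zip(row, x)) for row in w_hidden]
    if lesion is not None:
        hidden[lesion] = 0.0          # lesion: silence one hidden unit
    out_a = sum(w * h for w, h in zip(w_taskA, hidden))
    out_b = sum(w * h for w, h in zip(w_taskB, hidden))
    return out_a, out_b

def unit_contributions(x, w_hidden, w_taskA, w_taskB):
    """Contribution of each hidden unit to each task head, via lesioning."""
    base_a, base_b = forward(x, w_hidden, w_taskA, w_taskB)
    contribs = []
    for u in range(len(w_hidden)):
        a, b = forward(x, w_hidden, w_taskA, w_taskB, lesion=u)
        contribs.append((abs(base_a - a), abs(base_b - b)))
    return contribs
```

With such a measure in hand, the degree of sharing in a layer can be summarized by how many units contribute substantially to more than one task, which is the quantity the abstract reports decreasing toward higher-tier layers for unrelated tasks.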
Methods for Interpreting and Understanding Deep Neural Networks
This paper provides an entry point to the problem of interpreting a deep
neural network model and explaining its predictions. It is based on a tutorial
given at ICASSP 2017. It introduces some recently proposed techniques of
interpretation, along with theory, tricks and recommendations, to make most
efficient use of these techniques on real data. It also discusses a number of
practical applications.
Comment: 14 pages, 10 figures
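One family of techniques such tutorials typically cover is sensitivity analysis: ranking input features by how strongly the model's output reacts to a small perturbation of each one. The sketch below is a generic illustration, not the paper's specific method; it uses finite differences on a toy model, whereas for a real network one would read the gradients off the backward pass.

```python
def sensitivity(model, x, eps=1e-5):
    """Approximate |d model / d x_i| for each input dimension i
    via one-sided finite differences."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += eps              # nudge one feature at a time
        scores.append(abs(model(x_pert) - base) / eps)
    return scores
```

For a linear toy model f(x) = 3*x0 + 0.1*x1, the scores recover the coefficient magnitudes, identifying x0 as the more relevant feature.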