Fully-Automatic Synapse Prediction and Validation on a Large Data Set
Extracting a connectome from an electron microscopy (EM) data set requires identification of neurons and determination of connections (synapses) between neurons. As manual extraction of this information is very time-consuming, there have been extensive research efforts to automatically segment the neurons to help guide and eventually replace manual tracing. Until recently, there has been comparatively little research on automatic detection of the actual synapses between neurons. This discrepancy can, in part, be attributed to several factors: obtaining neuronal shapes is a prerequisite for the first step in extracting a connectome, manual tracing is much more time-consuming than annotating synapses, and neuronal contact area can be used as a proxy for synapses in determining connections. However, recent research has demonstrated that contact area alone is not a sufficient predictor of a synaptic connection. Moreover, as segmentation improved, we observed that synapse annotation consumes a more significant fraction of overall reconstruction time (upwards of 50% of total effort). This ratio will only get worse as segmentation improves further, gating the overall possible speed-up. Therefore, we address this problem by developing algorithms that automatically detect presynaptic neurons and their postsynaptic partners. In particular, presynaptic structures are detected using a U-Net convolutional neural network (CNN), and postsynaptic partners are detected using a multilayer perceptron (MLP) with features conditioned on the local segmentation. This work is novel because it requires a minimal amount of training, leverages advances in image segmentation directly, and provides a complete solution for polyadic synapse detection. We further introduce novel metrics to evaluate our algorithm on connectomes of meaningful size.
When applied to the output of our method on EM data from Drosophila, these metrics demonstrate that a completely automatic prediction can be used to characterize most of the connectivity correctly.
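As a rough illustration of the second stage described in this abstract, scoring candidate postsynaptic partners can be sketched as a small classifier over features conditioned on the local segmentation. Everything below (the feature choice, a single logistic unit standing in for the MLP, the weights) is invented for the sketch and is not the authors' actual model:

```python
import math

def partner_features(distance_nm, contact_area, same_neuron):
    # Hypothetical features conditioned on the local segmentation:
    # distance from the presynaptic site, contact area between the
    # candidate segment and the presynaptic segment, and a flag that
    # penalizes candidates belonging to the same neuron.
    return [1.0, distance_nm / 100.0, contact_area / 1000.0, float(same_neuron)]

def mlp_score(features, weights):
    # Stand-in for the trained MLP: a single logistic unit.
    z = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Invented weights; a real system would learn these from annotations.
W = [0.5, -2.0, 1.5, -4.0]
near = mlp_score(partner_features(50.0, 800.0, False), W)
far = mlp_score(partner_features(400.0, 100.0, False), W)
assert near > far  # closer, larger-contact candidates score higher
```

The point of the sketch is only the shape of the computation: per-candidate features derived from the segmentation, mapped to a partnership probability.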
An Alarm System For Segmentation Algorithm Based On Shape Model
It is usually hard for a learning system to predict correctly on rare events
that never occur in its training data, and segmentation algorithms are no
exception. Meanwhile, manual inspection of each case to locate failures
becomes infeasible given the trend toward large data scales and limited human
resources. Therefore, we build an alarm system that will set off
alerts when the segmentation result is possibly unsatisfactory, assuming no
corresponding ground truth mask is provided. One plausible solution is to
project the segmentation results into a low dimensional feature space; then
learn classifiers/regressors to predict their qualities. Motivated by this, in
this paper, we learn a feature space using shape information, a strong prior
shared among different datasets that is robust to the appearance variation of
the input data. The shape feature is captured using a Variational
Auto-Encoder (VAE) network trained with only the ground truth masks.
During testing, the segmentation results with bad shapes shall not fit the
shape prior well, resulting in large loss values. Thus, the VAE is able to
evaluate the quality of segmentation result on unseen data, without using
ground truth. Finally, we learn a regressor in the one-dimensional feature
space to predict the qualities of segmentation results. Our alarm system is
evaluated on several recent state-of-the-art segmentation algorithms for 3D
medical segmentation tasks. Compared with other standard quality assessment
methods, our system consistently provides more reliable predictions of the
quality of segmentation results. Comment: Accepted to ICCV 2019 (10 pages, 4 figures)
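The alarm mechanism described above can be sketched in a few lines. As a stand-in for the trained VAE, this sketch scores a predicted mask by its squared deviation from a toy "mean shape"; the real system uses the reconstruction loss of a VAE trained on ground-truth masks, and the linear regressor coefficients here are invented:

```python
def shape_prior_loss(mask, prior_mean):
    # Stand-in for the VAE loss: mean squared deviation of the predicted
    # mask from a mean shape. The real system evaluates the loss of a
    # VAE trained only on ground-truth masks.
    n = len(mask)
    return sum((m - p) ** 2 for m, p in zip(mask, prior_mean)) / n

def predict_quality(loss, a=-2.0, b=1.0):
    # One-dimensional regressor from loss to a quality score (e.g. Dice);
    # slope and intercept are invented, not fitted values.
    return max(0.0, min(1.0, a * loss + b))

def alarm(loss, threshold=0.5):
    # Set off an alert when predicted quality falls below the threshold.
    return predict_quality(loss) < threshold

prior = [1.0, 1.0, 0.0, 0.0]   # toy "mean shape"
good  = [0.9, 1.0, 0.1, 0.0]   # close to the prior: low loss, no alarm
bad   = [0.0, 0.1, 1.0, 0.9]   # implausible shape: high loss, alarm
assert not alarm(shape_prior_loss(good, prior))
assert alarm(shape_prior_loss(bad, prior))
```

The key property being illustrated: masks that fit the learned shape prior poorly incur a large loss, which the regressor maps to a low predicted quality, triggering the alarm without any ground truth.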
Synaptic partner prediction from point annotations in insect brains
High-throughput electron microscopy allows recording of large stacks of
neural tissue with sufficient resolution to extract the wiring diagram of the
underlying neural network. Current efforts to automate this process focus
mainly on the segmentation of neurons. However, in order to recover a wiring
diagram, synaptic partners need to be identified as well. This is especially
challenging in insect brains like Drosophila melanogaster, where one
presynaptic site is associated with multiple postsynaptic elements. Here we
propose a 3D U-Net architecture to directly identify pairs of voxels that are
pre- and postsynaptic to each other. To that end, we formulate the problem of
synaptic partner identification as a classification problem on long-range edges
between voxels to encode both the presence of a synaptic pair and its
direction. This formulation allows us to directly learn from synaptic point
annotations instead of more expensive voxel-based synaptic cleft or vesicle
annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and
improve over the current state of the art, producing 3% fewer errors than the
next best method.
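The long-range edge formulation described above can be sketched as follows: each annotated (pre, post) point pair labels the edge from the presynaptic voxel along the offset that reaches its postsynaptic partner, so a single label encodes both the presence of the pair and its direction. The function name and the offset menu below are invented for illustration; the actual method predicts such labels densely with a 3D U-Net:

```python
def edge_labels(pre_post_pairs, offsets):
    # For each annotated (pre, post) point pair, emit a positive label
    # on the long-range edge (voxel, offset) whose offset carries the
    # presynaptic voxel onto its postsynaptic partner. Direction is
    # encoded by which endpoint the offset originates from.
    labels = {}
    for pre, post in pre_post_pairs:
        off = tuple(q - p for p, q in zip(pre, post))
        if off in offsets:
            labels[(pre, off)] = 1  # synaptic pair, pre -> post
    return labels

# Toy annotations and a small hypothetical offset menu; a real model
# fixes a set of long-range offsets covering typical cleft widths.
offsets = {(0, 0, 4), (0, 4, 0), (4, 0, 0), (0, 0, -4)}
pairs = [((10, 10, 10), (10, 10, 14))]  # post lies 4 voxels away in z
assert edge_labels(pairs, offsets) == {((10, 10, 10), (0, 0, 4)): 1}
```

This is why point annotations suffice as training data: the edge targets are derived directly from the pre/post coordinates, with no voxel-wise cleft or vesicle labeling required.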
Synaptic Partner Assignment Using Attentional Voxel Association Networks
Connectomics aims to recover a complete set of synaptic connections within a
dataset imaged by volume electron microscopy. Many systems have been proposed
for locating synapses, and recent research has included a way to identify the
synaptic partners that communicate at a synaptic cleft. We re-frame the problem
of identifying synaptic partners as directly generating the mask of the
synaptic partners from a given cleft. We train a convolutional network to
perform this task. The network takes the local image context and a binary mask
representing a single cleft as input. It is trained to produce two binary
output masks: one which labels the voxels of the presynaptic partner within the
input image, and another similar labeling for the postsynaptic partner. The
cleft mask acts as an attentional gating signal for the network. We find that
an implementation of this approach performs well on a dataset of mouse
somatosensory cortex, and evaluate it as part of a combined system to predict
both clefts and connections.
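A minimal sketch of the input/output convention this abstract describes, with a flattened 1D toy volume standing in for real 3D image data (the actual system is a convolutional network; only the gating-by-mask idea is illustrated here, and the function names are invented):

```python
def make_input(image, cleft_mask):
    # Stack the raw image and the binary cleft mask as two input
    # channels; the mask channel is the attentional gating signal
    # telling the network which cleft's partners to reconstruct.
    assert len(image) == len(cleft_mask)
    return [image, cleft_mask]

def split_output(pre_logits, post_logits, threshold=0.0):
    # The network emits two maps; thresholding yields the binary
    # presynaptic and postsynaptic partner masks.
    pre = [1 if v > threshold else 0 for v in pre_logits]
    post = [1 if v > threshold else 0 for v in post_logits]
    return pre, post

x = make_input([0.2, 0.8, 0.5], [0, 1, 0])
pre, post = split_output([-1.0, 2.0, 0.5], [1.5, -0.5, -2.0])
assert pre == [0, 1, 1] and post == [1, 0, 0]
```

Running the same image with a different cleft mask channel would, in the trained network, yield the partner masks for that other cleft, which is what makes the mask an attention signal rather than a fixed input.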
Analytic Performance Modeling and Analysis of Detailed Neuron Simulations
Big science initiatives are trying to reconstruct and model the brain by
attempting to simulate brain tissue at larger scales and with increasingly more
biological detail than previously thought possible. The exponential growth of
parallel computer performance has been supporting these developments, and at
the same time maintainers of neuroscientific simulation code have strived to
optimally and efficiently exploit new hardware features. Current state of the
art software for the simulation of biological networks has so far been
developed using performance engineering practices, but a thorough analysis and
modeling of the computational and performance characteristics, especially in
the case of morphologically detailed neuron simulations, is lacking. Other
computational sciences have successfully used analytic performance engineering
and modeling methods to gain insight on the computational properties of
simulation kernels, aid developers in performance optimizations and eventually
drive co-design efforts, but to our knowledge a model-based performance
analysis of neuron simulations has not yet been conducted.
We present a detailed study of the shared-memory performance of
morphologically detailed neuron simulations based on the Execution-Cache-Memory
(ECM) performance model. We demonstrate that this model can deliver accurate
predictions of the runtime of almost all the kernels that constitute the neuron
models under investigation. The gained insight is used to identify the main
governing mechanisms underlying performance bottlenecks in the simulation. The
implications of this analysis on the optimization of neural simulation software
and eventually co-design of future hardware architectures are discussed. In
this sense, our work represents a valuable conceptual and quantitative
contribution to understanding the performance properties of biological network
simulations. Comment: 18 pages, 6 figures, 15 tables
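The ECM model this abstract builds on has a compact prediction rule: the runtime of one unit of work (e.g. one cache line's worth of loop iterations) is the maximum of the in-core execution time that can overlap with data transfers, and the non-overlapping core cycles plus the inter-cache and memory transfer times. A minimal sketch with invented cycle counts:

```python
def ecm_predict(t_ol, t_nol, transfer_cycles):
    # Execution-Cache-Memory model: overlapping in-core work (t_ol) runs
    # concurrently with the non-overlapping core cycles (t_nol) plus the
    # chain of cache/memory transfer times, so the predicted runtime per
    # unit of work is the maximum of the two contributions.
    return max(t_ol, t_nol + sum(transfer_cycles))

# Invented cycle counts per cache line of work for one kernel:
# L1<->L2, L2<->L3, L3<->memory transfer times, in core cycles.
t = ecm_predict(t_ol=8.0, t_nol=4.0, transfer_cycles=[3.0, 5.0, 10.0])
assert t == 22.0  # transfers dominate: the kernel is memory-bound
```

Comparing the two branches of the `max` is also how the model identifies bottlenecks: if the transfer sum dominates, optimizations should target data movement rather than in-core instruction throughput.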