Large Scale Image Segmentation with Structured Loss based Deep Learning for Connectome Reconstruction
We present a method combining affinity prediction with region agglomeration,
which improves significantly upon the state of the art of neuron segmentation
from electron microscopy (EM) in accuracy and scalability. Our method consists
of a 3D U-Net, trained to predict affinities between voxels, followed by
iterative region agglomeration. We train using a structured loss based on
MALIS, encouraging topologically correct segmentations obtained from affinity
thresholding. Our extension consists of two parts: First, we present a
quasi-linear method to compute the loss gradient, improving over the original
quadratic algorithm. Second, we compute the gradient in two separate passes to
avoid spurious gradient contributions in early training stages. Our predictions
are accurate enough that simple learning-free percentile-based agglomeration
outperforms more involved methods used earlier on inferior predictions. We
present results on three diverse EM datasets, achieving relative improvements
over previous results of 27%, 15%, and 250%. Our findings suggest that a single
method can be applied to both nearly isotropic block-face EM data and
anisotropic serial sectioned EM data. The runtime of our method scales linearly
with the size of the volume and achieves a throughput of about 2.6 seconds per
megavoxel, qualifying our method for the processing of very large datasets.
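The affinity-then-agglomerate pipeline can be illustrated with a minimal, learning-free stand-in: threshold the predicted affinities to obtain an over-segmentation via union-find, then score candidate merges by a fixed percentile of the affinities on a shared boundary. This is a 2-D, 4-connectivity sketch with invented function names, not the paper's 3-D implementation.

```python
import numpy as np

def _find(parent, x):
    # Union-find lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def _union(parent, a, b):
    ra, rb = _find(parent, a), _find(parent, b)
    if ra != rb:
        parent[rb] = ra

def affinity_segmentation(aff_right, aff_down, threshold):
    """Initial over-segmentation: merge 4-neighbours whose affinity exceeds threshold.

    aff_right[y, x] links pixel (y, x) to (y, x+1); aff_down[y, x] links
    (y, x) to (y+1, x). Returns a (H, W) label image.
    """
    h = aff_down.shape[0] + 1
    w = aff_right.shape[1] + 1
    parent = list(range(h * w))
    for y in range(h):
        for x in range(w - 1):
            if aff_right[y, x] > threshold:
                _union(parent, y * w + x, y * w + x + 1)
    for y in range(h - 1):
        for x in range(w):
            if aff_down[y, x] > threshold:
                _union(parent, y * w + x, (y + 1) * w + x)
    return np.array([_find(parent, i) for i in range(h * w)]).reshape(h, w)

def percentile_merge_score(boundary_affinities, q=75):
    # Learning-free merge score: a fixed percentile of the affinities on the
    # shared boundary of two fragments; the choice of q is an assumption here.
    return np.percentile(boundary_affinities, q)
```

Fragments whose merge score exceeds a second threshold would then be joined iteratively, highest score first.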
Synaptic partner prediction from point annotations in insect brains
High-throughput electron microscopy allows recording of large stacks of
neural tissue with sufficient resolution to extract the wiring diagram of the
underlying neural network. Current efforts to automate this process focus
mainly on the segmentation of neurons. However, in order to recover a wiring
diagram, synaptic partners need to be identified as well. This is especially
challenging in insect brains like Drosophila melanogaster, where one
presynaptic site is associated with multiple postsynaptic elements. Here we
propose a 3D U-Net architecture to directly identify pairs of voxels that are
pre- and postsynaptic to each other. To that end, we formulate the problem of
synaptic partner identification as a classification problem on long-range edges
between voxels to encode both the presence of a synaptic pair and its
direction. This formulation allows us to directly learn from synaptic point
annotations instead of more expensive voxel-based synaptic cleft or vesicle
annotations. We evaluate our method on the MICCAI 2016 CREMI challenge and
improve over the current state of the art, producing 3% fewer errors than the
next best method.
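The long-range edge formulation can be illustrated with a toy decoder: assuming the network emits one score map per directed voxel offset, partner pairs are read off wherever a score clears a threshold. The `OFFSETS` list and the threshold value are hypothetical choices for illustration, not the paper's.

```python
import numpy as np

# Hypothetical long-range offsets (dz, dy, dx) at which the net scores
# directed pre -> post edges; real offsets are a model design choice.
OFFSETS = [(0, 0, 4), (0, 4, 0), (4, 0, 0)]

def extract_partners(edge_scores, threshold=0.5):
    """Decode directed synaptic-partner pairs from per-offset edge scores.

    edge_scores: array of shape (len(OFFSETS), D, H, W), where
    edge_scores[k, z, y, x] is the predicted probability that voxel
    (z, y, x) is presynaptic to voxel (z, y, x) + OFFSETS[k].
    """
    partners = []
    for k, (dz, dy, dx) in enumerate(OFFSETS):
        for z, y, x in zip(*np.nonzero(edge_scores[k] > threshold)):
            pre = (int(z), int(y), int(x))
            post = (int(z + dz), int(y + dy), int(x + dx))
            partners.append((pre, post))  # edge direction encodes pre -> post
    return partners
```

Because each edge is directed, a single presynaptic voxel can appear in several pairs, matching the one-to-many synapses of the Drosophila brain.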
New foreground markers for Drosophila cell segmentation using marker-controlled watershed
Image segmentation consists of partitioning an image into different objects of interest. For a biological image, the segmentation step is important for understanding the biological process, but it is challenging due to varying cell sizes, intensity inhomogeneity, and clustered cells. The marker-controlled watershed (MCW) has been proposed for segmentation and outperforms the classical watershed; however, the choice of markers for this algorithm is important and strongly impacts the results. In this work, two foreground markers are proposed: kernels, constructed with the software Fiji, and Obj.MPP markers, constructed with the framework Obj.MPP. The new algorithms are compared to the basic MCW. Furthermore, we show that Obj.MPP markers are better than kernels, since the Obj.MPP framework takes into account cell properties such as shape, radiometry, and local contrast. Segmentation results with the new markers, illustrated on a real Drosophila dataset, confirm the good performance in both quantitative and qualitative evaluation.
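A minimal marker-controlled watershed sketch, using SciPy's IFT-based watershed in place of Fiji or Obj.MPP: the foreground markers (however they were obtained) seed the flooding, so marker quality directly determines the segmentation. The uint8 quantisation and the single background label are simplifying assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def marker_controlled_watershed(image, foreground_markers, background_mask):
    """Marker-controlled watershed: flood a cost surface from supplied markers
    instead of from every regional minimum (which causes over-segmentation).

    image: 2-D float array with bright cells on a dark background (assumed
    non-constant); foreground_markers: int array with one positive label per
    cell seed; background_mask: True where the background is certain.
    """
    # Invert intensities so bright cells become basins, and quantise to
    # uint8 as required by scipy's watershed_ift.
    surface = (255 * (1.0 - (image - image.min()) / np.ptp(image))).astype(np.uint8)
    markers = foreground_markers.astype(np.int16).copy()
    markers[background_mask] = -1  # negative label = background seed
    return ndi.watershed_ift(surface, markers)
```

Swapping in better foreground markers (e.g. Obj.MPP detections instead of intensity kernels) changes only the `foreground_markers` input, which is exactly the comparison the abstract describes.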
WPU-Net: Boundary Learning by Using Weighted Propagation in Convolution Network
Deep learning has driven great progress in natural and biological image
processing. However, in materials science and engineering, material
microscopic images often contain flaws and indistinct regions caused by
complex sample preparation, or even by the material itself, hindering the
detection of target objects. In this work, we propose WPU-Net, which
redesigns the architecture and weighted loss of U-Net, forcing the network
to integrate information from adjacent slices and to pay more attention to
topology in the boundary detection task. We then apply WPU-Net to a typical
materials example: grain boundary detection in polycrystalline material.
Experiments demonstrate that the proposed method achieves promising
performance and outperforms state-of-the-art methods. In addition, we propose
a new method for object tracking between adjacent slices, which can
effectively reconstruct the 3D structure of the whole material. Finally, we
present a material microscopic image dataset with the goal of advancing the
state of the art in image processing for materials science.
Convolutional nets for reconstructing neural circuits from brain images acquired by serial section electron microscopy
Neural circuits can be reconstructed from brain images acquired by serial
section electron microscopy. Image analysis has been performed by manual labor
for half a century, and efforts at automation date back almost as far.
Convolutional nets were first applied to neuronal boundary detection a dozen
years ago, and have now achieved impressive accuracy on clean images. Robust
handling of image defects is a major outstanding challenge. Convolutional nets
are also being employed for other tasks in neural circuit reconstruction:
finding synapses and identifying synaptic partners, extending or pruning
neuronal reconstructions, and aligning serial section images to create a 3D
image stack. Computational systems are being engineered to handle petavoxel
images of cubic-millimeter brain volumes.