
    Methods for the acquisition and analysis of volume electron microscopy data


    Learning cellular morphology with neural networks

    Reconstruction and annotation of volume electron microscopy data sets of brain tissue are challenging but can reveal invaluable information about neuronal circuits. Significant progress has recently been made in automated neuron reconstruction as well as automated detection of synapses. However, methods for automating the morphological analysis of nanometer-resolution reconstructions are less established, despite the diversity of possible applications. Here, we introduce cellular morphology neural networks (CMNs), based on multi-view projections sampled from automatically reconstructed cellular fragments of arbitrary size and shape. Using unsupervised training, we infer morphology embeddings (Neuron2vec) of neuron reconstructions and train CMNs to identify glial cells in a supervised classification paradigm, which are then used to resolve neuron reconstruction errors. Finally, we demonstrate that CMNs can be used to identify subcellular compartments and the cell types of neuron reconstructions.
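
    As a rough illustration of the multi-view idea behind CMNs, the sketch below pools per-view CNN features from rendered 2D projections of a cellular fragment into a single morphology embedding. All layer sizes, the module names, and the max-pooling choice are assumptions for illustration, not the authors' architecture.

        # Multi-view embedding sketch in the spirit of CMNs (sizes are assumptions).
        import torch
        import torch.nn as nn

        class MultiViewEmbedder(nn.Module):
            def __init__(self, embed_dim=64):
                super().__init__()
                # Shared 2D CNN applied to each rendered projection (1-channel view).
                self.backbone = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.head = nn.Linear(32, embed_dim)

            def forward(self, views):  # views: (batch, n_views, 1, H, W)
                b, v = views.shape[:2]
                feats = self.backbone(views.flatten(0, 1))       # (b*v, 32)
                pooled = feats.view(b, v, -1).max(dim=1).values  # pool over views
                return self.head(pooled)                         # (b, embed_dim)

        embedding = MultiViewEmbedder()(torch.randn(2, 6, 1, 128, 128))
        print(embedding.shape)  # torch.Size([2, 64])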

    Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning

    The performance of existing supervised neuron segmentation methods is highly dependent on the number of accurate annotations, especially when applied to large-scale electron microscopy (EM) data. By extracting semantic information from unlabeled data, self-supervised methods can improve the performance of downstream tasks, among which the masked image model (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images. However, due to the high degree of structural locality in EM images, as well as the existence of considerable noise, many voxels contain little discriminative information, making MIM pretraining inefficient for the neuron segmentation task. To overcome this challenge, we propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for the optimal image masking ratio and masking strategy. Because of the vast exploration space, using single-agent RL for voxel prediction is impractical. Therefore, we treat each input patch as an agent with a shared behavior policy, allowing for multi-agent collaboration. Furthermore, this multi-agent model can capture dependencies between voxels, which is beneficial for the downstream segmentation task. Experiments conducted on representative EM datasets demonstrate that our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation. Code is available at https://github.com/ydchen0806/dbMiM.
    Comment: IJCAI 23 main track paper
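
    A toy sketch of the decision-based masking idea, under stated assumptions: one policy network shared by all patch "agents" picks a masking ratio per patch and is updated with REINFORCE against a stand-in reward. The reward, policy architecture, and candidate ratios are all illustrative, not the paper's.

        # Shared-policy masking-ratio selection; the reward is a placeholder.
        import torch
        import torch.nn as nn

        ratios = torch.tensor([0.25, 0.50, 0.75])  # assumed candidate masking ratios
        policy = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                               nn.Linear(32, len(ratios)))
        opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

        patch_feats = torch.randn(16, 64)          # 16 patch agents, shared policy
        dist = torch.distributions.Categorical(logits=policy(patch_feats))
        actions = dist.sample()                    # one ratio index per patch

        reward = -torch.rand(16)                   # stand-in reconstruction reward
        loss = -(dist.log_prob(actions) * reward).mean()  # REINFORCE objective
        opt.zero_grad(); loss.backward(); opt.step()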

    Investigating human-perceptual properties of "shapes" using 3D shapes and 2D fonts

    Shapes are generally used to convey meaning. They are used in video games, films and other multimedia, in diverse ways. 3D shapes may be destined for virtual scenes or represent objects to be constructed in the real world. Fonts add character to an otherwise plain block of text, allowing the writer to make important points more visually prominent or distinct from other text. They can indicate the structure of a document at a glance. Rather than studying shapes through traditional geometric shape descriptors, we provide alternative methods to describe and analyse shapes through the lens of human perception. This is done via the concepts of Schelling Points and Image Specificity. Schelling Points are choices people make when they aim to match what they expect others to choose but cannot communicate with others to determine an answer. We study whole-mesh selections in this setting, where Schelling Meshes are the most frequently selected shapes. The key idea behind Image Specificity is that different images evoke different descriptions, but ‘Specific’ images yield more consistent descriptions than others. We apply Specificity to 2D fonts. We show that each concept can be learned, predicting Schelling Meshes for 3D shapes and Specificity for fonts using a depth image-based convolutional neural network. Results are shown for a range of fonts and 3D shapes, and we demonstrate that font Specificity and the Schelling Meshes concept are useful for visualisation, clustering, and search applications. Overall, we find that each concept represents similarities within its respective type of shape, even when there are discontinuities between the shape geometries themselves. The ‘context’ of these similarities is some kind of abstract or subjective meaning which is consistent among different people.
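
    As a hedged illustration of the Specificity measure described above, the snippet below scores an item by the average pairwise similarity of its free-text descriptions, using TF-IDF cosine similarity as an assumed stand-in for whatever similarity the thesis actually employs.

        # Specificity as mean pairwise description similarity (assumed measure).
        from itertools import combinations
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        def specificity(descriptions):
            vecs = TfidfVectorizer().fit_transform(descriptions)
            sims = [cosine_similarity(vecs[i], vecs[j])[0, 0]
                    for i, j in combinations(range(len(descriptions)), 2)]
            return sum(sims) / len(sims)

        # Consistent descriptions yield a higher score than divergent ones.
        print(specificity(["a bold serif font", "thick serif letters",
                           "heavy serif type"]))
        print(specificity(["a nice font", "looks modern", "thin letters"]))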

    Point-Voxel Absorbing Graph Representation Learning for Event Stream based Recognition

    Considering the balance of performance and efficiency, sampled point and voxel methods are usually employed to down-sample dense events into sparse ones. After that, one popular way is to leverage a graph model which treats the sparse points/voxels as nodes and adopts graph neural networks (GNNs) to learn the representation for event data. Although good performance can be obtained, the results are still limited, mainly due to two issues. (1) Existing event GNNs generally adopt an additional max (or mean) pooling layer to summarize all node embeddings into a single graph-level representation for the whole event data, but this approach fails to capture the importance of individual graph nodes and is thus not fully aware of the node representations. (2) Existing methods generally employ either a sparse point or a voxel graph representation model, and thus lack consideration of the complementarity between these two types of representation. To address these issues, in this paper we propose a novel dual point-voxel absorbing graph representation learning framework for event stream data. To be specific, given the input event stream, we first transform it into a sparse event cloud and voxel grids and build dual absorbing graph models for them respectively. Then, we design a novel absorbing graph convolutional network (AGCN) for dual absorbing graph representation learning. The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes, and thus to be fully aware of node representations when summarizing them through the introduced absorbing nodes. Finally, the event representations of the dual learning branches are concatenated to extract the complementary information of the two cues, and the output is fed into a linear layer for event data classification.
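
    A minimal sketch of the absorbing-node readout idea: a virtual absorbing node gathers importance-weighted messages from all event nodes, replacing plain max/mean pooling. The scoring function and layer sizes are illustrative assumptions, not the paper's AGCN.

        # Importance-aware graph readout via a virtual absorbing node.
        import torch
        import torch.nn as nn

        class AbsorbingReadout(nn.Module):
            def __init__(self, dim):
                super().__init__()
                self.score = nn.Linear(dim, 1)    # learned per-node importance
                self.update = nn.Linear(dim, dim)

            def forward(self, x):                 # x: (n_nodes, dim) embeddings
                w = torch.softmax(self.score(x), dim=0)   # importance weights
                absorbed = (w * x).sum(dim=0)             # absorbing-node state
                return torch.relu(self.update(absorbed))  # graph-level vector

        nodes = torch.randn(100, 32)   # e.g. 100 event points or voxels
        print(AbsorbingReadout(32)(nodes).shape)  # torch.Size([32])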

    Machine Learning Methods for Brain Image Analysis

    Understanding how the brain functions and quantifying the compound interactions between complex synaptic networks inside the brain remain some of the most challenging problems in neuroscience. A lack or overabundance of data, a shortage of manpower, and the heterogeneity of data coming from various species all add complexity to an already perplexing problem. Processing vast amounts of brain data needs to be automated, yet with accuracy close to manual human-level performance, and these automated methods need to generalize well to accommodate data from different species. Novel approaches and techniques are also becoming a necessity to reveal correlations between different data modalities in the brain at the global level. In this dissertation, I focus mainly on two problems: automatic segmentation of brain electron microscopy (EM) images and stacks, and integrative analysis of gene expression and synaptic connectivity in the brain. I propose to use deep learning algorithms for the 2D segmentation of EM images, and I designed an automated pipeline with novel insights that achieved state-of-the-art performance on the segmentation of the Drosophila brain. I also propose a novel technique for 3D segmentation of EM image stacks that can be trained end-to-end with no prior knowledge of the data; this technique was evaluated in an ongoing online challenge for 3D segmentation of neurites, where it achieved accuracy close to that of a second human observer. Later, I employed ensemble learning methods to perform the first systematic integrative analysis of the genome and connectome in the mouse brain at both the regional and voxel levels. I show that connectivity signals can be predicted from gene expression signatures with extremely high accuracy, and that only a certain fraction of genes is responsible for this predictive power. Rich functional and cellular analyses of these genes are detailed to validate these findings.
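
    A minimal sketch of the second problem's core step, on synthetic data: predicting a connectivity signal from gene-expression signatures with an ensemble regressor and reading feature importances as a proxy for the predictive genes. Data shapes, the model, and the metric are assumptions, not the dissertation's pipeline.

        # Predict connectivity from gene expression with an ensemble (toy data).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        expr = rng.random((200, 50))   # 200 regions x 50 gene signatures
        conn = expr[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        print(cross_val_score(model, expr, conn, cv=5, scoring="r2").mean())

        # Importances hint at which "genes" carry the predictive signal.
        top = model.fit(expr, conn).feature_importances_.argsort()[::-1][:5]
        print(top)  # recovers indices 0-4 on this toy target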

    Potential Synaptic Connectivity of Different Neurons onto Pyramidal Cells in a 3D Reconstruction of the Rat Hippocampus

    Most existing connectomic data and ongoing efforts focus either on individual synapses (e.g., with electron microscopy) or on regional connectivity (tract tracing). An individual pyramidal cell (PC) extends thousands of synapses over macroscopic distances (∼cm). The contrasting requirements of high resolution and a large field of view make it too challenging to acquire the entire synaptic connectivity of even a single typical cortical neuron. Light microscopy can image whole neuronal arbors and resolve dendritic branches. Analyzing connectivity in terms of close spatial appositions between axons and dendrites could thus bridge the opposite scales, from the synaptic level to whole systems. In the mammalian cortex, structural plasticity of spines and boutons makes these “potential synapses” functionally relevant to learning capability and memory capacity. To date, however, potential synapses have only been mapped in the surroundings of a neuron and relative to its local orientation, rather than in a system-level anatomical reference. Here we overcome this limitation by estimating the potential connectivity of different neurons embedded into a detailed 3D reconstruction of the rat hippocampus. Axonal and dendritic trees were oriented with respect to hippocampal cytoarchitecture according to longitudinal and transversal curvatures. We report the potential connectivity onto PC dendrites from the axons of a dentate granule cell, three CA3 PCs, one CA2 PC, and 13 CA3b interneurons. The numbers, densities, and distributions of potential synapses were analyzed in each sub-region (e.g., CA3 vs. CA1), layer (e.g., oriens vs. radiatum), and septo-temporal location (e.g., dorsal vs. ventral). The overall ratio between the numbers of actual and potential synapses was ∼0.20 for the granule and CA3 PCs. All potential connectivity patterns are strikingly dependent on the anatomical location of both the pre-synaptic and post-synaptic neurons.
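
    As a sketch of the apposition criterion behind potential synapses, the snippet below counts axonal points that come within a threshold distance of any dendritic point using a k-d tree. The coordinates are synthetic and the 2 µm threshold is an assumption, not the paper's value.

        # Potential synapses as close axon-dendrite appositions (toy data).
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(1)
        axon = rng.random((5000, 3)) * 100.0      # axonal points (um)
        dendrite = rng.random((3000, 3)) * 100.0  # dendritic points (um)

        threshold = 2.0                           # assumed apposition distance (um)
        hits = cKDTree(dendrite).query_ball_point(axon, r=threshold)
        n_potential = sum(1 for h in hits if h)
        print(f"{n_potential} axonal points within {threshold} um of a dendrite")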

    A Generalized Framework for Agglomerative Clustering of Signed Graphs applied to Instance Segmentation

    We propose a novel theoretical framework that generalizes algorithms for hierarchical agglomerative clustering to weighted graphs with both attractive and repulsive interactions between the nodes. This framework defines GASP, a Generalized Algorithm for Signed graph Partitioning, and allows us to explore many combinations of different linkage criteria and cannot-link constraints. We prove the equivalence of existing clustering methods to some of those combinations, and introduce new algorithms for combinations which have not been studied. An extensive comparison is performed to evaluate properties of the clustering algorithms in the context of instance segmentation in images, including robustness to noise and efficiency. We show how one of the new algorithms proposed in our framework outperforms all previously known agglomerative methods for signed graphs, both on the competitive CREMI 2016 EM segmentation benchmark and on the CityScapes dataset.
    Comment: 19 pages, 8 figures, 6 tables
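
    A compact sketch of agglomeration on a signed graph in the spirit of GASP, assuming a simple "sum" linkage: repeatedly merge the cluster pair with the largest positive summed interaction and stop once only repulsive interactions remain. The merge bookkeeping is deliberately naive compared to the paper's algorithms.

        # Greedy signed-graph agglomeration with "sum" linkage (illustrative).
        def agglomerate_signed(n_nodes, edges):
            """edges: {(u, v): weight}; weight > 0 attracts, < 0 repels."""
            cluster = list(range(n_nodes))        # node -> cluster id
            inter = dict(edges)                   # cluster-pair interactions
            while True:
                pair = max(inter, key=inter.get, default=None)
                if pair is None or inter[pair] <= 0:
                    break                         # only repulsion is left
                a, b = pair
                cluster = [a if c == b else c for c in cluster]
                merged = {}
                for (u, v), w in inter.items():
                    u, v = (a if u == b else u), (a if v == b else v)
                    if u != v:
                        key = (min(u, v), max(u, v))
                        merged[key] = merged.get(key, 0.0) + w  # sum linkage
                inter = merged
            return cluster

        edges = {(0, 1): 2.0, (1, 2): 1.5, (2, 3): -3.0, (3, 4): 4.0, (0, 2): -0.5}
        print(agglomerate_signed(5, edges))  # -> [0, 0, 0, 3, 3]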