Annotating Synapses in Large EM Datasets
Reconstructing neuronal circuits at the level of synapses is a central
problem in neuroscience and becoming a focus of the emerging field of
connectomics. To date, electron microscopy (EM) is the most proven technique
for identifying and quantifying synaptic connections. As advances in EM make
acquiring larger datasets possible, subsequent manual synapse identification
(i.e., proofreading) for deciphering a connectome becomes a major time
bottleneck. Here we introduce a large-scale, high-throughput, and
semi-automated methodology to efficiently identify synapses. We successfully
applied our methodology to the Drosophila medulla optic lobe, annotating many
more synapses than previous connectome efforts. Our approaches are extensible
and will make the often complicated process of synapse identification
accessible to a wider community of potential proofreaders.
Focused Proofreading: Efficiently Extracting Connectomes from Segmented EM Images
Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowdsourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowdsourcing.
Dimensionality reduction and unsupervised learning techniques applied to clinical psychiatric and neuroimaging phenotypes
Unsupervised learning and other multivariate analysis techniques are increasingly recognized in neuropsychiatric research. Here, finite mixture models and random forests were applied to clinical observations of patients with major depression to detect and validate treatment-response subgroups. Further, independent component analysis and agglomerative hierarchical clustering were combined to build a brain parcellation based solely on structural covariance information from magnetic resonance brain images.
Complementary Platforms
We introduce an analytical framework, close to the canonical model of platform competition investigated by Rochet and Tirole (2006), to study pricing decisions in two-sided markets when two or more platforms are needed simultaneously for the successful completion of a transaction. The model is a natural extension of the Cournot-Ellet theory of complementary monopoly, featuring clear-cut asymmetric single- and multihoming patterns across the market. The results indicate that the so-called anticommons problem generalizes to two-sided markets because individual platforms do not take into account the negative pricing externality they exert on the other platforms. As a result, mergers between such platforms may be welfare enhancing, but they involve a redistribution of surplus from one side of the market to the other. Moreover, the limit of an atomistic allocation of property rights is not monopoly pricing, indicating that differences with the received theory of complementarity also exist.
Analyzing Image Segmentation for Connectomics
Automatic image segmentation is critical to scale up electron microscope (EM) connectome reconstruction. To this end, segmentation competitions, such as CREMI and SNEMI, exist to help researchers evaluate segmentation algorithms with the goal of improving them. Because generating ground truth is time-consuming, these competitions often fail to capture the challenges of segmenting the larger datasets required in connectomics. More generally, the common metrics for EM image segmentation do not emphasize impact on downstream analysis and are often not very useful for isolating problem areas in the segmentation. For example, they do not capture connectivity information and often overrate the quality of a segmentation, as we demonstrate later. To address these issues, we introduce a novel strategy that enables evaluation of segmentation at large scales in either a supervised setting, where ground truth is available, or an unsupervised one. To achieve this, we first introduce new metrics more closely aligned with the use of segmentation in downstream analysis and reconstruction. In particular, these include synapse connectivity and completeness metrics that provide meaningful and intuitive interpretations of segmentation quality as it relates to the preservation of neuron connectivity. We also propose measures of segmentation correctness and completeness based on the percentage of "orphan" fragments and the concentration of self-loops formed by segmentation failures, which are helpful in analysis and can be computed without ground truth. The introduction of new metrics intended for practical applications involving large datasets necessitates a scalable software ecosystem, which is a critical contribution of this paper. To this end, we introduce a scalable, flexible software framework that enables integration of several different metrics and provides mechanisms to evaluate and debug differences between segmentations.
We also introduce visualization software to help users explore the various metrics collected. We evaluate our framework on two relatively large public ground-truth datasets, providing novel insights on example segmentations.
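To make the ground-truth-free metrics concrete, here is an illustrative sketch, not the paper's implementation, of one way an "orphan fragment" percentage could be computed: a segment counts as an orphan if it carries no synapse annotation. The voxel-to-segment mapping, the `min_size` threshold, and the toy volume are all invented stand-ins for a real 3-D label volume.

```python
# Illustrative sketch (not the paper's implementation): percentage of
# segments that are "orphans", i.e. touch no synapse annotation.
# A dict of voxel -> segment id stands in for a full 3-D label volume.
from collections import Counter

def orphan_percentage(segment_of_voxel, synapse_voxels, min_size=1):
    """Percent of segments (>= min_size voxels) with no synapse annotation."""
    sizes = Counter(segment_of_voxel.values())
    # Segments that contain at least one annotated synapse voxel.
    with_synapse = {segment_of_voxel[v] for v in synapse_voxels
                    if v in segment_of_voxel}
    segments = [s for s, n in sizes.items() if n >= min_size]
    orphans = [s for s in segments if s not in with_synapse]
    return 100.0 * len(orphans) / len(segments)

# Toy volume: segment 1 touches a synapse; segments 2 and 3 do not,
# so 2 of 3 segments are orphans.
seg = {(0, 0, 0): 1, (0, 0, 1): 1, (1, 0, 0): 2, (2, 0, 0): 3}
print(orphan_percentage(seg, [(0, 0, 1)]))  # about 66.7
```

The useful property of a metric like this is exactly what the abstract highlights: it needs only the segmentation and the synapse annotations, no ground-truth labels, so it can flag problem regions at scales where manual ground truth is infeasible.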
- …