
    Methods for the acquisition and analysis of volume electron microscopy data


    How to describe a cell: a path to automated versatile characterization of cells in imaging data

    A cell is the basic functional unit of life. Most multicellular organisms, including animals, are composed of a variety of cell types that fulfil distinct roles. Within an organism, all cells share the same genome; however, their diverse genetic programs lead them to acquire different molecular and anatomical characteristics. Describing these characteristics is essential for understanding how cellular diversity emerged and how it contributes to organism function. Probing cellular appearance by microscopy is the original way of describing cell types and remains the main approach to characterising cellular morphology and position within the organism. Present cutting-edge microscopy techniques generate immense amounts of data, requiring efficient, automated, unbiased methods of analysis. Not only can such methods accelerate scientific discovery, they should also facilitate large-scale, systematic, reproducible analysis. The need to process big datasets has led to the development of intricate image analysis pipelines; however, these are mostly tailored to a particular dataset and a specific research question. In this thesis I address the problem of creating more general, fully automated ways of describing cells in different imaging modalities, with a specific focus on deep neural networks as a promising solution for extracting rich general-purpose features from the analysed data. I further target the problem of integrating multiple data modalities to generate a detailed description of cells at the whole-organism level. First, using two examples of cell analysis projects, I show how automated image analysis pipelines, and neural networks in particular, can assist in characterising cells in microscopy data. In the first project I analyse a movie of Drosophila embryo development to elucidate the difference in myosin patterns between two populations of cells with different shape fates. In the second project I develop a pipeline for automatic cell classification in a new imaging modality, showing that the quality of the data is sufficient to tell apart cell types in a volume of mouse brain cortex. Next, I present an extensive collaborative effort aimed at generating a whole-body multimodal cell atlas of a three-segmented Platynereis dumerilii worm, combining high-resolution morphology and gene expression. To generate a multi-sided description of cells in the atlas, I create a pipeline for assigning coherent, denoised gene expression profiles, obtained from spatial gene expression maps, to cells segmented in the EM volume. Finally, as the main project of this thesis, I focus on extracting comprehensive, unbiased cell morphology features from an EM volume of Platynereis dumerilii. I design a fully unsupervised neural network pipeline for extracting rich morphological representations that enable grouping cells into morphological classes with characteristic gene expression. I further show how such descriptors can be used to explore the morphological diversity of cells, tissues and organs in the dataset.
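
    As a concrete illustration of the kind of fully unsupervised pipeline described above, here is a minimal sketch (not the thesis's actual network): a small 3D convolutional autoencoder trained on cropped cell volumes, whose bottleneck activations serve as general-purpose morphology descriptors that could later be clustered into candidate cell classes. The input size, architecture and loss are assumptions chosen for brevity.

```python
# Illustrative sketch: a 3D convolutional autoencoder whose bottleneck
# provides a fixed-length morphology descriptor per segmented cell.
import torch
import torch.nn as nn

class CellAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: 64^3 binary cell mask -> latent vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, latent_dim),
        )
        # Decoder mirrors the encoder so a reconstruction loss can drive
        # training without any manual labels.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8, 8)),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = CellAutoencoder()
cells = torch.rand(4, 1, 64, 64, 64)   # stand-in for cropped cell volumes
recon, embeddings = model(cells)
loss = nn.functional.binary_cross_entropy_with_logits(recon, cells)
loss.backward()                        # one illustrative training step
print(embeddings.shape)                # (4, 64) morphology descriptors
```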

    3D CNN methods in biomedical image segmentation

    A clear trend in biomedical imaging is the integration of increasingly complex interpretative layers into the pure data acquisition process. One of the most interesting and anticipated goals in the field is the automatic segmentation of objects of interest in extensive acquisition data, a target that would allow biomedical imaging to move beyond its use as a purely assistive tool and become a cornerstone in ambitious large-scale challenges such as the extensive quantitative study of the human brain. In 2019, convolutional neural networks (CNNs) represent the state of the art in biomedical image segmentation, and scientific interests from a variety of fields, spanning from automotive to natural resource exploration, converge on their development. While most applications of CNNs focus on single-image segmentation, biomedical image data, be it MRI, CT scans, microscopy, etc., often benefits from a three-dimensional volumetric representation. This work explores a reformulation of the CNN segmentation problem that is native to the 3D nature of the data, with particular interest in applications to fluorescence microscopy volumetric data produced at the European Laboratory for Non-linear Spectroscopy in the context of two large international human brain study projects: the Human Brain Project and the White House BRAIN Initiative.
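
    To make the "native 3D" reformulation concrete, below is a minimal sketch of a volumetric segmentation network: a toy two-level 3D U-Net in PyTorch that convolves and pools along all three axes and outputs a class score per voxel. The layer widths and input volume are illustrative assumptions, not the architectures developed in this work.

```python
# Toy 3D U-Net: volumetric convolutions with one downsampling level and a
# skip connection, predicting a class per voxel of the input stack.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = block(1, 8)
        self.down = nn.MaxPool3d(2)
        self.bottom = block(8, 16)
        self.up = nn.ConvTranspose3d(16, 8, 2, stride=2)
        self.dec = block(16, 8)            # 8 skip + 8 upsampled channels
        self.head = nn.Conv3d(8, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)                    # full-resolution features
        b = self.bottom(self.down(e))      # half-resolution context
        u = self.up(b)                     # back to full resolution
        return self.head(self.dec(torch.cat([e, u], dim=1)))

net = TinyUNet3D()
volume = torch.rand(1, 1, 32, 32, 32)      # stand-in fluorescence stack
logits = net(volume)                       # per-voxel class scores
print(logits.shape)                        # (1, 2, 32, 32, 32)
```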

    Correlative light and electron microscopy: new strategies for improved throughput and targeting precision

    The need for quantitative analysis is crucial when studying fundamental mechanisms in cell biology. Common assays consist of interfering with a system via protein knockdowns or drug treatments. These very often lead to considerable response variability, which is generally addressed by analyzing large populations. While the imaging throughput of light microscopy (LM) is high enough for such large screens, electron microscopy (EM) still lags behind and is not adapted to collecting large amounts of data from highly heterogeneous cell populations. Nevertheless, EM is the only technique that offers high-resolution imaging of the entire subcellular context. Correlative light and electron microscopy (CLEM) has made it possible to look at rare events and to address heterogeneous populations. Our goal is to develop new strategies in CLEM. More specifically, we aim at automating the processes of screening large cell populations (living or pre-fixed), identifying the sub-populations of interest by LM, targeting these by EM, and measuring the key components of the subcellular organization. New 3D-EM techniques such as focused ion beam scanning electron microscopy (FIB-SEM) enable a high degree of automation for the acquisition of high-resolution, full-cell datasets. So far, this has only been applied to individual target volumes, often isotropic, and has not been designed to acquire multiple regions of interest. The ability to acquire full cells with voxel sizes down to 5 nm x 5 nm x 5 nm (x, y referring to pixel size, z to slice thickness) leads to the accumulation of large datasets. Their analysis involves tedious manual segmentation or automated segmentation algorithms that are not yet well established. To enable the analysis and quantification of an extensive amount of data, we decided to explore the potential of stereology protocols in combination with automated acquisition in the FIB-SEM. Instead of isotropic datasets, a few evenly spaced sections are used to quantify subcellular structures. Our strategy therefore combines CLEM, 3D-EM and stereology to collect and analyze large numbers of cells selected based on their phenotype as visible by fluorescence microscopy. We demonstrate the power of the approach in a systematic screen of Golgi apparatus morphology upon altered expression of 10 proteins, plus negative and positive controls. In parallel to this core project, we demonstrate the power of combining correlative approaches with 3D-EM for the detailed structural analysis of fundamental cell biology events during cell division, and for understanding complex physiological transitions in a multicellular model organism.
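
    A minimal sketch of the stereology strategy described above, assuming a Cavalieri-style estimator: the volume of a labelled structure (e.g. the Golgi) is approximated from a few evenly spaced sections as the sum of its profile areas times the section spacing. All names and numbers below are illustrative, not values from the actual screen.

```python
# Cavalieri-style volume estimation from sparse, evenly spaced sections.
import numpy as np

def cavalieri_volume(sections, section_spacing_nm, pixel_size_nm):
    """Estimate a structure's volume from evenly spaced binary sections.

    sections: list of 2D boolean masks of the structure on each sampled
    slice; section_spacing_nm is the distance between sampled slices.
    """
    pixel_area = pixel_size_nm ** 2
    # Sum of profile areas times spacing approximates the integral over z.
    areas = [mask.sum() * pixel_area for mask in sections]
    return sum(areas) * section_spacing_nm  # volume in nm^3

rng = np.random.default_rng(0)
# Five sampled sections, 100 nm apart, 5 nm pixels (assumed values).
fake_sections = [rng.random((256, 256)) > 0.95 for _ in range(5)]
v = cavalieri_volume(fake_sections, section_spacing_nm=100, pixel_size_nm=5)
print(f"estimated volume: {v:.3e} nm^3")
```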

    Review: Deep learning in electron microscopy

    Deep learning is transforming most areas of science and technology, including electron microscopy. This review offers a practical perspective aimed at developers with limited familiarity. For context, we review popular applications of deep learning in electron microscopy. Next, we discuss the hardware and software needed to get started with deep learning and to interface with electron microscopes. We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy.

    Machine learning of image analysis with convolutional networks and topological constraints

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2010, by Viren Jain. Includes bibliographical references (p. 130-140).
    We present an approach to solving computer vision problems in which the goal is to produce a high-dimensional, pixel-based interpretation of some aspect of the underlying structure of an image. Such tasks have traditionally been categorized as "low-level vision" problems; examples include image denoising, boundary detection, and motion estimation. Our approach is characterized by two main elements, both of which represent a departure from previous work. The first is a focus on convolutional networks, a machine learning strategy that operates directly on an input image with no use of hand-designed features and employs many thousands of free parameters that are learned from data. Previous work in low-level vision has largely focused on completely hand-designed algorithms or on learning methods with a hand-designed feature space. We demonstrate that a learning approach with high model complexity, but zero prior knowledge about any specific image domain, can outperform existing techniques even in the challenging area of natural image processing. We also present results that establish how convolutional networks are closely related to Markov random fields (MRFs), a popular probabilistic approach to image analysis, but can in practice achieve significantly greater model complexity. The second aspect of our approach is the use of domain-specific cost functions and learning algorithms that reflect the structured nature of certain prediction problems in image analysis. In particular, we show how concepts from digital topology can be used in the context of boundary detection to both evaluate and optimize the high-order property of topological accuracy. We demonstrate that these techniques can significantly improve the machine learning approach and outperform state-of-the-art boundary detection and segmentation methods. Throughout our work we maintain a special interest in the application of our methods to connectomics, an emerging scientific discipline that seeks high-throughput methods for recovering neural connectivity data from brains. This application requires solving low-level image analysis problems at a tera-voxel or peta-voxel scale, and therefore represents an extremely challenging and exciting arena for the development of computer vision methods.
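
    The digital-topology idea mentioned above can be illustrated with a simplified check (an assumption for illustration, not the thesis's actual cost function): a candidate pixel flip in a binary boundary map is accepted only if it leaves the image's topological signature unchanged, here measured as the number of 8-connected foreground components and 4-connected background components.

```python
# Simplified topology-preservation check for binary 2D segmentations.
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3), dtype=int)               # 8-connectivity
FOUR = ndimage.generate_binary_structure(2, 1)   # 4-connectivity

def topology_signature(binary):
    # Count foreground components (8-connected) and background
    # components (4-connected); a change signals a split, merge,
    # hole creation, or hole deletion somewhere in the image.
    n_fg = ndimage.label(binary, structure=EIGHT)[1]
    n_bg = ndimage.label(~binary, structure=FOUR)[1]
    return n_fg, n_bg

def flip_is_topology_safe(binary, y, x):
    flipped = binary.copy()
    flipped[y, x] = ~flipped[y, x]
    return topology_signature(binary) == topology_signature(flipped)

seg = np.zeros((5, 5), dtype=bool)
seg[1:4, 1:4] = True                         # one solid 3x3 component
print(flip_is_topology_safe(seg, 2, 2))      # False: removal opens a hole
print(flip_is_topology_safe(seg, 1, 1))      # True: corner pixel is simple
```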

    A model-based method for 3D reconstruction of cerebellar parallel fibres from high-resolution electron microscope images

    In order to understand how the brain works, we need to understand how its neural circuits process information. Electron microscopy remains the only imaging technique capable of providing sufficient resolution to reconstruct the dense connectivity between all neurons in a circuit. Automated electron microscopy techniques are approaching the point where usefully large circuits might be successfully imaged, but the development of automated reconstruction techniques lags far behind. No fully automated reconstruction technique currently produces acceptably accurate reconstructions, and semi-automated approaches currently require an extreme amount of manual effort. This reconstruction bottleneck places severe limits on the size of neural circuits that can be reconstructed. Improved automated reconstruction techniques are therefore highly desirable and under active development. The human brain contains ~86 billion neurons, and ~80% of these are located in the cerebellum. Of these cerebellar neurons, the vast majority are granule cells. The axons of these granule cells are called parallel fibres and tend to be oriented in approximately the same direction, making 2+1D reconstruction approaches feasible. In this work we focus on the problem of reconstructing these parallel fibres and make four main contributions: (1) a model-based algorithm for reconstructing 2D parallel fibre cross-sections that achieves state-of-the-art 2D reconstruction performance; (2) a fully automated algorithm for reconstructing 3D parallel fibres that achieves state-of-the-art 3D reconstruction performance; (3) a semi-automated approach for reconstructing 3D parallel fibres that significantly improves reconstruction accuracy compared to our fully automated approach while requiring ~40 times less labelling effort than a purely manual reconstruction; (4) a "gold standard" ground truth data set for the molecular layer of the mouse cerebellum that will provide a valuable reference for the development and benchmarking of reconstruction algorithms.
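
    As a rough sketch of the 2+1D setting, the example below links 2D fibre cross-sections detected on adjacent sections by optimally matching their centroids under a maximum drift. The Hungarian matching is an illustrative stand-in for the paper's model-based linking, and all coordinates are made up.

```python
# Link fibre cross-sections across adjacent serial sections by matching
# nearby centroids with a globally optimal (Hungarian) assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_sections(centroids_a, centroids_b, max_shift=5.0):
    """Match fibre cross-sections between two adjacent sections.

    centroids_*: (N, 2) arrays of cross-section centres (pixels).
    Returns index pairs (i, j) whose displacement is within max_shift.
    """
    cost = cdist(centroids_a, centroids_b)      # pairwise distances
    rows, cols = linear_sum_assignment(cost)    # optimal pairing
    return [(int(i), int(j)) for i, j in zip(rows, cols)
            if cost[i, j] <= max_shift]

# Two fake sections: fibres drift by ~1 px between slices (assumed values).
section_0 = np.array([[10.0, 10.0], [40.0, 12.0], [25.0, 30.0]])
section_1 = np.array([[25.8, 30.9], [10.7, 9.5], [41.1, 12.6]])
print(link_sections(section_0, section_1))     # [(0, 1), (1, 2), (2, 0)]
```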