Cell Segmentation in 3D Confocal Images using Supervoxel Merge-Forests with CNN-based Hypothesis Selection
Automated segmentation approaches are crucial to quantitatively analyze
large-scale 3D microscopy images. Particularly in deep tissue regions,
automatic methods still fail to provide error-free segmentations. To improve
the segmentation quality throughout imaged samples, we present a new
supervoxel-based 3D segmentation approach that outperforms current methods and
reduces the manual correction effort. The algorithm consists of gentle
preprocessing and a conservative super-voxel generation method followed by
supervoxel agglomeration based on local signal properties and a postprocessing
step to fix under-segmentation errors using a Convolutional Neural Network. We
validate the functionality of the algorithm on manually labeled 3D confocal
images of the plant Arabidopsis thaliana and compare the results to a
state-of-the-art meristem segmentation algorithm. Comment: 5 pages, 3 figures, 1 table
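The agglomeration idea described above can be sketched in a few lines of NumPy: adjacent supervoxels are merged when their mean intensities are similar. This is an illustrative toy criterion of our own, not the paper's actual local signal features or its CNN-based hypothesis selection.

```python
import numpy as np

def adjacent_pairs(labels):
    """Collect pairs of supervoxel labels that touch along any axis."""
    pairs = set()
    for axis in range(labels.ndim):
        l = np.moveaxis(labels, axis, 0)
        a, b = l[:-1].ravel(), l[1:].ravel()
        diff = a != b
        pairs.update(zip(np.minimum(a, b)[diff].tolist(),
                         np.maximum(a, b)[diff].tolist()))
    return pairs

def agglomerate(labels, image, threshold):
    """Merge touching supervoxels whose mean intensities differ by less
    than `threshold` -- a toy stand-in for signal-based merging."""
    ids = np.unique(labels)
    means = {int(i): float(image[labels == i].mean()) for i in ids}
    parent = {int(i): int(i) for i in ids}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in sorted(adjacent_pairs(labels)):
        if abs(means[a] - means[b]) < threshold:
            parent[find(a)] = find(b)
    return np.vectorize(find)(labels)
```

Note that this sketch compares the original per-supervoxel means rather than updating region statistics after each merge, which a production merge-forest would do.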
Designing Deep Learning Frameworks for Plant Biology
In recent years the parallel progress in high-throughput microscopy and deep learning drastically widened the landscape of possible research avenues in life sciences.
In particular, combining high-resolution microscopic images and automated imaging pipelines powered by deep learning dramatically reduced the manual annotation work required for quantitative analysis.
In this work, we will present two deep learning frameworks tailored to the needs of life scientists in the context of plant biology.
First, we will introduce PlantSeg, a software for 2D and 3D instance segmentation. The PlantSeg pipeline contains several pre-trained models for different microscopy modalities and multiple popular graph-based instance segmentation algorithms.
In the second part, we will present CellTypeGraph, a benchmark for quantitatively evaluating graph neural networks. The benchmark is designed to test the ability of machine learning methods to classify the types of cells in \textit{Arabidopsis thaliana} ovules. CellTypeGraph's prime aim is to give a valuable tool to the geometric learning community, but at the same time it also offers a framework for plant biologists to perform fast and accurate cell type inference on new data.
Learning Instance Segmentation from Sparse Supervision
Instance segmentation is an important task in many domains of automatic image processing, such as self-driving cars, robotics and microscopy data analysis. Recently, deep learning-based algorithms have brought image segmentation close to human performance. However, most existing models rely on dense ground-truth labels for training, which are expensive, time-consuming and often require experienced annotators to perform the labeling. Besides the annotation burden, training complex high-capacity neural networks depends upon non-trivial expertise in the choice and tuning of hyperparameters, making the adoption of these models challenging for researchers in other fields.
The aim of this work is twofold. The first is to make deep learning segmentation methods accessible to non-specialists. The second is to address the dense annotation problem by developing instance segmentation methods trainable with limited ground-truth data.
In the first part of this thesis, I bring state-of-the-art instance segmentation methods closer to non-experts by developing PlantSeg: a pipeline for volumetric segmentation of light microscopy images of biological tissues into cells. PlantSeg comes with a large repository of pre-trained models and delivers highly accurate results on a variety of samples and image modalities. We exemplify its usefulness to answer biological questions in several collaborative research projects.
In the second part, I tackle the dense annotation bottleneck by introducing SPOCO, an instance segmentation method which can be trained from just a few annotated objects. It demonstrates strong segmentation performance on challenging natural and biological benchmark datasets at a much lower manual annotation cost and delivers state-of-the-art results on the CVPPP benchmark.
In summary, my contributions enable training of instance segmentation models with limited amounts of labeled data and make these methods more accessible for non-experts, speeding up the process of quantitative data analysis.
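The core idea behind training from sparse annotations can be illustrated very simply: compute the loss only over annotated pixels, so unlabeled regions contribute nothing. This is a minimal NumPy sketch of that principle; SPOCO itself is embedding-based and considerably more involved, and the function below is ours, not the thesis's.

```python
import numpy as np

def masked_bce(pred, target, mask, eps=1e-7):
    """Binary cross-entropy averaged only over annotated pixels (mask == 1).
    Unlabeled pixels (mask == 0) contribute nothing, so a model can in
    principle be supervised from a handful of annotated objects."""
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    return float((bce * mask).sum() / mask.sum())
```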
Accurate and versatile 3D segmentation of plant tissues at cellular resolution
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
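The two-stage design (boundary prediction, then partitioning) can be mimicked with a crude seed-and-assign step, sketched below with SciPy only. This is a rough stand-in under the assumption that a boundary probability map is already available; PlantSeg's actual partitioners are graph-based algorithms (e.g. multicut-style methods) that are far more robust than nearest-seed assignment.

```python
import numpy as np
from scipy import ndimage as ndi

def segment_from_boundaries(pmap, seed_threshold=0.3):
    """Crude stand-in for the partitioning stage: connected regions of low
    boundary probability become seeds, and every remaining voxel is then
    assigned to its nearest seed."""
    seeds, _ = ndi.label(pmap < seed_threshold)
    # feature transform: index of the nearest seed voxel for every voxel
    idx = ndi.distance_transform_edt(seeds == 0,
                                     return_distances=False,
                                     return_indices=True)
    return seeds[tuple(idx)]
```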
Quantitation of Cellular Dynamics in Growing Arabidopsis Roots with Light Sheet Microscopy
To understand dynamic developmental processes, living tissues must be imaged
frequently and for extended periods of time. Root development is extensively
studied at cellular resolution to understand basic mechanisms underlying
pattern formation and maintenance in plants. Unfortunately, ensuring continuous
specimen access, while preserving physiological conditions and preventing
photo-damage, poses major barriers to measurements of cellular dynamics in
indeterminately growing organs such as plant roots. We present a system that
integrates optical sectioning through light sheet fluorescence microscopy with
hydroponic culture that enables us to image at cellular resolution a vertically
growing Arabidopsis root every few minutes and for several consecutive days. We
describe novel automated routines to track the root tip as it grows, track
cellular nuclei and identify cell divisions. We demonstrate the system's
capabilities by collecting data on divisions and nuclear dynamics. Comment: * The first two authors contributed equally to this work
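The abstract does not detail the tracking routines; as an illustration only, frame-to-frame nucleus linking is often bootstrapped with greedy nearest-neighbour matching of detected centroids, sketched below in NumPy. The distance gate `max_dist` is a hypothetical parameter, not one taken from the paper.

```python
import numpy as np

def link_nuclei(prev_pts, next_pts, max_dist=5.0):
    """Greedy nearest-neighbour linking of nucleus centroids between two
    consecutive frames. Returns (i, j) index pairs; unmatched detections
    (new nuclei, divisions, losses) are simply left unlinked."""
    d = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=-1)
    links, used = [], set()
    # match the easiest (closest) nuclei first
    for i in np.argsort(d.min(axis=1)):
        row = d[i].copy()
        row[list(used)] = np.inf  # each target may be used at most once
        j = int(np.argmin(row))
        if row[j] <= max_dist:
            links.append((int(i), j))
            used.add(j)
    return links
```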
Image analysis workflows to reveal the spatial organization of cell nuclei and chromosomes
Nucleus, chromatin, and chromosome organization studies heavily rely on fluorescence microscopy imaging to elucidate the distribution and abundance of structural and regulatory components. Three-dimensional (3D) image stacks are a source of quantitative data on signal intensity level and distribution and on the type and shape of distribution patterns in space. Their analysis can lead to novel insights that are otherwise missed in qualitative-only analyses. Quantitative image analysis requires specific software and workflows for image rendering, processing, segmentation, setting measurement points and reference frames and exporting target data before further numerical processing and plotting. These tasks often call for the development of customized computational scripts and require an expertise that is not broadly available to the community of experimental biologists. Yet, the increasing accessibility of high- and super-resolution imaging methods fuels the demand for user-friendly image analysis workflows. Here, we provide a compendium of strategies developed by participants of a training school from the COST action INDEPTH to analyze the spatial distribution of nuclear and chromosomal signals from 3D image stacks, acquired by diffraction-limited confocal microscopy and super-resolution microscopy methods (SIM and STED). While the examples make use of one specific commercial software package, the workflows can easily be adapted to concurrent commercial and open-source software. 
The aim is to encourage biologists lacking custom-script-based expertise to venture into quantitative image analysis and to better exploit the discovery potential of their images. Abbreviations: 3D FISH: three-dimensional fluorescence in situ hybridization; 3D: three-dimensional; ASY1: ASYNAPTIC 1; CC: chromocenters; CO: Crossover; DAPI: 4',6-diamidino-2-phenylindole; DMC1: DNA MEIOTIC RECOMBINASE 1; DSB: Double-Strand Break; FISH: fluorescence in situ hybridization; GFP: GREEN FLUORESCENT PROTEIN; HEI10: HUMAN ENHANCER OF INVASION 10; NCO: Non-Crossover; NE: Nuclear Envelope; Oligo-FISH: oligonucleotide fluorescence in situ hybridization; RNPII: RNA Polymerase II; SC: Synaptonemal Complex; SIM: structured illumination microscopy; ZMM (ZIP, MSH4, MSH5 and MER3 proteins); ZYP1: ZIPPER-LIKE PROTEIN 1.
Making microscopy count: quantitative light microscopy of dynamic processes in living plants
First published: April 2016. This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. Cell theory has officially reached 350 years of age, as the first use of the word 'cell' in a biological context can be traced to a description of plant material by Robert Hooke in his historic publication 'Micrographia: or some physiological descriptions of minute bodies'. The 2015 Royal Microscopical Society Botanical Microscopy meeting was a celebration of the streams of investigation initiated by Hooke to understand, at the sub-cellular scale, how plant cell function and form arise. Much of the work presented, and Honorary Fellowships awarded, reflected the advanced application of bioimaging informatics to extract quantitative data from micrographs that reveal dynamic molecular processes driving cell growth and physiology. The field has progressed from collecting many pixels in multiple modes to associating these measurements with objects or features that are meaningful biologically. The additional complexity involves object identification that draws on a different type of expertise, from computer science and statistics, that is often impenetrable to biologists. There are many useful tools and approaches being developed, but we now need more inter-disciplinary exchange to use them effectively. In this review we show how this quiet revolution has provided tools available to any personal computer user. We also discuss the oft-neglected issue of quantifying algorithm robustness and the exciting possibilities offered through the integration of physiological information generated by biosensors with object detection and tracking.