
    Millisecond single-molecule localization microscopy combined with convolution analysis and automated image segmentation to determine protein concentrations in complexly structured, functional cells, one cell at a time

    We present a single-molecule tool called the CoPro (Concentration of Proteins) method that uses millisecond imaging with convolution analysis, automated image segmentation and super-resolution localization microscopy to generate robust estimates of protein concentration in different compartments of single living cells, validated using realistic simulations of complex multi-compartment cell types. We demonstrate its utility experimentally on model Escherichia coli bacteria and Saccharomyces cerevisiae budding yeast cells, and use it to address the biological question of how signals are transduced in cells. Cells in all domains of life dynamically sense their environment through signal transduction mechanisms, many involving gene regulation. The glucose sensing mechanism of S. cerevisiae is a model system for studying gene regulatory signal transduction. It uses the multi-copy expression inhibitor of the GAL gene family, Mig1, to repress unwanted genes in the presence of elevated extracellular glucose concentrations. We fluorescently labelled Mig1 molecules with green fluorescent protein (GFP) via chromosomal integration at physiological expression levels in living S. cerevisiae cells, in addition to the RNA polymerase protein Nrd1 with the fluorescent protein reporter mCherry. Using CoPro, we make quantitative estimates of Mig1 and Nrd1 protein concentrations in the cytoplasm and nucleus compartments on a cell-by-cell basis under physiological conditions. These estimates indicate a 4-fold shift towards higher concentrations of diffusive Mig1 in the nucleus when the external glucose concentration is raised, whereas the equivalent levels in the cytoplasm shift to smaller values with a relative change an order of magnitude smaller. This contrasts with Nrd1, which is not involved directly in glucose sensing and is almost exclusively localized in the nucleus under high and…
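
    The central quantity such a tool reports is a per-compartment protein concentration. As a minimal, hedged sketch of only that final conversion step (not the CoPro method itself; the molecule count and compartment volume are assumed to come from localization microscopy and segmentation, and the numbers below are purely illustrative):

```python
AVOGADRO = 6.022e23  # molecules per mole

def concentration_nM(n_molecules, volume_um3):
    """Convert a molecule count in a segmented compartment to a
    nanomolar concentration; volume_um3 is the compartment volume
    in cubic micrometres (1 um^3 = 1e-15 L)."""
    volume_litres = volume_um3 * 1e-15
    return n_molecules / (AVOGADRO * volume_litres) * 1e9

# Illustrative numbers only: ~300 detected molecules in a ~2.9 um^3 nucleus
print(f"{concentration_nM(300, 2.9):.0f} nM")
```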

    Accurate cell segmentation in microscopy images using membrane patterns

    Motivation: Identifying cells in an image (cell segmentation) is essential for quantitative single-cell biology via optical microscopy. Although a plethora of segmentation methods exists, accurate segmentation is challenging and usually requires problem-specific tailoring of algorithms. In addition, most current segmentation algorithms rely on a few basic approaches that use the gradient field of the image to detect cell boundaries. However, many microscopy protocols can generate images with characteristic intensity profiles at the cell membrane. This has not yet been algorithmically exploited to establish more general segmentation methods. Results: We present an automatic cell segmentation method that decodes the information across the cell membrane and guarantees optimal detection of the cell boundaries on a per-cell basis. Graph cuts account for the information of the cell boundaries through directional cross-correlations, and they automatically incorporate spatial constraints. The method accurately segments images of various cell types grown in dense cultures that are acquired with different microscopy techniques. In quantitative benchmarks and comparisons with established methods on synthetic and real images, we demonstrate significantly improved segmentation performance despite cell-shape irregularity, cell-to-cell variability and image noise. As a proof of concept, we monitor the internalization of green fluorescent protein-tagged plasma membrane transporters in single yeast cells. Availability and implementation: Matlab code and examples are available at http://www.csb.ethz.ch/tools/cellSegmPackage.zip. Contact: [email protected] or [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
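
    To illustrate the underlying idea of scoring boundary positions by the intensity pattern across the membrane rather than by raw gradients, the sketch below cross-correlates a radial intensity profile with a membrane template. This covers only a loose version of the scoring step under assumed inputs; the published method embeds such directional cross-correlations in a per-cell graph-cut optimization, which is not reproduced here.

```python
import numpy as np

def boundary_score(radial_profile, membrane_template):
    """Cross-correlate a radial intensity profile (sampled along a ray from a
    cell seed outward) with a reference membrane intensity template.
    Returns the offset of the best match (a candidate boundary position along
    the ray) and the correlation peak height (its confidence)."""
    profile = (radial_profile - radial_profile.mean()) / (radial_profile.std() + 1e-9)
    template = (membrane_template - membrane_template.mean()) / (membrane_template.std() + 1e-9)
    corr = np.correlate(profile, template, mode="valid")
    return int(corr.argmax()), float(corr.max())
```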

    Bright Field Microscopy as an Alternative to Whole Cell Fluorescence in Automated Analysis of Macrophage Images

    Fluorescence microscopy is the standard tool for detection and analysis of cellular phenomena. This technique, however, has a number of drawbacks such as the limited number of available fluorescent channels in microscopes, overlapping excitation and emission spectra of the stains, and phototoxicity. We here present and validate a method to automatically detect cell population outlines directly from bright field images. By imaging samples at several focus levels to form a bright field z-stack, and by measuring the intensity variations of this stack over the z-dimension, we construct a new two-dimensional projection image of increased contrast. With additional information on the location of each cell, such as stained nuclei, this bright field projection image can be used instead of whole cell fluorescence to locate the borders of individual cells, separating touching cells and enabling single-cell analysis. Using the popular CellProfiler freeware cell image analysis software, mainly targeted at fluorescence microscopy, we validate our method by automatically segmenting low-contrast and rather complex-shaped murine macrophage cells. The proposed approach frees up a fluorescence channel, which can be used for subcellular studies. It also facilitates cell shape measurement in experiments where whole cell fluorescent staining is either not available, or is dependent on a particular experimental condition. We show that whole cell area detection results using our projected bright field images match closely those of the standard approach where cell areas are localized using fluorescence, and conclude that the high contrast bright field projection image can directly replace one fluorescent channel in whole cell quantification. Matlab code for calculating the projections can be downloaded from the supplementary site: http://sites.google.com/site/brightfieldorstaining
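
    The projection step described above is simple to sketch; assuming the stack is a NumPy array ordered (z, y, x), the per-pixel intensity variation along z yields the high-contrast two-dimensional image (a minimal sketch of the idea, not the authors' exact Matlab implementation):

```python
import numpy as np

def brightfield_projection(stack):
    """Project a bright-field z-stack (z, y, x) to a single 2D image by
    measuring per-pixel intensity variation along the z-dimension;
    out-of-focus intensity changes at cell borders give high values."""
    return stack.astype(np.float32).std(axis=0)
```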

    MAARS: a novel high-content acquisition software for the analysis of mitotic defects in fission yeast

    Faithful segregation of chromosomes during cell division relies on multiple processes such as chromosome attachment and correct spindle positioning. Yet mitotic progression is defined by multiple parameters, which need to be quantitatively evaluated. To study the spatiotemporal control of mitotic progression, we developed a high-content analysis (HCA) approach that combines automated fluorescence microscopy with real-time quantitative image analysis and allows the unbiased acquisition of multiparametric data at the single-cell level for hundreds of cells simultaneously. The Mitotic Analysis and Recording System (MAARS) provides automatic and quantitative single-cell analysis of mitotic progression on an open-source platform. It can be used to analyze specific characteristics such as cell shape, cell size, metaphase/anaphase delays, and mitotic abnormalities including spindle mispositioning, spindle elongation defects, and chromosome segregation defects. Using this HCA approach, we were able to visualize rare and unexpected events of error correction during anaphase in wild-type or mutant cells. Our study illustrates that such an expert system of mitotic progression is able to highlight the complexity of the mechanisms required to prevent chromosome loss during cell division

    A fully automated end-to-end process for fluorescence microscopy images of yeast cells: from segmentation to detection and classification

    In recent years, enormous numbers of fluorescence microscopy images have been collected in high-throughput lab settings. Analyzing and extracting relevant information from all images in a short time is almost impossible. Detecting tiny individual cell compartments is one of many challenges faced by biologists. This paper aims to solve this problem by building an end-to-end process that employs deep learning methods to automatically segment, detect and classify cell compartments in fluorescence microscopy images of yeast cells. To this end, we used Mask R-CNN to automatically segment and label a large amount of yeast cell data, and YOLOv4 to automatically detect and classify individual yeast cell compartments in these images. This fully automated end-to-end process is intended to be integrated into an interactive e-Science server in the PerICo project, so that biologists can complete their various classification tasks with minimal human effort in training and operation. In addition, we evaluated the detection and classification performance of state-of-the-art YOLOv4 on data from the NOP1pr-GFP-SWAT yeast-cell data library. Experimental results show that, by dividing the original images into four quadrants (a tiling well matched to the native resolution of the microscope and to current GPU memory sizes), YOLOv4 delivers good detection and classification results in both accuracy and speed, with an F1-score of 98%. Although the application domain is optical microscopy in yeast cells, the method is also applicable to multiple-cell images in medical applications.
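
    The quadrant tiling mentioned in the abstract is a simple preprocessing step; a minimal sketch (the function name and return format are illustrative, not the paper's code) could look like this:

```python
def split_into_quadrants(image):
    """Split a NumPy image array (H, W[, C]) into four quadrant tiles so each
    tile better matches the detector's input resolution; detections are mapped
    back to full-image coordinates by adding each tile's (row, col) offset."""
    h, w = image.shape[:2]
    h2, w2 = h // 2, w // 2
    return {
        (0, 0): image[:h2, :w2],
        (0, w2): image[:h2, w2:],
        (h2, 0): image[h2:, :w2],
        (h2, w2): image[h2:, w2:],
    }
```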

    A convolutional neural network for segmentation of yeast cells without manual training annotations

    MOTIVATION: Single-cell time-lapse microscopy is a ubiquitous tool for studying the dynamics of complex cellular processes. While imaging can be automated to generate very large volumes of data, the processing of the resulting movies to extract high-quality single-cell information remains a challenging task. The development of software tools that automatically identify and track cells is essential for realizing the full potential of time-lapse microscopy data. Convolutional neural networks (CNNs) are ideally suited for such applications, but require great amounts of manually annotated data for training, a time-consuming and tedious process. RESULTS: We developed a new approach to CNN training for yeast cell segmentation based on synthetic data and present (i) a software tool for the generation of synthetic images mimicking brightfield images of budding yeast cells and (ii) a convolutional neural network (Mask R-CNN) for yeast segmentation that was trained on a fully synthetic dataset. The Mask R-CNN performed excellently on segmenting actual microscopy images of budding yeast cells, and a density-based spatial clustering algorithm (DBSCAN) was able to track the detected cells across the frames of microscopy movies. Our synthetic data creation tool completely bypassed the laborious generation of manually annotated training datasets, and can be easily adjusted to produce images with many different features. The incorporation of synthetic data creation into the development pipeline of CNN-based tools for budding yeast microscopy is a critical step toward the generation of more powerful, widely applicable and user-friendly image processing tools for this microorganism. AVAILABILITY AND IMPLEMENTATION: The synthetic data generation code can be found at https://github.com/prhbrt/synthetic-yeast-cells. The Mask R-CNN as well as the tuning and benchmarking scripts can be found at https://github.com/ymzayek/yeastcells-detection-maskrcnn. We also provide Google Colab scripts that reproduce all the results of this work. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online
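
    The tracking step can be illustrated with scikit-learn's DBSCAN by clustering detected cell centroids in space and time; the feature scaling and parameter values below are assumptions made for the sketch, not the settings used in the paper:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def track_cells(centroids, frames, time_scale=1.0, eps=10.0):
    """Group per-frame cell centroids into tracks by clustering in
    (x, y, scaled frame index) space with DBSCAN, so that detections of the
    same cell in nearby frames fall into one cluster (track).
    centroids: (N, 2) array of x, y positions in pixels.
    frames: (N,) array of frame indices.
    Returns an (N,) array of track labels; -1 marks unassigned detections."""
    features = np.column_stack([np.asarray(centroids, dtype=float),
                                np.asarray(frames, dtype=float) * time_scale])
    return DBSCAN(eps=eps, min_samples=2).fit_predict(features)
```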

    Enabling high-throughput image analysis with deep learning-based tools

    Microscopes are a valuable tool in biological research, facilitating information gathering across different magnification scales, samples and markers in single-cell and whole-population studies. However, image acquisition and analysis are very time-consuming, so efficient solutions are needed to provide the speed-up that high-throughput microscopy requires. Throughout the work presented in this thesis, I developed new computational methods and software packages to facilitate high-throughput microscopy. My work comprised not only the development of these methods themselves but also their integration into the workflow of the lab, ranging from automating microscopy acquisition to deploying scalable analysis services and providing user-friendly local interfaces. The main focus of my thesis was YeastMate, a tool for automatic detection and segmentation of yeast cells and sub-type classification of their life-cycle transitions. Development of YeastMate was mainly driven by research on quality control mechanisms of the mitochondrial genome in S. cerevisiae, where yeast cells are imaged during their sexual and asexual reproduction life-cycle stages. YeastMate can automatically detect both single cells and life-cycle transitions, perform segmentation and enable pedigree analysis by determining origin and offspring cells. I developed a novel adaptation of the Mask R-CNN object detection model to integrate the classification of inter-cell connections into the usual detection and segmentation analysis pipelines. Another part of my work focused on the automation of the microscopes themselves, using deep learning models to detect wings of D. melanogaster. A microscope was programmed to acquire large overview images and then to acquire detailed images at higher magnification at the detected coordinates of each wing. The implementation of this workflow replaced the manual imaging of slides, which usually takes hours, with a fully automated, end-to-end solution.
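
    The overview-then-detail acquisition loop described for the wing-imaging workflow can be sketched as below; every method on the microscope and detector objects is a hypothetical placeholder, not an actual microscope-control API:

```python
def automated_wing_acquisition(microscope, detector):
    """Two-step acquisition sketch: take a low-magnification overview,
    detect targets (e.g. fly wings) in it, then revisit each detected
    stage coordinate at higher magnification.
    All methods on `microscope` and `detector` are hypothetical."""
    overview = microscope.acquire(magnification=2)    # large overview image
    coordinates = detector.detect(overview)           # list of (x, y) stage positions
    detailed = []
    for x, y in coordinates:
        microscope.move_stage(x, y)
        detailed.append(microscope.acquire(magnification=20))
    return detailed
```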