25 research outputs found

    Uncertainty in cell confluency measurements

    Pharmaceutical industries have declared their need for metrology in the cellular field, to reduce the time and cost of developing new drugs through high-content screening technologies. Cell viability and proliferation tests largely use the confluency of cells on a two-dimensional (2D) surface as a biological measurand. Confluency is measured from images of the 2D surface acquired via microscopy techniques. The plethora of algorithms already in use aims at recognizing objects in images and identifying a threshold to distinguish objects from the background. The reference method is visual assessment by an operator, and no objective uncertainty estimation is yet available. A method to estimate the image-analysis contribution to confluency uncertainty is proposed here. A maximum and a minimum threshold are identified from a visual assessment of the free edges of the cells. An application to a fluorescence microscopy image of 2D PT-45 cell cultures is reported. Results show that the method is a promising solution for associating an uncertainty with cell confluency measurements, enhancing the reliability and efficiency of high-content screening technologies.
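
    The thresholding idea above lends itself to a minimal numerical sketch: assuming confluency is simply the fraction of image area above an intensity threshold, evaluating it at the visually identified minimum and maximum thresholds brackets the image-analysis contribution to the uncertainty. The file name and threshold values below are hypothetical.

```python
import numpy as np
from skimage import io

def confluency(image, threshold):
    """Fraction of pixels classified as cell (intensity above threshold)."""
    return float(np.mean(image > threshold))

# Hypothetical image file and visually assessed threshold bounds.
image = io.imread("pt45_fluorescence.tif", as_gray=True)
t_min, t_max = 0.12, 0.25          # thresholds bracketing the cells' free edges

c_low = confluency(image, t_max)   # stricter threshold -> lower confluency
c_high = confluency(image, t_min)  # looser threshold -> higher confluency

estimate = (c_low + c_high) / 2
half_range = (c_high - c_low) / 2
print(f"confluency = {estimate:.3f} +/- {half_range:.3f} (image-analysis contribution)")
```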

    STSE: Spatio-Temporal Simulation Environment Dedicated to Biology

    Background: Recently, the availability of high-resolution microscopy, together with advances in the development of biomarkers as reporters of biomolecular interactions, has increased the importance of imaging methods in molecular cell biology. These techniques enable the investigation of cellular characteristics such as volume, size, and geometry, as well as the volume and geometry of intracellular compartments and the amount of proteins present, in a spatially resolved manner. Such detailed investigations have opened up many new areas of research in the study of spatial, complex, and dynamic cellular systems. One of the crucial challenges in the study of such systems is the design of a well-structured and optimized workflow to provide systematic and efficient hypothesis verification. Computer science can efficiently address this task by providing software that facilitates handling, analysis, and evaluation of biological data to the benefit of experimenters and modelers.

    Results: The Spatio-Temporal Simulation Environment (STSE) is a set of open-source tools for conducting spatio-temporal simulations in discrete structures based on microscopy images. The framework contains modules to digitize, represent, analyze, and mathematically model spatial distributions of biochemical species. Graphical user interface (GUI) tools provided with the software enable meshing of the simulation space based on the Voronoi concept. In addition, it supports automatic assignment of spatial information to the mesh from the images based on pixel luminosity (e.g. corresponding to molecular levels in microscopy images). STSE is freely available either as a stand-alone version or included in the Linux live distribution Systems Biology Operational Software (SB.OS) and can be downloaded from http://www.stse-software.org/. The Python source code, a comprehensive user manual, and video tutorials are also offered to the research community. We discuss the main concepts of the STSE design and workflow, and demonstrate its usefulness using the example of a signaling cascade leading to the formation of a morphological gradient of Fus3 within the cytoplasm of the mating yeast cell Saccharomyces cerevisiae.

    Conclusions: STSE is an efficient and powerful novel platform designed for computational handling and evaluation of microscopy images. It allows for an uninterrupted workflow including digitization, representation, analysis, and mathematical modeling. By providing the means to relate the simulation to the image data, it allows for systematic, image-driven model validation or rejection. STSE can be scripted and extended using the Python language. STSE should be considered an API, together with workflow guidelines and a collection of GUI tools, rather than a stand-alone application. The priority of the project is to provide an easy and intuitive way of extending and customizing the software using the Python language.
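
    As an illustration of the meshing step described above, the following sketch builds a discrete Voronoi tessellation of an image from a set of seed points and attaches a mean pixel luminosity to each cell. It uses NumPy and SciPy rather than the STSE API, and the image and seed points are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

# Synthetic grayscale image and seed points (e.g. detected nuclei centres).
rng = np.random.default_rng(0)
image = rng.random((256, 256))
seeds = rng.uniform(0, 256, size=(50, 2))

# Assign every pixel to its nearest seed: a discrete Voronoi tessellation.
yy, xx = np.mgrid[0:256, 0:256]
pixels = np.column_stack([yy.ravel(), xx.ravel()])
_, cell_id = cKDTree(seeds).query(pixels)
cell_id = cell_id.reshape(image.shape)

# Attach a mean luminosity to each Voronoi cell (a proxy for molecular levels).
mean_luminosity = np.array([image[cell_id == i].mean() for i in range(len(seeds))])
print(mean_luminosity[:5])
```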

    Interactive machine learning for fast and robust cell profiling

    Automated profiling of cell morphology is a powerful tool for inferring cell function. However, this technique retains a high barrier to entry. In particular, configuring image processing parameters for optimal cell profiling is susceptible to cognitive biases and dependent on user experience. Here, we use interactive machine learning to identify the optimum cell profiling configuration that maximises the quality of the cell profiling outcome. The process is guided by the user, from whom a rating of the quality of a cell profiling configuration is obtained. We use Bayesian optimisation, an established machine learning algorithm, to learn from this information and automatically recommend the next configuration to examine, with the aim of maximising the quality of the processing or analysis. Compared to existing interactive machine learning tools that require domain expertise for per-class or per-pixel annotations, we rely on users’ explicit assessment of the output quality of the cell profiling task at hand. We validated our interactive approach against the standard human trial-and-error scheme by optimising an object segmentation task using the standard software CellProfiler. Our toolkit enabled rapid optimisation of an object segmentation pipeline, increasing the quality of object segmentation over a pipeline optimised through trial-and-error. Users also attested to the ease of use and reduced cognitive load enabled by our machine learning strategy over the standard approach. We envision that our interactive machine learning approach can enhance the quality and efficiency of pipeline optimisation to democratise image-based cell profiling.
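
    A minimal sketch of the interactive loop follows, using scikit-optimize's gp_minimize as a stand-in Bayesian optimiser and a console prompt as the user rating. The parameter names and ranges are illustrative, and the actual CellProfiler integration is omitted.

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# Hypothetical segmentation parameters to tune (names and ranges are illustrative).
space = [
    Integer(5, 50, name="min_diameter"),
    Integer(20, 200, name="max_diameter"),
    Real(0.1, 0.9, name="threshold_correction"),
]

def user_rating(params):
    """Run the segmentation with `params`, show the result, and ask the user
    for a quality score (1 = poor, 10 = excellent)."""
    min_d, max_d, corr = params
    # ... run the segmentation pipeline with these settings and display it ...
    score = float(input(f"Rate segmentation for {params} (1-10): "))
    return -score  # gp_minimize minimises, so negate the rating

result = gp_minimize(user_rating, space, n_calls=15, random_state=0)
print("Best configuration found:", result.x)
```

    In the actual tool the rating would come from a GUI rather than a console prompt, but the optimisation loop is the same: each user score updates the surrogate model, which proposes the next configuration to try.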

    Automated annotation of gene expression image sequences via non-parametric factor analysis and conditional random fields

    Motivation: Computational approaches for the annotation of phenotypes from image data have shown promising results across many applications, and provide rich and valuable information for studying gene function and interactions. While data are often available both at high spatial resolution and across multiple time points, phenotypes are frequently annotated independently, for individual time points only. In particular, for the analysis of developmental gene expression patterns, it is biologically sensible to account for images across multiple time points jointly, so that spatial and temporal dependencies are captured simultaneously. Methods: We describe a discriminative undirected graphical model to label gene-expression time-series image data, with an efficient training and decoding method based on the junction tree algorithm. The approach is based on an effective feature selection technique, consisting of a non-parametric sparse Bayesian factor analysis model. The result is a flexible framework which can handle large-scale data with noisy, incomplete samples, i.e. it can tolerate data missing from individual time points. Results: Using the annotation of gene expression patterns across stages of Drosophila embryonic development as an example, we demonstrate that our method achieves superior accuracy, gained by jointly annotating phenotype sequences, when compared with previous models that annotate each stage in isolation. The experimental results on missing data indicate that our joint learning method successfully annotates genes for which no expression data are available for one or more stages.
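
    For a linear chain over developmental stages, junction-tree decoding reduces to the Viterbi algorithm. The toy sketch below decodes the most probable label sequence from hypothetical per-stage unary scores and a pairwise potential that favours temporally consistent labels; it is not the paper's full undirected graphical model, only an illustration of joint decoding across time points.

```python
import numpy as np

def viterbi(unary, pairwise):
    """MAP label sequence for a linear-chain CRF.
    unary: (T, K) log-potentials per time point and label,
    pairwise: (K, K) log-potentials between consecutive labels."""
    T, K = unary.shape
    score = unary[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + pairwise + unary[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    labels = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        labels.append(int(backptr[t][labels[-1]]))
    return labels[::-1]

# Toy example: 6 developmental stages, 3 phenotype labels.
rng = np.random.default_rng(1)
unary = rng.normal(size=(6, 3))                  # e.g. per-stage classifier scores
pairwise = np.log(np.array([[0.8, 0.1, 0.1],
                            [0.1, 0.8, 0.1],
                            [0.1, 0.1, 0.8]]))   # favour temporally consistent labels
print(viterbi(unary, pairwise))
```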

    Automatic Annotation of Spatial Expression Patterns via Sparse Bayesian Factor Models

    Advances in reporters for gene expression have made it possible to document and quantify expression patterns in 2D–4D. In contrast to microarrays, which provide data for many genes but averaged and/or at low resolution, images reveal the high spatial dynamics of gene expression. Developing computational methods to compare, annotate, and model gene expression based on images is imperative, considering that available data are rapidly increasing. We have developed a sparse Bayesian factor analysis model in which the observed expression diversity among a large set of high-dimensional images is modeled by a small number of hidden common factors. We apply this approach to embryonic expression patterns from a Drosophila RNA in situ image database, and show that the automatically inferred factors provide a meaningful decomposition and represent common co-regulation or biological functions. The low-dimensional set of factor mixing weights is further used as features by a classifier to annotate expression patterns with functional categories. On human-curated annotations, our sparse approach reaches similar or better classification of expression patterns at different developmental stages when compared to other automatic image annotation methods using thousands of hard-to-interpret features. Our study therefore outlines a general framework for large microscopy data sets, in which both the generative model itself, as well as its application for analysis tasks such as automated annotation, can provide insight into biological questions.
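
    A rough sketch of the two-stage idea (decompose images into a few factors, then classify on the mixing weights) is given below, using scikit-learn's sparse dictionary learning as a stand-in for the sparse Bayesian factor model; the images, factor count, and labels are synthetic.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 200 flattened expression images, 400 pixels each.
rng = np.random.default_rng(0)
images = rng.random((200, 400))
labels = rng.integers(0, 2, size=200)   # hypothetical functional category

# Decompose the images into a small number of factors with sparse mixing weights.
fa = DictionaryLearning(n_components=10, alpha=1.0, random_state=0)
weights = fa.fit_transform(images)      # per-image factor mixing weights

# Use the low-dimensional mixing weights as features for annotation.
clf = LogisticRegression(max_iter=1000).fit(weights, labels)
print("training accuracy:", clf.score(weights, labels))
```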

    A Bayesian method for inferring quantitative information from FRET data

    Background: Understanding biological networks requires identifying their elementary protein interactions and establishing the timing and strength of those interactions. Fluorescence microscopy and Förster resonance energy transfer (FRET) have the potential to reveal such information because they allow molecular interactions to be monitored in living cells, but it is unclear how best to analyze FRET data. Existing techniques differ in assumptions, manipulations of data and the quantities they derive. To address this variation, we have developed a versatile Bayesian analysis based on clear assumptions and systematic statistics.

    Results: Our algorithm infers values of the FRET efficiency and dissociation constant, Kd, between a pair of fluorescently tagged proteins. It gives a posterior probability distribution for these parameters, conveying more extensive information than single-value estimates can. The width and shape of the distribution reflect the reliability of the estimate, and we used simulated data to determine how measurement noise, data quantity and fluorophore concentrations affect the inference. We are able to show why varying concentrations of donors and acceptors is necessary for estimating Kd. We further demonstrate that the inference improves if additional knowledge is available, for example of the FRET efficiency, which could be obtained from separate fluorescence lifetime measurements.

    Conclusions: We present a general, systematic approach for extracting quantitative information on molecular interactions from FRET data. Our method yields both an estimate of the dissociation constant and the uncertainty associated with that estimate. The information produced by our algorithm can help design optimal experiments and is fundamental for developing mathematical models of biochemical networks.
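
    The following sketch illustrates the kind of inference described above with a grid-based posterior over the FRET efficiency E and Kd, assuming a simple 1:1 binding model, flat priors, and Gaussian noise; the titration data are simulated and the likelihood is not the paper's exact model.

```python
import numpy as np

def bound_complex(D, A, Kd):
    """Equilibrium concentration of the donor-acceptor complex (1:1 binding)."""
    b = D + A + Kd
    return (b - np.sqrt(b * b - 4.0 * D * A)) / 2.0

# Illustrative titration: fixed donor, varying acceptor (arbitrary units).
D, A = 1.0, np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
E_true, Kd_true, sigma = 0.4, 2.0, 0.01
rng = np.random.default_rng(0)
signal = E_true * bound_complex(D, A, Kd_true) + rng.normal(0, sigma, A.size)

# Grid posterior over (E, Kd) with flat priors and Gaussian measurement noise.
E_grid = np.linspace(0.01, 1.0, 200)
Kd_grid = np.linspace(0.01, 10.0, 200)
E_m, Kd_m = np.meshgrid(E_grid, Kd_grid, indexing="ij")
pred = E_m[..., None] * bound_complex(D, A[None, None, :], Kd_m[..., None])
log_post = -0.5 * np.sum((signal - pred) ** 2, axis=-1) / sigma**2
post = np.exp(log_post - log_post.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print(f"MAP estimate: E ~ {E_grid[i]:.2f}, Kd ~ {Kd_grid[j]:.2f}")
```

    For more parameters, an MCMC sampler would replace the grid, but the output is the same in spirit: a full posterior whose width conveys the reliability of the Kd estimate.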

    Image Quality Ranking Method for Microscopy

    Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to analyze, or to eliminate from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help matters. We propose a simple method and a software tool for sorting images within a dataset according to their relative quality. We demonstrate the applicability of our method in finding good-quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as to five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a high-content screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics.
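
    As an illustration of ranking images by relative quality, the sketch below sorts a hypothetical set of images by the variance of their Laplacian, a simple sharpness proxy that stands in for the paper's actual quality metric.

```python
import numpy as np
from scipy.ndimage import laplace
from skimage import io

def sharpness(image):
    """Variance of the Laplacian: a simple blur/out-of-focus indicator."""
    return laplace(image.astype(float)).var()

# Hypothetical dataset: rank images from best to worst relative quality.
paths = ["img_001.tif", "img_002.tif", "img_003.tif"]
scores = {p: sharpness(io.imread(p, as_gray=True)) for p in paths}
for path, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:10.2f}  {path}")
```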

    3D imaging and quantitative analysis of intact tissues and organs

    Embryonic development and tumor growth are highly complex and dynamic processes that unfold in both time and space. To fully understand the molecular mechanisms that control these processes, it is crucial to study RNA expression and protein translation with single-cell spatiotemporal resolution. This is feasible with microscopic imaging, which enables multidimensional assessments of cells, tissues, and organs. Here, time-lapse calcium imaging and three-dimensional imaging were used to study the physiological development of the brain and the pathological development of cancer, respectively. In Paper I, spatiotemporal calcium imaging revealed a new mechanism of neurogenesis during brain development. In Paper II, a new clearing method for clinically stored specimens, DIPCO (diagnosing immunolabeled paraffin-embedded cleared organs), was developed that allows better characterization and staging of intact human tumors. In Paper III, the DIPCO method was applied to determine tumor stage and characterize the microlymphatic system in bladder cancer. In Paper IV, a novel method for RNA labeling of volumetric specimens, DIIFCO (diagnosing in situ and immunofluorescence-labeled cleared onco-sample), was developed to study RNA expression and localization in intact tumors. Overall, the aim of the thesis was to demonstrate that multidimensional imaging extends the understanding of both physiological and pathological developmental processes.