50 research outputs found

    Segmentation of Three Dimensional Cell Culture Models from a Single Focal Plane

    Three-dimensional cell culture models offer new opportunities for the development of computational techniques for segmentation and localization. These assays have a unique signature: a clump of cells that corresponds to a functioning colony. Often the nuclear compartment is labeled and then imaged with fluorescence microscopy to provide context for protein localization. Colonies are first delineated from the background using the level set method. Within each colony, nuclear regions are then bounded by their centers of mass through radial voting, and a local neighborhood for each nucleus is established through Voronoi tessellation. Finally, the level set method is applied again within each Voronoi region to delineate the nuclear compartment. The paper concludes by applying the proposed method to experimental data, demonstrating a stable solution when iterative radial voting and level set methods are used synergistically.
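    The Voronoi step in the pipeline above can be sketched in a few lines: given detected nuclear centers, every pixel is assigned to its nearest center, carving the colony into per-nucleus neighborhoods for the final level set pass (a minimal NumPy sketch; the function name and toy centers are illustrative, not from the paper):

```python
import numpy as np

def voronoi_labels(shape, centers):
    """Assign each pixel to its nearest nuclear center (discrete Voronoi tessellation)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # squared distance from every pixel to every center: shape (n_centers, H, W)
    d = np.stack([(yy - cy) ** 2 + (xx - cx) ** 2 for cy, cx in centers])
    return d.argmin(axis=0)  # label image: pixel -> index of nearest center

labels = voronoi_labels((100, 100), [(20, 20), (80, 70)])
```

    In the paper's pipeline this tessellation would be restricted to the colony mask produced by the first level set stage.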



    An Interactive Java Statistical Image Segmentation System: GemIdent

    Supervised learning can be used to segment and identify regions of interest in images using both color and morphological information. A novel object-identification algorithm was developed in Java to locate immune and cancer cells in images of immunohistochemically stained lymph node tissue from a recent study published by Kohrt et al. (2005). The algorithm is also showing promise in other domains. The success of the method depends heavily on the use of color, on the relative homogeneity of object appearance, and on interactivity. As is often the case in segmentation, an algorithm specifically tailored to the application works better than broader methods that work passably well on any problem. Our main innovation is interactive feature extraction from color images. We also enable the user to improve the classification with an interactive visualization system. This is coupled with statistical learning algorithms and intensive feedback from the user over many classification-correction iterations, resulting in a highly accurate and user-friendly solution. The system ultimately provides the locations of every cell recognized in the entire tissue in a text file tailored to be easily imported into R (Ihaka and Gentleman 1996; R Development Core Team 2009) for further statistical analyses. These data are invaluable in the study of spatial and multidimensional relationships between cell populations and tumor structure. The system is available at http://www.GemIdent.com together with three demonstration videos and a manual. The code is now open-sourced and available on GitHub at: https://github.com/kapelner/GemIden
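    The interactive color-based training loop can be caricatured as nearest-centroid classification in RGB space: the user marks example pixels per class, each class is summarized by a color centroid, and every pixel is assigned to the closest one (an illustrative NumPy sketch of the general idea, not GemIdent's actual algorithm; all names are hypothetical):

```python
import numpy as np

def train_centroids(pixels, labels):
    """Mean RGB per class, computed from user-marked training pixels."""
    classes = np.unique(labels)
    return classes, np.array([pixels[labels == c].mean(axis=0) for c in classes])

def classify(pixels, classes, centroids):
    """Assign each pixel to the class with the nearest color centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

    GemIdent's classification-correction iterations would repeat this cycle: classify, let the user correct mistakes, and retrain on the enlarged example set.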

    Computational Methods for Delineating Multiple Nuclear Phenotypes from Different Imaging Modalities

    Characterizing histopathology or organoid models of breast cancer can provide fundamental knowledge that will lead to a better understanding of tumors, response to therapeutic agents, and discovery of new targeted therapies. To this aim, the delineation of nuclei is of significant interest since it provides rich information about aberrant microanatomy or colony formation. For example, (i) cancer cells tend to be larger and, if coupled with high chromatin content, may indicate aneuploidy; (ii) cellular density can be the result of rapid proliferation; (iii) nuclear micro-texture can be a surrogate for fluctuation of heterochromatin patterns, where epigenetic aberrations in cancers are sometimes correlated with alterations in heterochromatin distribution; and (iv) normalized colony formation of cancer cells, in 3D culture, can serve as a surrogate metric for tumor suppression. This evidence suggests that nuclear segmentation and profiling is a major step toward subsequent bioinformatics analysis. However, there are two barriers: technical variation during sample preparation, and biological heterogeneity, since no two patients or samples are alike. As a result of these complexities, extending deep learning methodologies will have a significant impact on the robust characterization and profiling of pathology sections and organoid models. In this presentation, we demonstrate that integrating regional and contextual representations, within the framework of a deep encoder-decoder architecture, contributes to robust delineation of various nuclear phenotypes from both bright-field and confocal microscopy. The deep encoder-decoder architecture can infer the perceptual boundaries that are necessary to decompose clumps of nuclei. The method has been validated on pathology sections and organoid models of human mammary epithelial cells.
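    The encoder-decoder data flow can be illustrated without any learned weights: the encoder pools the image to coarser grids that capture regional context, and the decoder upsamples back to full resolution while a skip connection re-injects fine detail (a toy NumPy sketch of the tensor shapes only; the real architecture learns convolutional filters at every stage):

```python
import numpy as np

def pool2(x):
    """2x2 max pooling: the encoder trades resolution for regional context."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def up2(x):
    """Nearest-neighbor upsampling: the decoder recovers spatial resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def encoder_decoder(img):
    e1 = pool2(img)   # half resolution
    e2 = pool2(e1)    # quarter resolution: widest context
    d1 = up2(e2)[:e1.shape[0], :e1.shape[1]] + e1  # skip connection re-injects detail
    return up2(d1)[:img.shape[0], :img.shape[1]]

out = encoder_decoder(np.arange(64.0).reshape(8, 8))
```

    The skip connections are what let such architectures recover the perceptual boundaries between clumped nuclei that are lost at the coarsest scale.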

    Deep Learning for Detection and Segmentation in High-Content Microscopy Images

    High-content microscopy has led to many advances in biology and medicine. This fast-emerging technology is transforming cell biology into a big-data-driven science. Computer vision methods are used to automate the analysis of microscopy image data. In recent years, deep learning became popular and has had major success in computer vision. Most of the available methods were developed to process natural images. Compared to natural images, microscopy images pose domain-specific challenges such as small training datasets, clustered objects, and class imbalance. In this thesis, new deep learning methods for object detection and cell segmentation in microscopy images are introduced. For particle detection in fluorescence microscopy images, a deep learning method based on a domain-adapted Deconvolution Network is presented. In addition, a method for mitotic cell detection in heterogeneous histopathology images is proposed, which combines a deep residual network with Hough voting. The method is used for grading of whole-slide histology images of breast carcinoma. Moreover, a method for both particle detection and cell detection based on object centroids is introduced, which is trainable end-to-end. It comprises a novel Centroid Proposal Network, a layer for ensembling detection hypotheses over image scales and anchors, an anchor regularization scheme which favours prior anchors over regressed locations, and an improved algorithm for Non-Maximum Suppression. Furthermore, a novel loss function based on Normalized Mutual Information is proposed which can cope with strong class imbalance and is derived within a Bayesian framework. For cell segmentation, a deep neural network with an increased receptive field to capture rich semantic information is introduced. Moreover, a deep neural network is proposed which combines both paradigms of multi-scale feature aggregation of Convolutional Neural Networks and iterative refinement of Recurrent Neural Networks. To increase the robustness of the training and improve segmentation, a novel focal loss function is presented. In addition, a framework for black-box hyperparameter optimization for biomedical image analysis pipelines is proposed. The framework has a modular architecture that separates hyperparameter sampling from hyperparameter optimization. A visualization of the loss function based on infimum projections is suggested to obtain further insights into the optimization problem. Also, a transfer learning approach is presented, which uses only one color channel for pre-training and performs fine-tuning on more color channels. Furthermore, an approach for unsupervised domain adaptation for histopathological slides is presented. Finally, Galaxy Image Analysis is presented, a platform for web-based microscopy image analysis. Galaxy Image Analysis workflows have been developed for cell segmentation in cell cultures, particle detection in mouse brain tissue, and MALDI/H&E image registration. The proposed methods were applied to challenging synthetic as well as real microscopy image data from various microscopy modalities. The proposed methods yield state-of-the-art or improved results. They were benchmarked in international image analysis challenges and used in various cooperation projects with biomedical researchers.
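    Of the many components above, the focal-loss idea is compact enough to sketch: the cross-entropy term is scaled by (1 - p_t)^gamma, so well-classified (easy, majority-class) examples contribute little and training concentrates on hard ones (the standard binary formulation shown here for illustration; the thesis proposes its own variant):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss: cross-entropy down-weighted by (1 - p_t)**gamma."""
    p_t = np.where(y == 1, p, 1.0 - p)  # predicted probability of the true class
    return float(np.mean(-(1.0 - p_t) ** gamma * np.log(p_t)))
```

    Setting gamma to zero recovers plain cross-entropy; larger gamma suppresses easy examples more aggressively, which is what makes the loss useful under strong class imbalance.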

    Image analysis and statistical modeling for applications in cytometry and bioprocess control

    Today, signal processing has a central role in many of the advancements in systems biology. Modern signal processing is required to provide efficient computational solutions to unravel complex problems that are either arduous or impossible to solve using conventional approaches. For example, imaging-based high-throughput experiments enable cells to be examined at even the subcellular level, yielding huge amounts of image data. Cytometry is an integral part of such experiments and involves measurement of different cell parameters, which requires extraction of quantitative experimental values from cell microscopy images. To do that for such a large number of images, fast and accurate automated image analysis methods are required. In another example, modeling of bioprocesses and their scale-up is a challenging task where different scales have different parameters and there are often more variables than available observations, thus requiring special methodology. In many biomedical cell microscopy studies, it is necessary to analyze the images at the single-cell or even subcellular level since, owing to the heterogeneity of cell populations, population-averaged measurements are often inconclusive. Moreover, the emergence of imaging-based high-content screening experiments, especially for drug design, has put single-cell analysis at the forefront, since it is required to study the dynamics of single-cell gene expression for tracking and quantification of cell phenotypic variations. The ability to perform single-cell analysis depends on the accuracy of image segmentation in detecting individual cells. However, clumping of cells at both the nuclei and cytoplasm level hinders accurate cell image segmentation. Part of this thesis work concentrates on developing accurate automated methods for segmentation of bright-field as well as multichannel fluorescence microscopy images of cells, with an emphasis on clump splitting so that cells are separated from each other as well as from the background. The complexity of bioprocess development and control calls for the use of computational modeling and data analysis approaches for process optimization and scale-up. This is also asserted by the fact that obtaining the a priori knowledge needed for the development of traditional scale-up criteria may at times be difficult. Moreover, efficient process modeling may provide the added advantage of automatically identifying influential control parameters. Determining the values of the identified parameters, and the ability to predict them at different scales, helps in process control and in achieving scale-up. Bioprocess modeling and control can also benefit from single-cell analysis, where the latter could add a new dimension to the former once imaging-based in-line sensors allow for monitoring of key variables governing the processes. In this thesis we exploited signal processing techniques for statistical modeling of bioprocesses and their scale-up, as well as for development of fully automated methods for biomedical cell microscopy image segmentation, from image pre-processing and initial segmentation to clump splitting and image post-processing, with the goal of facilitating high-throughput analysis. To highlight the contribution of this work, we present three application case studies where we applied the developed methods to solve problems of cell image segmentation and bioprocess modeling and scale-up.
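    The pre-processing and initial-segmentation stages described above can be sketched with thresholding plus morphological clean-up (illustrative only; the thesis develops considerably more elaborate methods, including dedicated clump splitting, and the threshold here is assumed given rather than estimated):

```python
import numpy as np
from scipy import ndimage as ndi

def initial_segmentation(img, thresh):
    """Threshold, remove speckle, fill holes, then label connected components."""
    mask = img > thresh
    mask = ndi.binary_opening(mask)      # remove isolated noise pixels
    mask = ndi.binary_fill_holes(mask)   # close interior gaps in each cell
    labels, n_cells = ndi.label(mask)    # one integer label per connected cell
    return labels, n_cells
```

    Clump splitting would then operate on each labeled component, deciding whether it contains one cell or several touching cells.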

    Using cilia mutants to study left-right asymmetry in zebrafish

    A thesis submitted in fulfillment of the requirements for the Master's degree in Molecular Genetics and Biomedicine.
    In vertebrates, internal organs are positioned asymmetrically across the left-right (L-R) body axis. Events determining L-R asymmetry occur during embryogenesis and are regulated by the coordinated action of genetic mechanisms. Embryonic motile cilia are essential in this process by generating a directional fluid flow inside the zebrafish organ of asymmetry, called Kupffer's vesicle (KV). Correct L-R formation is highly dependent on signaling pathways downstream of such flow; however, a detailed characterization of how its dynamics modulate these mechanisms is still lacking. In this project, fluid flow measurements were achieved by a non-invasive method in four genetic backgrounds: wild-type (WT), deltaD-/- mutants, Dnah7 morphants (MO), and control-MO embryos. Knockdown of Dnah7, a heavy-chain inner axonemal dynein, renders cilia completely immotile and depletes the KV directional fluid flow, which we characterize here for the first time. By following the development of each embryo, we show that flow dynamics in the KV is already asymmetric and provides a very good prediction of organ laterality. Through novel experiments, we characterized a new population of motile cilia, an immotile population, a range of cilia beat frequencies and lengths, KV volumes, and cilia numbers in live embryos. These data were crucial to perform fluid dynamics simulations, which suggested that the flow in embryos with 30 or more cilia reliably produces left situs; with fewer cilia, left situs is sometimes compromised through disruption of the dorsal anterior clustering of motile cilia. A rough estimate based upon the 30-cilium threshold and statistics of cilium number predicts 90% and 60% left situs in WT and deltaD-/- respectively, as observed experimentally. Cilia number and clustering are therefore critical to normal situs via robust asymmetric flow. Thus, our results support a model in which asymmetric flow forces registered in the KV pattern organ laterality in each embryo.

    Robust inversion and detection techniques for improved imaging performance

    Thesis (Ph.D.)--Boston University.
    In this thesis we aim to improve the performance of information extraction from imaging systems through three thrusts. First, we develop improved image formation methods for physics-based, complex-valued sensing problems. We propose a regularized inversion method that incorporates prior information about the underlying field into the inversion framework for ultrasound imaging. We use experimental ultrasound data to compute inversion results with the proposed formulation and compare them with conventional inversion techniques to show the robustness of the proposed technique to loss of data. Second, we propose methods that combine inversion and detection in a unified framework to improve imaging performance. This framework is applicable to cases where the underlying field is label-based, such that each pixel can only assume values from a discrete, limited set. We consider this unified framework in the context of combinatorial optimization and propose graph-cut based methods that produce label-based images directly, thereby eliminating the need for a separate detection step. Finally, we propose a robust method of object detection from microscopic nanoparticle images. In particular, we focus on a portable, low-cost interferometric imaging platform and propose robust detection algorithms using tools from computer vision. We model the electromagnetic image formation process and use this model to create an enhanced detection technique. The effectiveness of the proposed technique is demonstrated using manually labeled ground-truth data. In addition, we extend these tools to develop a detection-based autofocusing algorithm tailored for the high numerical aperture interferometric microscope.
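    The first thrust, inversion with a prior on the underlying field, can be sketched as a classical Tikhonov-style problem: minimize ||Ax - y||^2 + lam * ||Lx||^2, where L encodes the prior (here a generic first-difference smoothness operator; the thesis develops a more specific formulation for complex-valued ultrasound data):

```python
import numpy as np

def regularized_inversion(A, y, lam):
    """Solve min ||A x - y||^2 + lam * ||L x||^2 via the normal equations."""
    n = A.shape[1]
    L = np.diff(np.eye(n), axis=0)  # (n-1, n) first-difference smoothness prior
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)
```

    With lam = 0 this reduces to ordinary least squares; increasing lam trades data fit for agreement with the prior, which is what lends robustness to loss of data.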