
    Hidden Markov Models for Analysis of Multimodal Biomedical Images

    Modern advances in imaging technology have enabled the collection of huge amounts of multimodal imagery of complex biological systems. The extraction of information from this data and subsequent analysis are essential in understanding the architecture and dynamics of these systems. Due to the sheer volume of the data, manual annotation and analysis is usually infeasible, and robust automated techniques are urgently needed. In this dissertation, we present three hidden Markov model (HMM)-based methods for automated analysis of multimodal biomedical images. First, we outline a novel approach to simultaneously classify and segment multiple cells of different classes in multi-biomarker images. A 2D HMM is set up on the superpixel lattice obtained from the input image. Parameters ensuring spatial consistency of labels and high confidence in local class selection are embedded in the HMM framework, and learnt with the objective of maximizing discrimination between classes. Optimal labels are inferred using the HMM, and are aggregated to obtain global multiple object segmentation. We then address the problem of automated spatial alignment of images from different modalities. We propose a probabilistic framework, constructed using a 2D HMM, for deformable registration of multimodal images. The HMM is tailored to capture deformation via state transitions, and modality-specific representation via class-conditional emission probabilities. The latter aspect is premised on the realization that different modalities may provide very different representations for a given class of objects. Parameters of the HMM are learned from data, and hence the method is applicable to a wide array of datasets. In the final part of the dissertation, we describe a method for automated segmentation and subsequent tracking of cells in a challenging target image modality, wherein useful information from a complementary (source) modality is effectively utilized to assist segmentation.
Labels are estimated in the source domain, and then transferred to generate preliminary segmentations in the target domain. A 1D HMM-based algorithm is used to refine segmentation boundaries in the target image, and subsequently track cells through a 3D image stack. This dissertation details techniques for classification, segmentation and registration that together form a comprehensive system for automated analysis of multimodal biomedical datasets.
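The 1D HMM refinement and tracking step described above lends itself to standard Viterbi decoding. The sketch below is illustrative only, not the dissertation's implementation: the emission scores, transition matrix, and function name are assumed for the example.

```python
import numpy as np

def viterbi(log_em, log_trans, log_prior):
    """Most-likely state path for a 1D HMM, computed in the log domain."""
    T, K = log_em.shape
    score = log_prior + log_em[0]          # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)     # backpointers
    for t in range(1, T):
        cand = score[:, None] + log_trans  # cand[i, j]: prev state i -> state j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_em[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):          # trace back the best path
        path[t - 1] = back[t, path[t]]
    return path

# Per-slice class scores for one cell through a 3-slice stack; the middle
# slice noisily favors class 1, but sticky transitions keep the track on 0.
em = np.log([[0.9, 0.1], [0.4, 0.6], [0.9, 0.1]])
trans = np.log([[0.9, 0.1], [0.1, 0.9]])
prior = np.log([0.5, 0.5])
path = viterbi(em, trans, prior)  # array([0, 0, 0])
```

The same machinery applies whether the sequence runs along a segmentation boundary or through the slices of a 3D stack; only the emission scores change.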

    Deep Learning in Single-Cell Analysis

    Single-cell technologies are revolutionizing the entire field of biology. The large volumes of data generated by single-cell technologies are high-dimensional, sparse, heterogeneous, and have complicated dependency structures, making analyses using conventional machine learning approaches challenging and impractical. In tackling these challenges, deep learning often demonstrates superior performance compared to traditional machine learning methods. In this work, we present a comprehensive survey of deep learning in single-cell analysis. We first introduce background on single-cell technologies and their development, as well as fundamental concepts of deep learning, including the most popular deep architectures. We present an overview of the single-cell analytic pipeline pursued in research applications, while noting divergences due to data sources or specific applications. We then review seven popular tasks spanning different stages of the single-cell analysis pipeline, including multimodal integration, imputation, clustering, spatial domain identification, cell-type deconvolution, cell segmentation, and cell-type annotation. Under each task, we describe the most recent developments in classical and deep learning methods and discuss their advantages and disadvantages. Deep learning tools and benchmark datasets are also summarized for each task. Finally, we discuss future directions and the most recent challenges. This survey will serve as a reference for biologists and computer scientists, encouraging collaborations. Comment: 77 pages, 11 figures, 15 tables, deep learning, single-cell analysis

    Model-based cell tracking and analysis in fluorescence microscopic

    CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging

    Author Posting. © The Authors, 2010. This is the author's version of the work. It is posted here by permission of Nature Publishing Group for personal use, not for redistribution. The definitive version was published in Nature Methods 7 (2010): 747-754, doi:10.1038/nmeth.1486.

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here, we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. The incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions, and confusion between different functional states with similar morphology. We demonstrate generic applicability in a set of different assays and perturbation conditions, including a candidate-based RNAi screen for mitotic exit regulators in human cells. CellCognition is published as open source software, enabling live imaging-based screening with assays that directly score cellular dynamics.

    Work in the Gerlich laboratory is supported by Swiss National Science Foundation (SNF) research grant 3100A0-114120, SNF ProDoc grant PDFMP3_124904, a European Young Investigator (EURYI) award of the European Science Foundation, an EMBO YIP fellowship, and a MBL Summer Research Fellowship to D.W.G., an ETH TH grant, a grant by the UBS foundation, a Roche Ph.D. fellowship to M.H.A.S., and a Mueller fellowship of the Molecular Life Sciences Ph.D. program Zurich to M.H. M.H. and M.H.A.S. are fellows of the Zurich Ph.D. Program in Molecular Life Sciences. B.F. was supported by the European Commission's seventh framework program project Cancer Pathways. Work in the Ellenberg laboratory is supported by a European Commission grant within the Mitocheck consortium (LSHG-CT-2004-503464). Work in the Peter laboratory is supported by the ETHZ, Oncosuisse, SystemsX.ch (LiverX) and the SNF.
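The core idea — incorporating time information to suppress classification noise at state transitions — can be illustrated with forward-backward smoothing over per-frame classifier confidences. This is a generic sketch under assumed class probabilities and a sticky transition matrix, not CellCognition's actual code.

```python
import numpy as np

def smooth_posteriors(frame_probs, trans, prior):
    """Forward-backward smoothing of per-frame class probabilities.

    frame_probs: (T, K) classifier confidence for each of K morphology classes
    trans:       (K, K) row-stochastic state transition matrix
    prior:       (K,)   initial class distribution
    Returns (T, K) smoothed posteriors p(state_t | all T frames).
    """
    T, K = frame_probs.shape
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = prior * frame_probs[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                  # forward pass
        alpha[t] = (alpha[t - 1] @ trans) * frame_probs[t]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):         # backward pass
        beta[t] = trans @ (beta[t + 1] * frame_probs[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Classifier confidences for interphase (0) vs mitosis (1) over six frames;
# frame 2 is a noisy mitosis call that the temporal model corrects.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7],
                  [0.85, 0.15], [0.2, 0.8], [0.1, 0.9]])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
smoothed = smooth_posteriors(probs, trans, np.array([0.5, 0.5]))
labels = smoothed.argmax(axis=1)  # [0, 0, 0, 0, 1, 1] vs raw [0, 0, 1, 0, 1, 1]
```

Frame-by-frame argmax would flicker to mitosis at frame 2; the smoothed posterior keeps the trajectory in interphase until the sustained transition at frame 4.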

    A proposal for a coordinated effort for the determination of brainwide neuroanatomical connectivity in model organisms at a mesoscopic scale

    In this era of complete genomes, our knowledge of neuroanatomical circuitry remains surprisingly sparse. Such knowledge is however critical both for basic and clinical research into brain function. Here we advocate for a concerted effort to fill this gap, through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brain-wide coverage, using injections of tracers or viral vectors. We detail the scientific and medical rationale and briefly review existing knowledge and experimental techniques. We define a set of desiderata, including brain-wide coverage; validated and extensible experimental techniques suitable for standardization and automation; centralized, open access data repository; compatibility with existing resources, and tractability with current informatics technology. We discuss a hypothetical but tractable plan for mouse, additional efforts for the macaque, and technique development for humans. We estimate that the mouse connectivity project could be completed within five years with a comparatively modest budget. Comment: 41 pages

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. This provides valuable insight into cellular and subcellular interactions and mechanisms of biological processes, such as cancer cell dissemination, organ development and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven to be non-trivial, and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve the segmentation accuracy by their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a limited set of inter-class constraints. To overcome this limitation, polytree graphical models are proposed in this thesis that capture label proximity relations more naturally compared to tree-based approaches. Polytrees can effectively impose the prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed form posteriors of graph nodes on polytrees.
Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed to employ polytrees for multi-class segmentation, reducing the need for training data by modeling prior knowledge during segmentation. A hierarchical graph is generated over the superpixels in the image; node labels are inferred through a novel, efficient message-passing algorithm, and the model parameters are optimized with expectation maximization (EM). Results of evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets show that the proposed method outperforms the state of the art. The proposed method has also been assessed on predicting likely segmentation errors, where it outperforms tree-based models. This can pave the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
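Restricted to a plain rooted tree (the thesis's polytree machinery generalizes this to nodes with multiple parents), the two-pass message-passing idea can be sketched as upward/downward sum-product over superpixel labels. The tree layout, potentials, and names below are hypothetical examples, not the thesis's algorithm.

```python
import numpy as np

def tree_posteriors(parent, evidence, trans, prior):
    """Exact two-pass sum-product on a rooted tree.

    parent:   parent[i] is the parent of node i (root has parent -1);
              nodes are assumed topologically ordered (parent index < child index)
    evidence: (N, K) local likelihood of each of K labels per node
    trans:    (K, K) parent-to-child label compatibility, rows sum to 1
    prior:    (K,)   label prior at the root
    Returns (N, K) marginal posteriors p(label_i | all evidence).
    """
    N, K = evidence.shape
    children = [[] for _ in range(N)]
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    up = np.ones((N, K))                   # upward pass: leaves to root
    for i in range(N - 1, -1, -1):
        up[i] = evidence[i]
        for c in children[i]:
            up[i] *= trans @ up[c]
        up[i] /= up[i].sum()
    down = np.ones((N, K))                 # downward pass: root to leaves
    post = np.zeros((N, K))
    for i in range(N):
        if parent[i] < 0:
            down[i] = prior
        post[i] = down[i] * up[i]
        post[i] /= post[i].sum()
        for c in children[i]:              # message to c excludes c's own subtree
            msg = down[i] * evidence[i]
            for s in children[i]:
                if s != c:
                    msg = msg * (trans @ up[s])
            down[c] = msg @ trans
            down[c] /= down[c].sum()
    return post

# Root node 0 with two leaf children: the root's local evidence weakly favors
# label 0, but both children strongly favor label 1 and pull the root over.
evidence = np.array([[0.6, 0.4], [0.2, 0.8], [0.3, 0.7]])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
post = tree_posteriors([-1, 0, 0], evidence, trans, np.array([0.5, 0.5]))
labels = post.argmax(axis=1)  # [1, 1, 1]
```

After both passes each node's posterior conditions on evidence from the entire graph, which is exactly why tree and polytree priors can override noisy local label scores.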