
    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight into cellular and subcellular interactions and the mechanisms of biological processes such as cancer cell dissemination, organ development, and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart, and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven to be non-trivial and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a few inter-class constraints. To overcome this limitation, this thesis proposes polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation, reducing the need for training by modeling prior knowledge during segmentation. A hierarchical graph is generated over the superpixels of the image, node labels are inferred through a novel efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets indicates that the proposed method outperforms the state of the art. The proposed method has also been assessed in predicting possible segmentation errors and shown to outperform trees. This can pave the way to computing uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
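    The central inference step described here is a two-pass message-passing scheme that yields closed-form node posteriors. As a rough illustration only, the sketch below runs the simpler, standard two-pass sum-product on a tiny directed tree with made-up binary-label potentials; the thesis's polytree algorithm, which handles nodes with several parents, is not reproduced here.

import numpy as np

# Illustrative only: standard two-pass (leaves-to-root, root-to-leaves)
# sum-product on a tiny directed tree with binary labels. Structure and
# potentials are made up; the polytree algorithm in the thesis additionally
# handles nodes with several parents, which this sketch does not.

children = {0: [1, 2], 1: [], 2: []}               # root 0 with two leaves
prior = np.array([0.6, 0.4])                       # P(x_root)
trans = np.array([[0.8, 0.2],                      # P(x_child | x_parent)
                  [0.3, 0.7]])
evidence = {0: np.array([0.5, 0.5]),               # per-node likelihoods
            1: np.array([0.9, 0.1]),
            2: np.array([0.2, 0.8])}

up_msgs, posterior = {}, {}

def upward(node):
    """Pass 1: collect evidence from the subtree below `node`."""
    msg = evidence[node].copy()
    for c in children[node]:
        msg *= trans @ upward(c)                   # sum over the child's states
    up_msgs[node] = msg
    return msg

def downward(node, parent_msg=None):
    """Pass 2: fold in top-down information and normalise the posterior."""
    if parent_msg is None:
        belief = prior * up_msgs[node]             # root case
    else:
        belief = (trans.T @ parent_msg) * up_msgs[node]
    posterior[node] = belief / belief.sum()
    for c in children[node]:
        # Remove c's own upward contribution before sending downwards.
        downward(c, belief / (trans @ up_msgs[c]))

upward(0)
downward(0)
print(posterior)                                   # per-node label posteriors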

    Visualization of graphs and trees for software analysis

    A software architecture is an abstraction of a software system and is indispensable for many software engineering tasks. Unfortunately, in many cases, information pertaining to the software architecture is not available, is outdated, or is inappropriate for the task at hand. The RECONSTRUCTOR project focuses on software architecture reconstruction, i.e., obtaining architectural information from an existing system. Our research, which is part of RECONSTRUCTOR, focuses on interactive visualization and tries to answer the following question: how can users be enabled to understand the large amounts of information relevant for program understanding using visual representations? To answer this question, we have iteratively developed a number of techniques for visualizing software systems. Many of these cases involve hierarchically organized data combined with adjacency relations; examples are function calls within a hierarchically organized software system and correspondence relations between two different versions of such a system. Hierarchical Edge Bundles (HEBs) are used to visualize adjacency relations in hierarchically organized data, such as the aforementioned function calls within a software system. HEBs significantly reduce visual clutter by visually bundling relations together. Massive Sequence Views (MSVs) are used in conjunction with HEBs to enable analysis of sequences of relations, such as function-call traces. HEBs are furthermore used to visually compare hierarchically organized data, e.g., two different versions of a software system; they visually emphasize splits, joins, and relocations of subhierarchies and provide for interactive selection of sets of relations. Since HEBs require a hierarchy to perform the bundling, we present Force-Directed Edge Bundles (FDEBs) as an alternative that visually bundles relations together in the absence of a hierarchical component. FDEBs use a self-organizing approach to bundling in which edges are modeled as flexible springs that can attract each other. As a result, visual clutter is reduced and high-level edge patterns become better visible. Finally, in all these methods, a clear depiction of edge direction is important. We have therefore performed a separate controlled user study in which we evaluated ten representations (including the standard arrow) for depicting directed edges.
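    To give a flavour of the self-organizing bundling idea behind FDEBs, the sketch below implements a heavily simplified variant: each edge is subdivided into control points, springs keep consecutive points of one edge together, and corresponding points of different edges are pulled towards their mean position. The compatibility weighting and multi-cycle subdivision schedule of the published method are omitted, so this is an assumption-laden toy rather than the actual algorithm.

import numpy as np

# Illustrative only: a bare-bones force-directed edge bundling step in the
# spirit of FDEB. Springs pull neighbouring control points of one edge
# together; corresponding points on different edges are attracted towards
# their mean (a crude stand-in for pairwise, compatibility-weighted forces).

def bundle(edges, n_pts=16, iters=60, k_spring=0.1, k_attract=0.02):
    # edges: array of shape (E, 2, 2) holding (start, end) 2-D endpoints.
    e = np.asarray(edges, dtype=float)
    t = np.linspace(0.0, 1.0, n_pts)[None, :, None]
    pts = e[:, 0:1, :] * (1 - t) + e[:, 1:2, :] * t    # (E, n_pts, 2)

    for _ in range(iters):
        # Spring force keeps consecutive control points of one edge together.
        spring = np.zeros_like(pts)
        spring[:, 1:-1] = (pts[:, :-2] - pts[:, 1:-1]) + (pts[:, 2:] - pts[:, 1:-1])

        # Attraction pulls corresponding control points of different edges
        # towards their mean position.
        attract = pts.mean(axis=0, keepdims=True) - pts

        # Endpoints stay fixed; only interior control points move.
        move = k_spring * spring + k_attract * attract
        move[:, 0] = move[:, -1] = 0.0
        pts += move
    return pts

# Tiny usage example with three roughly parallel edges.
bundled = bundle([[[0, 0], [10, 0]], [[0, 1], [10, 1]], [[0, 2], [10, 2]]])
print(bundled.shape)   # (3, 16, 2)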

    Model-Based Time Series Management at Scale


    Bayesian Polytrees With Learned Deep Features for Multi-Class Cell Segmentation

    The recognition of different cell compartments, the types of cells, and their interactions is a critical aspect of quantitative cell biology. However, automating this problem has proven to be non-trivial and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. To alleviate this, graphical models are useful due to their ability to make use of prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a few inter-class constraints. To overcome this limitation, we propose polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. A novel recursive mechanism based on two-pass message passing was developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. The algorithm is evaluated on simulated data and on two publicly available fluorescence microscopy datasets, outperforming directed trees and three state-of-the-art convolutional neural networks, namely SegNet, DeepLab, and PSPNet. Polytrees are also shown to outperform directed trees in predicting segmentation errors by highlighting areas of the segmented image that do not comply with the prior knowledge. This paves the way to computing uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement.
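    One simple way to turn closed-form node posteriors into an error-prediction signal is to map each node's posterior to its normalized entropy; the sketch below illustrates that heuristic. The entropy criterion and the array shapes are assumptions for illustration, not necessarily the measure used in the paper.

import numpy as np

# Illustrative only: convert per-node label posteriors into an uncertainty
# map; high-entropy regions are candidates for segmentation errors and for
# subsequent interactive refinement.

def uncertainty_map(posteriors):
    # posteriors: array of shape (H, W, C), summing to 1 over the class axis.
    p = np.clip(posteriors, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=-1)
    return entropy / np.log(p.shape[-1])     # normalised to [0, 1]

# Dummy posteriors over 4 classes on a 64x64 grid.
post = np.random.dirichlet(np.ones(4), size=(64, 64))
print(uncertainty_map(post).max())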

    Current Status and Future Development of Structuring and Modeling Intelligent Appearing Motion

    The two topics covered by this symposium were intelligent appearing motion and Virtual Environments (VEs). Both are broad research areas with enough content to fill large conferences. Their intersection has become important due to conceptual and technological advances that enable the introduction of intelligent appearing motion into Virtual Environments. This union brings new integration challenges and opportunities, some of which were examined at this symposium. This chapter was inspired by the contributions of several of the conference participants, but it is not a complete review of all presentations. It will hopefully serve as a basis for formulating a new approach to the understanding of motion within VEs.

    Automated decision making and problem solving. Volume 2: Conference presentations

    Related topics in artificial intelligence, operations research, and control theory are explored. Existing techniques are assessed and trends of development are identified.

    Personalized face and gesture analysis using hierarchical neural networks

    The video-based computational analyses of human face and gesture signals encompass a myriad of challenging research problems involving computer vision, machine learning, and human-computer interaction. In this thesis, we focus on the following challenges: a) the classification of hand and body gestures along with the temporal localization of their occurrence in a continuous stream, b) the recognition of facial expressivity levels in people with Parkinson's disease using multimodal feature representations, c) the prediction of student learning outcomes in intelligent tutoring systems using affect signals, and d) the personalization of machine learning models, which can adapt to subject- and group-specific nuances in facial and gestural behavior. Specifically, we first conduct a quantitative comparison of two approaches to segmenting and classifying gestures on two benchmark gesture datasets: a method that simultaneously segments and classifies gestures versus a cascaded method that performs the tasks sequentially. Second, we introduce a framework that computationally predicts an accurate score for facial expressivity and validate it on a dataset of interview videos of people with Parkinson's disease. Third, based on a unique dataset of videos of students interacting with MathSpring, an intelligent tutoring system, collected by our collaborative research team, we build models to predict learning outcomes from their facial affect signals. Finally, we propose a novel solution to a relatively unexplored area in automatic face and gesture analysis research: the personalization of models to individuals and groups. We develop hierarchical Bayesian neural networks to overcome the challenges posed by group- or subject-specific variations in face and gesture signals. We successfully validate our formulation on the problems of personalized subject-specific gesture classification, context-specific facial expressivity recognition, and student-specific learning outcome prediction. We demonstrate the flexibility of our hierarchical framework by validating the utility of both fully connected and recurrent neural architectures.
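    As a very loose illustration of the personalization idea only, and not the thesis's hierarchical Bayesian neural network, the sketch below gives each subject its own linear output weights that are regularized towards shared global weights by a Gaussian prior and fitted with a few MAP gradient steps on dummy data. All names, shapes, and hyperparameters are made up.

import numpy as np

# Illustrative only: a toy stand-in for hierarchical personalization. Each
# subject gets its own linear head regularised towards shared global weights;
# this is not the thesis's hierarchical Bayesian neural network formulation.

rng = np.random.default_rng(0)
n_feat, n_subj = 8, 3
w_global = rng.normal(size=n_feat)                 # shared (population) weights
w_subj = {s: w_global.copy() for s in range(n_subj)}

def predict(x, subject):
    return x @ w_subj[subject]                     # personalised linear head

def map_step(x, y, subject, lr=0.01, tau=1.0):
    """One MAP gradient step: squared-error fit plus a pull towards w_global."""
    w = w_subj[subject]
    grad = x.T @ (x @ w - y) + tau * (w - w_global)
    w_subj[subject] = w - lr * grad

# Dummy per-subject data: subject-specific offsets around the shared trend.
for s in range(n_subj):
    x = rng.normal(size=(32, n_feat))
    y = x @ (w_global + 0.3 * rng.normal(size=n_feat))
    for _ in range(200):
        map_step(x, y, s)

x_test = rng.normal(size=n_feat)
print({s: round(float(predict(x_test, s)), 3) for s in range(n_subj)})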