12 research outputs found

    Automatic Segmentation of Cells of Different Types in Fluorescence Microscopy Images

    Recognition of different cell compartments, types of cells, and their interactions is a critical aspect of quantitative cell biology. It provides valuable insight into cellular and subcellular interactions and into the mechanisms of biological processes such as cancer cell dissemination, organ development, and wound healing. Quantitative analysis of cell images is also the mainstay of numerous clinical diagnostic and grading procedures, for example in cancer, immunological, infectious, heart and lung disease. Automating the quantification of cellular biological samples requires segmenting different cellular and sub-cellular structures in microscopy images. However, automating this problem has proven non-trivial, as it requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. This thesis focuses on the development and application of probabilistic graphical models to multi-class cell segmentation. Graphical models can improve segmentation accuracy through their ability to exploit prior knowledge and model inter-class dependencies. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependencies as a prior for improved image segmentation. However, trees can capture only a limited set of inter-class constraints. To overcome this limitation, this thesis proposes polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. Polytrees can effectively impose prior knowledge on the inclusion of different classes by capturing both same-level and across-level dependencies. A novel recursive mechanism based on two-pass message passing is developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. Furthermore, since an accurate and sufficiently large ground truth is not always available for training segmentation algorithms, a weakly supervised framework is developed that employs polytrees for multi-class segmentation, reducing the need for training data by modeling prior knowledge during segmentation. A hierarchical graph is generated over the superpixels in the image, node labels are inferred through a novel, efficient message-passing algorithm, and the model parameters are optimized with Expectation Maximization (EM). Evaluation on the segmentation of simulated data and multiple publicly available fluorescence microscopy datasets indicates that the proposed method outperforms the state of the art. The proposed method has also been assessed on predicting possible segmentation errors and shown to outperform trees. This can pave the way to calculating uncertainty measures on the resulting segmentation and guiding subsequent segmentation refinement, which can be useful in the development of an interactive segmentation framework.
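    As an illustration of the message-passing idea described above, the sketch below runs a Pearl-style two-pass belief propagation (an upward likelihood pass followed by a downward pass) on a small directed tree, the simplest special case of a polytree. The graph, priors, CPTs, and evidence are invented placeholders, not values from the thesis.

```python
# Minimal two-pass belief propagation on a directed tree (illustrative only).
import numpy as np

# Tree: node -> parent (root has parent None). Each node has 2 states.
parents = {"root": None, "a": "root", "b": "root", "c": "a"}
children = {n: [c for c, p in parents.items() if p == n] for n in parents}

prior = np.array([0.6, 0.4])                      # P(root), illustrative
cpt = {n: np.array([[0.8, 0.2], [0.3, 0.7]])      # P(child | parent), rows = parent state
       for n in parents if parents[n] is not None}
evidence = {n: np.ones(2) for n in parents}       # likelihood vectors; all-ones = unobserved
evidence["c"] = np.array([0.9, 0.1])              # soft observation at leaf "c"

# Upward (lambda) pass: leaves -> root.
lam = {}
def lambda_msg(n):
    """Likelihood of the evidence in n's subtree, as a function of n's state."""
    if n in lam:
        return lam[n]
    msg = evidence[n].copy()
    for ch in children[n]:
        msg *= cpt[ch] @ lambda_msg(ch)           # sum over the child's states
    lam[n] = msg
    return msg

# Downward (pi) pass: root -> leaves, combined with lambda to give posteriors.
posterior = {}
def downward(n, pi_msg):
    belief = pi_msg * lambda_msg(n)
    posterior[n] = belief / belief.sum()
    for ch in children[n]:
        # message to a child excludes that child's own lambda contribution
        others = pi_msg * evidence[n]
        for sib in children[n]:
            if sib != ch:
                others *= cpt[sib] @ lambda_msg(sib)
        downward(ch, others @ cpt[ch])            # sum over n's states

downward("root", prior)
print({n: p.round(3) for n, p in posterior.items()})
```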

    Multi-label classification by polytree-augmented classifier chains with label-dependent features

    Multi-label classification faces several critical challenges, including modeling label correlations, mitigating label imbalance, removing irrelevant and redundant features, and reducing complexity for large-scale problems. To address these issues, this paper proposes a novel method, polytree-augmented classifier chains with label-dependent features, that models label correlations through flexible polytree structures built on low-dimensional label-dependent feature spaces learned by a two-stage feature selection approach. First, a feature weighting approach is applied to efficiently remove irrelevant features for each label and mitigate the effect of label imbalance. Second, a polytree structure is built in the label space using estimated conditional mutual information. Third, an appropriate label-dependent feature subset is found by taking the label correlations in the polytree into account. Extensive empirical studies on six synthetic datasets and 12 real-world datasets demonstrate the superior performance of the proposed method. In addition, by incorporating the proposed two-stage feature selection approach, multi-label classifiers with label-dependent features achieve an average 9.4% improvement in Exact-Match compared with the original classifiers.
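    A simplified sketch of the structure-learning step mentioned above: label pairs are scored by mutual information and the strongest dependencies are kept as a spanning-tree skeleton over the labels. The paper builds a polytree from conditional mutual information; the Chow-Liu-style maximum spanning tree and the toy label matrix below are illustrative approximations, not the authors' implementation.

```python
# Score label pairs by mutual information and keep a maximum-weight tree skeleton.
import numpy as np
import networkx as nx
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
y0 = rng.integers(0, 2, 500)
Y = np.column_stack([
    y0,
    np.where(rng.random(500) < 0.8, y0, 1 - y0),   # label correlated with label 0
    rng.integers(0, 2, 500),
    rng.integers(0, 2, 500),
])                                                  # toy binary label matrix

n_labels = Y.shape[1]
G = nx.Graph()
for i in range(n_labels):
    for j in range(i + 1, n_labels):
        G.add_edge(i, j, weight=mutual_info_score(Y[:, i], Y[:, j]))

# Keep only the strongest dependencies: a maximum-weight spanning tree over the labels.
skeleton = nx.maximum_spanning_tree(G)
print(sorted(skeleton.edges(data="weight")))
```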

    On risk-based decision-making for structural health monitoring

    Structural health monitoring (SHM) technologies seek to detect, localise, and characterise damage present within structures and infrastructure. Arguably, the foremost incentive for developing and implementing SHM systems is to improve the quality of operation and maintenance (O&M) strategies for structures, such that safety can be enhanced, or greater economic benefits can be realised. Given this motivation, SHM systems can be considered primarily as decision-support tools. Although much research has been conducted into damage identification and characterisation approaches, relatively little has explicitly considered the decision-making applications of SHM systems. In light of this fact, the current thesis considers decision-making for SHM with respect to risk. Risk, defined as a product of probability and cost, can be interpreted as an expected utility. The keystone of the current thesis is a general framework for conducting risk-based SHM, generated by combining aspects of probabilistic risk assessment (PRA) with the existing statistical pattern recognition paradigm for SHM. The framework, founded on probabilistic graphical models (PGMs), utilises Bayesian network representations of fault trees to facilitate the flow of information from observations of discriminative features to failure states of structures of interest. Using estimates of failure probabilities in conjunction with utility functions that capture the severity of consequences enables risk assessments; these risks can be minimised with respect to candidate maintenance actions to determine optimal strategies. Key elements of the decision framework are examined; in particular, a physics-based methodology is presented for initialising a structural degradation model that defines health-state transition probabilities. The risk-based framework allows aspects of SHM systems to be developed with explicit consideration of their decision-support applications. In relation to this aim, the current thesis proposes a novel approach to learning statistical classification models within an online SHM system. The approach adopts an active learning framework in which descriptive labels, corresponding to salient health states of a structure, are obtained via structural inspections. To account for the decision processes associated with SHM, structural inspections are mandated according to the expected value of information for data labels. The resulting risk-based active learning algorithm is shown to yield cost-effective improvements in the performance of decision-making agents, in addition to reducing the number of manual inspections made over the course of a monitoring campaign. Characteristics of the risk-based active learning algorithm are further investigated, with particular focus on the effects of sampling bias. Sampling bias is known to degrade decision-making performance over time, so engineers have a vested interest in mitigating its negative effects. On this theme, two approaches are considered for improving risk-based active learning: semi-supervised learning and discriminative classification models. Semi-supervised learning yielded mixed results, with performance depending strongly on whether the assumed base distributions were representative of the underlying data. On the other hand, discriminative classifiers performed strongly across the board. It is shown that by mitigating the negative effects of sampling bias via classifier and algorithm design, decision-support systems can be enhanced, resulting in more cost-effective O&M strategies. Finally, the future of risk-based decision-making is considered. Particular attention is given to population-based structural health monitoring (PBSHM) and the management of fleets of assets. The hierarchical representation of structures used to develop the risk-based SHM framework is extended to populations of structures. Initial research into PBSHM shows promising results with respect to the transfer of information between the individual structures comprising a population. The significance of these results in the context of decision-making is discussed. To summarise, by framing SHM systems as decision-support tools, risk-informed O&M strategies can be developed for structures and infrastructure such that safety is improved and costs are reduced.
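    A minimal sketch of the risk-based decision step described above, in which risk is treated as an expected cost (failure probability multiplied by consequence) and the maintenance action minimising total expected cost is selected. The health states, probabilities, costs, and candidate actions are hypothetical placeholders rather than values from the thesis.

```python
# Risk as expected cost, minimised over candidate maintenance actions (illustrative).
import numpy as np

# Posterior over health states given monitoring data (e.g. from a Bayesian network).
p_health = np.array([0.70, 0.25, 0.05])            # [undamaged, minor damage, severe damage]

# P(failure | health state) and the cost of failure.
p_fail_given_health = np.array([0.001, 0.05, 0.40])
cost_failure = 1_000_000.0

# Candidate actions: their direct cost and how much of the failure risk they remove.
actions = {
    "do_nothing": {"cost": 0.0,      "risk_reduction": 0.0},
    "inspect":    {"cost": 5_000.0,  "risk_reduction": 0.3},
    "repair":     {"cost": 50_000.0, "risk_reduction": 0.95},
}

p_fail = float(p_health @ p_fail_given_health)     # marginal failure probability

expected_costs = {
    name: a["cost"] + (1.0 - a["risk_reduction"]) * p_fail * cost_failure
    for name, a in actions.items()
}
best = min(expected_costs, key=expected_costs.get)
print(expected_costs, "->", best)
```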

    Generalized belief change with imprecise probabilities and graphical models

    We provide a theoretical investigation of probabilistic belief revision in complex frameworks, under extended conditions of uncertainty, inconsistency and imprecision. We motivate our kinematical approach by specializing our discussion to probabilistic reasoning with graphical models, whose modular representation allows for efficient inference. Most results in this direction derive from the relevant work of Chan and Darwiche (2005), which first proved the inter-reducibility of virtual and probabilistic evidence. These forms of information, deeply distinct in their meaning, are extended to the conditional and imprecise frameworks, allowing further generalizations, e.g. to experts' qualitative assessments. Belief aggregation and iterated revision of a rational agent's beliefs are also explored.
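    To make the distinction concrete, the sketch below contrasts the two forms of uncertain evidence discussed by Chan and Darwiche on a two-variable example: virtual evidence applied as a likelihood ratio, and probabilistic evidence imposed via Jeffrey's rule. The network and the numbers are illustrative only, not drawn from the work above.

```python
# Virtual evidence vs probabilistic (Jeffrey) evidence on a two-variable model.
import numpy as np

p_a = np.array([0.3, 0.7])                     # P(A)
p_b_given_a = np.array([[0.9, 0.1],            # P(B | A = 0)
                        [0.2, 0.8]])           # P(B | A = 1)
joint = p_a[:, None] * p_b_given_a             # P(A, B)

# Virtual evidence: a likelihood ratio L(B) reported by an unreliable observer.
likelihood = np.array([2.0, 1.0])              # "B = 0 is twice as likely as B = 1"
post_virtual = joint * likelihood[None, :]
post_virtual /= post_virtual.sum()

# Probabilistic evidence: the marginal of B is revised to q(B) by Jeffrey's rule.
q_b = np.array([0.6, 0.4])
p_a_given_b = joint / joint.sum(axis=0, keepdims=True)    # P(A | B)
post_jeffrey = p_a_given_b * q_b[None, :]                  # P'(A, B) = P(A | B) q(B)

print("P'(A) under virtual evidence:", post_virtual.sum(axis=1).round(3))
print("P'(A) under Jeffrey's rule:  ", post_jeffrey.sum(axis=1).round(3))
```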

    Maritime Augmented Reality with A Priori Knowledge from Sea Charts

    The main objective of this thesis is to provide a concept for overlaying maritime sea chart information onto the camera view of the user. The benefit is simpler navigation, owing to the 3D information offered and its overlay onto the real 3D environment. In the maritime context, special conditions apply. The sensor technologies have to be reliable in the environment of a ship's ferrous construction. The augmentation of objects has to be very precise, because observable objects on the sea surface can be very far away. Furthermore, the approach has to work reliably across a wide range of lighting conditions. For a practical solution, the system has to be mobile, lightweight, and capable of real-time performance. To achieve this goal, the requirements are set out, and the possible measurement units and the database structure are presented. First, the requirements are analyzed and a suitable system is designed. By combining suitable sensor techniques, the local position and orientation of the user can be estimated. To verify the concept, several prototypes with exchangeable units have been evaluated. This first concept is based on a marker-based approach, which has some drawbacks. To overcome these drawbacks, the second aspect is the improvement of the system and the analysis of markerless approaches; one possible strategy is presented. The approach uses Bayesian networks to vote for individual objects in the environment. This procedure shows that the a priori information from the underlying sea charts provides the greatest benefit. The analysis of the markerless approach shows that the sea chart structure has to be adapted to the new requirements of interactive 3D augmentation scenes. After the analysis of the chart data concept, an approach is presented for optimizing the charts by building an object-to-object topology within the chart data and linking it to the Bayesian object detection approach. Finally, several evaluations demonstrate the performance of the implemented evaluation application.
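    As a rough illustration (not taken from the thesis) of how a priori chart knowledge can enter the Bayesian voting step described above, the sketch below combines a chart-derived prior over candidate objects with an image-feature likelihood via Bayes' rule; all object classes and probabilities are hypothetical.

```python
# Bayesian "vote" for an observed object, combining chart priors and image evidence.
import numpy as np

candidates = ["buoy", "beacon", "lighthouse"]
prior_from_chart = np.array([0.6, 0.3, 0.1])       # objects the chart expects along the viewing ray
p_feature_given_obj = np.array([0.7, 0.4, 0.1])    # P(observed image feature | object)

posterior = prior_from_chart * p_feature_given_obj
posterior /= posterior.sum()
print(dict(zip(candidates, posterior.round(3))))
```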

    Simple low cost causal discovery using mutual information and domain knowledge

    This thesis examines causal discovery within datasets, in particular observational datasets where normal experimental manipulation is not possible. A number of machine learning techniques are examined in relation to their use of prior knowledge, the causal knowledge they produce, and the insights they can provide regarding the situation under study. Current causal learning algorithms are discussed in terms of their strengths and limitations. The main contribution of the thesis is a new causal learner, LUMIN, which operates with polynomial time complexity in both the number of variables and the number of records examined. It makes no prior assumptions about the form of the relationships and is capable of making extensive use of available domain information. This learner is compared to a number of current learning algorithms and is shown to be competitive with them.
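    A minimal illustration, not the LUMIN algorithm itself, of the ingredients named above: pairwise mutual information is used to screen candidate links, and domain knowledge is applied as constraints that forbid certain edge directions. The variables, data-generating process, and threshold are invented for the example.

```python
# Mutual-information screening of candidate causal links, constrained by domain knowledge.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
smoking = rng.integers(0, 2, 1000)
tar = np.where(rng.random(1000) < 0.9, smoking, 1 - smoking)          # mostly follows smoking
cancer = np.where(rng.random(1000) < 0.8, tar, rng.integers(0, 2, 1000))
data = {"smoking": smoking, "tar": tar, "cancer": cancer}

# Domain knowledge: edge directions that are forbidden a priori.
forbidden = {("tar", "smoking"), ("cancer", "smoking"), ("cancer", "tar")}

threshold = 0.05   # nats; mutual_info_score returns natural-log mutual information
links = []
for a in data:
    for b in data:
        if a == b or (a, b) in forbidden:
            continue
        if mutual_info_score(data[a], data[b]) > threshold:
            links.append((a, b))
print(links)
```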

    Supply chain integration model: practices and customer values

    In today's business environment, companies operate in supply chains in order to increase partnership efficiency and truly meet customers' demands. Integration of supply chains helps minimize different types of waste and satisfy the needs of the end customer. The first step toward supply chain integration is to understand the customer values and to reconfigure the supply chain to support those values. The current research addresses supply chain integration by quantifying the relations between supply chain practices and customer values. It employs Bayesian networks and the analytic network process as tools to quantify comparative relations among entities. The proposed approach starts by identifying trade-offs among customer values using a Bayesian network. In parallel, supply chain practices are comparatively analyzed through interviews with experts and quantified using the analytic network process. Thereafter, these two parallel phases are joined to form a network of customer values and supply chain practices. The network can quantitatively identify relations among nodes; in addition, it can be used to plan scenarios and run sensitivity analyses. This model is expected to be used by supply chain decision makers as a quantitative measure for monitoring the influence of practices on the preferences of the end customer. A survey and two case studies are discussed which go through the aforementioned phases. The survey identifies and analyzes six customer values, namely quality, cost, customization, time, know-how, and respect for the environment. It provides input for the two case studies, which develop the supply chain integration model for the fashion and food industries. Supply chain practices are categorized into two groups of manufacturing and logistics practices. The two case studies include five manufacturing practices (cross-functional operations, decreasing work in process, implementing standards, mixed production planning, and using recyclable materials) as well as four logistics practices (visibility of upstream/downstream inventories, information sharing with the customer, implementing logistics standards, and just-in-time delivery). Fundação para a Ciência e Tecnologia (MIT Project: MIT-Pt/EDAM-IASC/0022/2008).

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications
