    Semantic modeling of cell damage prediction: a machine learning approach at human-level performance in dermatology

    Abstract: Machine learning is transforming the field of histopathology. Especially in classification tasks, there have already been many successful applications of deep learning. Yet, in tasks that rely on regression, and in many niche applications, the domain lacks cohesive procedures adapted to the learning processes of neural networks. In this work, we investigate cell damage in whole-slide images of the epidermis. A common way for pathologists to annotate a score characterizing the degree of damage in these samples is the ratio between healthy and unhealthy nuclei. The annotation procedure for these scores, however, is expensive and prone to noise among pathologists. We propose a new measure of damage: the total area of damage relative to the total area of the epidermis. We present results of regression and segmentation models predicting both scores on a curated and public dataset, which we acquired in collaborative efforts with medical professionals. Our study results in a comprehensive evaluation of the proposed damage metrics in the epidermis, with recommendations emphasizing practical relevance for real-world applications.
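    The proposed area-based score can be illustrated with a minimal sketch: given binary segmentation masks for damaged tissue and for the epidermis, the score is the damaged area divided by the epidermis area. The function name and toy masks below are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def damage_score(damage_mask: np.ndarray, epidermis_mask: np.ndarray) -> float:
        """Area-based damage score: damaged pixels relative to epidermis pixels."""
        epidermis_area = epidermis_mask.sum()
        if epidermis_area == 0:
            return 0.0
        # Count damage only inside the epidermis region.
        return float((damage_mask & epidermis_mask).sum() / epidermis_area)

    # Toy 4x4 masks: epidermis covers the left half, damage covers the top row.
    epidermis = np.zeros((4, 4), dtype=bool)
    epidermis[:, :2] = True
    damage = np.zeros((4, 4), dtype=bool)
    damage[0, :] = True

    print(damage_score(damage, epidermis))  # → 0.25
    ```

    Unlike a nuclei-count ratio, this score only requires segmentation masks, which is why the abstract pairs it with segmentation models.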

    Semi-supervised Learning From Demonstration through Program Synthesis: An Inspection Robot Case Study

    Abstract: Semi-supervised learning improves the performance of supervised machine learning by leveraging methods from unsupervised learning to extract information not explicitly available in the labels. Through the design of a system that enables a robot to learn inspection strategies from a human operator, we present a hybrid semi-supervised system capable of learning interpretable and verifiable models from demonstrations. The system induces a controller program by learning from immersive demonstrations using sequential importance sampling. These visual servo controllers are parametrised by proportional gains and are visually verifiable through observation of the position of the robot in the environment. Clustering and effective-particle-size filtering allow the system to discover goals in the state space. These goals are used to label the original demonstration for end-to-end learning of behavioural models. The behavioural models are used for autonomous model predictive control and scrutinised for explanations. We implement causal sensitivity analysis to identify salient objects and generate counterfactual conditional explanations. These features enable the interpretation of decision making and the post hoc discovery of the causes of a failure. The proposed system expands on previous approaches to program synthesis by incorporating repellers in the attribution prior of the sampling process. We successfully learn the hybrid system from an inspection scenario in which an unmanned ground vehicle has to inspect, in a specific order, different areas of the environment. The system induces an interpretable computer program of the demonstration that can be synthesised to produce novel inspection behaviours. Importantly, the robot successfully runs the synthesised program on an unseen configuration of the environment while presenting explanations of its autonomous behaviour.
    Comment: In Proceedings AREA 2020, arXiv:2007.1126
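    The effective-particle-size filtering mentioned above typically refers to the effective sample size (ESS) criterion from sequential importance sampling: when the normalised importance weights degenerate onto a few particles, the ESS drops and resampling is triggered. The sketch below shows the standard ESS formula; the variable names and threshold are illustrative assumptions, not from the paper.

    ```python
    import numpy as np

    def effective_sample_size(weights: np.ndarray) -> float:
        """ESS = 1 / sum(w_i^2) over normalised importance weights."""
        w = weights / weights.sum()
        return float(1.0 / np.sum(w ** 2))

    # Uniform weights keep the full particle population effective.
    uniform = np.ones(100)
    print(effective_sample_size(uniform))  # → 100.0

    # One dominant weight collapses the effective population,
    # signalling that the sampler should resample.
    skewed = np.array([100.0] + [1.0] * 99)
    print(effective_sample_size(skewed))  # much lower than 100
    ```

    Filtering on this quantity keeps the particle set diverse enough for the clustering step to find distinct goals in the state space.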