
    What am I allowed to do here?: Online Learning of Context-Specific Norms by Pepper

    Social norms support coordination and cooperation in society. As social robots become increasingly involved in society, they too need to follow its social norms. This paper presents a computational framework with which a robot learns contexts, and the social norms that apply in each context, in an online manner. It adapts a recent state-of-the-art incremental learning approach to the online learning of scenes (contexts) and uses Dempster-Shafer theory to model context-specific norms. After learning the scenes, the robot uses active learning to acquire the associated norms. We test our approach on the Pepper robot by taking it through different scene locations. Our results show that Pepper can learn different scenes and the related norms simply by communicating with a human partner in an online manner. Comment: The final authenticated publication is available online at https://doi.org/10.1007/978-3-030-62056-1_1
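    The abstract does not include code, but as a rough illustration of how Dempster-Shafer theory can aggregate evidence about a context-specific norm, the sketch below implements Dempster's rule of combination in Python. The frame of discernment, the mass values, and the norm labels (keep_quiet, may_speak_loudly) are invented for illustration and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb            # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalise the remaining mass by the non-conflicting proportion
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical frame of discernment for a norm such as speaking loudly in a scene
QUIET, LOUD = "keep_quiet", "may_speak_loudly"
theta = frozenset({QUIET, LOUD})           # full ignorance

# Illustrative evidence from two human responses gathered during active learning
m_human1 = {frozenset({QUIET}): 0.7, theta: 0.3}
m_human2 = {frozenset({QUIET}): 0.6, frozenset({LOUD}): 0.1, theta: 0.3}

print(dempster_combine(m_human1, m_human2))
```

    Keeping some mass on the full frame (theta) lets the robot represent uncertainty explicitly instead of committing to a norm after a single answer.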

    Learning to Reduce Annotation Load

    Modern machine learning methods and their applications in computer vision are known to crave large amounts of training data to reach their full potential. Because training data is mostly obtained through humans who manually label samples, it comes at a significant cost. The problem of reducing the annotation load is therefore of great importance for the success of machine learning methods. We study this problem from two viewpoints, by answering the questions "What to annotate?" and "How to annotate?". The question "What?" addresses the selection of a small portion of the data that would be sufficient to train an accurate model. The question "How?" focuses on minimising the effort of labelling each datapoint.

    The question "What to annotate?" becomes particularly compelling if we can select the data to be annotated in an iterative and adaptive way, a setting known as active learning (AL). The key challenge in AL is to identify the datapoints that are the most informative for the model at a given stage, and we propose several techniques to address it. Firstly, we consider the problem of segmenting natural images and image volumes. We take advantage of image priors, such as the smoothness of objects of interest, and use them in a novel form of geometric uncertainty; with this we design an AL technique for efficient annotation that is tailored to segmentation applications. Next, we observe that no single manually designed strategy outperforms the others in every application, and that the burden of designing new strategies often outweighs the benefits of AL. To overcome this problem we suggest learning an AL strategy from data by formulating the AL problem as a regression task that predicts the reduction in generalisation error achieved by labelling each datapoint. This enables us to learn AL strategies from simulated data and to transfer them to new datasets. Finally, we turn towards non-myopic, data-driven AL strategies. To this end, we formulate the AL problem as a Markov decision process and find the best selection policy using reinforcement learning. We design the decision process such that the policy can be learnt for any ML model and transferred to diverse application domains.

    Effectively addressing the question "How to annotate?" is of no less importance, as large cost savings can be achieved by labelling each datapoint more efficiently, for example with intelligent interfaces that interact with a human annotator. We make two contributions towards answering this question. Firstly, we propose an efficient technique for annotating 3D image volumes for image segmentation. Annotating data in 3D is cumbersome, and an obvious way to facilitate it is to select a subset of the data lying on a 2D plane. To find the optimal plane (i.e. the one containing the most informative datapoints), we design a branch-and-bound algorithm that quickly eliminates hypotheses about the optimal projection. Secondly, we propose an intelligent data annotation method for training object detectors. Instead of always asking the human annotator to draw bounding boxes in images, we automatically detect in which cases we can rely on the current detector and verify its proposal.
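    As a rough illustration of the pool-based active-learning setting this abstract refers to, the sketch below runs a simple margin-based uncertainty-sampling loop on synthetic data. It is a generic stand-in for the hand-crafted selection strategies the thesis builds on, not the geometric uncertainty, learnt regression strategy, or reinforcement-learning policy it proposes; all names, data, and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_pool, batch_size=1):
    """Rank pool points by the margin between the top two class
    probabilities and return the indices of the most uncertain ones."""
    probs = model.predict_proba(X_pool)
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)[:batch_size]

# Toy 2D dataset standing in for a real annotation task
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Seed the labelled set with a few points from each class; the rest is the pool
labelled = list(np.where(y == 1)[0][:5]) + list(np.where(y == 0)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labelled)]

model = LogisticRegression()
for _ in range(20):                            # 20 simulated annotation rounds
    model.fit(X[labelled], y[labelled])
    picked = uncertainty_sampling(model, X[pool])
    for j in sorted(picked, reverse=True):     # remove from the pool back to front
        labelled.append(pool.pop(int(j)))      # "annotate" the selected point

print(f"labelled {len(labelled)} points, {len(pool)} remain in the pool")
```

    The key design choice in any such loop is the scoring rule inside uncertainty_sampling; the thesis's contributions replace this hand-crafted rule with learnt, task-aware strategies.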