113 research outputs found

    Generating Coherent and Informative Descriptions for Groups of Visual Objects and Categories: A Simple Decoding Approach

    Attari N, Schlangen D, Heckmann M, Wersing H, Zarrieß S. Generating Coherent and Informative Descriptions for Groups of Visual Objects and Categories: A Simple Decoding Approach. In: Proceedings of the 15th International Conference on Natural Language Generation. Waterville, Maine, USA and virtual meeting: Association for Computational Linguistics; 2022: 110-120.

    Interpretable locally adaptive nearest neighbors

    Göpfert JP, Wersing H, Hammer B. Interpretable locally adaptive nearest neighbors. Neurocomputing. 2022;470:344-351.

    When training automated systems, it has been shown to be beneficial to adapt the representation of the data by learning a problem-specific metric. Typically, this learned metric is global. We extend this idea and, for the widely used family of k-nearest-neighbors algorithms, develop a method that allows learning locally adaptive metrics. These local metrics not only improve performance but are also naturally interpretable. To demonstrate important aspects of how our approach works, we conduct a number of experiments on synthetic data sets, and we show its usefulness on real-world benchmark data sets.
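
    The core idea of a locally adaptive metric can be illustrated with a minimal sketch. This is not the authors' algorithm: all names are hypothetical, and the per-sample feature weights are assumed to be given rather than learned, which is where the paper's actual contribution lies.

```python
import numpy as np

class LocalMetricKNN:
    """k-NN in which each stored training sample carries its own
    diagonal metric (per-feature weights), so distances adapt to the
    local neighborhood instead of using one global metric."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y, weights=None):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        # One weight vector per training sample; default = plain Euclidean.
        self.W = (np.ones_like(self.X) if weights is None
                  else np.asarray(weights, dtype=float))
        return self

    def predict(self, X):
        out = []
        for x in np.asarray(X, dtype=float):
            # Weighted squared distance, using each sample's own metric.
            d = np.sum(self.W * (self.X - x) ** 2, axis=1)
            nn = np.argsort(d)[: self.k]
            labels, counts = np.unique(self.y[nn], return_counts=True)
            out.append(labels[np.argmax(counts)])
        return np.array(out)
```

    Because the weights are per-sample, inspecting them directly shows which features matter in which region of the input space, which is the interpretability angle of the abstract.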

    Intuitiveness in Active Teaching

    Göpfert JP, Kuhl U, Hindemith L, Wersing H, Hammer B. Intuitiveness in Active Teaching. IEEE Transactions on Human-Machine Systems. 2021:1-10.

    Addressing Data Scarcity in Multimodal User State Recognition by Combining Semi-Supervised and Supervised Learning

    Voß H, Wersing H, Kopp S. Addressing Data Scarcity in Multimodal User State Recognition by Combining Semi-Supervised and Supervised Learning. In: Hammal Z, ed. Companion Publication of the 2021 International Conference on Multimodal Interaction. New York, NY: Association for Computing Machinery; 2021: 317-323.

    Detecting mental states of human users is crucial for the development of cooperative and intelligent robots, as it enables the robot to understand the user's intentions and desires. Despite their importance, it is difficult to obtain a large amount of high-quality data for training automatic recognition algorithms, as the time and effort required to collect and label such data is prohibitively high. In this paper we present a multimodal machine learning approach for detecting dis-/agreement and confusion states in a human-robot interaction environment, using just a small amount of manually annotated data. We collect a data set by conducting a human-robot interaction study and develop a novel preprocessing pipeline for our machine learning approach. By combining semi-supervised and supervised architectures, we achieve an average F1-score of 81.1% for dis-/agreement detection with a small amount of labeled data and a large unlabeled data set, while simultaneously increasing the robustness of the model compared to the supervised approach.
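
    One standard way to combine a small labeled set with a large unlabeled one, as the abstract describes, is self-training (pseudo-labeling). The sketch below is a generic illustration of that family of methods, not the paper's architecture; the nearest-centroid base classifier and all function names are my assumptions.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Class centroids of the labeled data."""
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids, return_conf=False):
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    idx = np.argmin(d, axis=1)
    if return_conf:
        s = np.sort(d, axis=1)
        conf = s[:, 1] - s[:, 0]  # distance margin as a crude confidence
        return classes[idx], conf
    return classes[idx]

def self_training(X_lab, y_lab, X_unlab, rounds=3, top_frac=0.2):
    """Repeatedly train on labeled data, pseudo-label the most
    confident unlabeled samples, and absorb them into the training set."""
    X_lab, y_lab = np.asarray(X_lab, float), np.asarray(y_lab)
    X_unlab = np.asarray(X_unlab, float)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        classes, cent = nearest_centroid_fit(X_lab, y_lab)
        pred, conf = nearest_centroid_predict(X_unlab, classes, cent, True)
        n = max(1, int(top_frac * len(X_unlab)))
        take = np.argsort(-conf)[:n]          # most confident first
        X_lab = np.vstack([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, pred[take]])
        X_unlab = np.delete(X_unlab, take, axis=0)
    return nearest_centroid_fit(X_lab, y_lab)
```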

    Feeling uncertain: Effects of a vibrotactile belt that communicates vehicle sensor uncertainty

    With the rise of partially automated cars, drivers are more and more required to judge the degree of responsibility that can be delegated to vehicle assistant systems. This can be supported by interfaces that intuitively convey real-time reliabilities of system functions such as environment sensing. We designed a vibrotactile interface that communicates spatiotemporal information about surrounding vehicles and encodes a representation of spatial uncertainty in a novel way. We evaluated this interface in a driving simulator experiment with high and low levels of human and machine confidence, respectively caused by limited human visibility range and simulated degraded vehicle sensor precision. We were interested in whether drivers (i) could perceive and understand the vibrotactile encoding of spatial uncertainty, (ii) would subjectively benefit from the encoded information, (iii) would be disturbed in cases of information redundancy, and (iv) would gain objective safety benefits from the encoded information. To measure subjective understanding and benefit, a custom questionnaire, Van der Laan acceptance ratings and NASA TLX scores were used. To measure the objective benefit, we computed the minimum time-to-contact as a measure of safety and gaze distributions as an indicator for attention guidance. Results indicate that participants were able to understand the encoded uncertainty and spatiotemporal information and purposefully utilized it when needed. The tactile interface provided meaningful support despite sensory restrictions. By encoding spatial uncertainties, it successfully extended the operating range of the assistance system.
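
    The minimum time-to-contact used above as a safety measure is, in its simplest form, the gap to another vehicle divided by the closing speed, minimized over the observed trajectory. A hedged sketch (function names are hypothetical; the study's exact computation is not specified in the abstract):

```python
def time_to_contact(gap, closing_speed):
    """TTC = gap / closing speed; infinite when the gap is not closing."""
    return gap / closing_speed if closing_speed > 0 else float("inf")

def minimum_ttc(gaps, closing_speeds):
    """Minimum TTC over a trajectory of (gap, closing speed) samples;
    smaller values indicate less safe encounters."""
    return min(time_to_contact(g, v) for g, v in zip(gaps, closing_speeds))
```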

    Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models

    Limberg C, Wersing H, Ritter H. Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models. Machine Learning and Knowledge Extraction. 2020;2(3):327-346.

    For incremental machine-learning applications it is often important to robustly estimate the system accuracy during training, especially if humans perform the supervised teaching. Cross-validation and interleaved test/train error are here the standard supervised approaches. We propose a novel semi-supervised accuracy estimation approach that clearly outperforms these two methods. We introduce the Configram Estimation (CGEM) approach to predict the accuracy of any classifier that delivers confidences. By calculating classification confidences for unseen samples, it is possible to train an offline regression model, capable of predicting the classifier's accuracy on novel data in a semi-supervised fashion. We evaluate our method with several diverse classifiers and on analytical and real-world benchmark data sets for both incremental and active learning. The results show that our novel method improves accuracy estimation over standard methods and requires less supervised training data after deployment of the model. We demonstrate the application of our approach to a challenging robot object recognition task, where the human teacher can use our method to judge sufficient training.
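
    The general idea of regressing accuracy from confidence statistics can be sketched as follows. This is an illustration under my own assumptions (histogram features, a plain least-squares model), not the CGEM design from the paper; all names are hypothetical.

```python
import numpy as np

def configram(confidences, bins=5):
    """Normalized histogram of classifier confidences on a batch --
    a feature vector describing how confident the classifier is."""
    h, _ = np.histogram(confidences, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def fit_accuracy_estimator(conf_batches, accuracies, bins=5):
    """Least-squares regression from configrams to measured accuracy,
    trained on batches whose true accuracy is known."""
    F = np.array([configram(c, bins) for c in conf_batches])
    F = np.hstack([F, np.ones((len(F), 1))])  # bias term
    w, *_ = np.linalg.lstsq(F, np.asarray(accuracies, float), rcond=None)
    return w

def estimate_accuracy(w, confidences, bins=5):
    """Predict accuracy on a new, unlabeled batch from its configram."""
    f = np.append(configram(confidences, bins), 1.0)
    return float(f @ w)
```

    After deployment, only unlabeled confidences are needed to produce an accuracy estimate, which matches the semi-supervised framing of the abstract.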

    Accuracy Estimation for an Incrementally Learning Cooperative Inventory Assistant Robot

    Limberg C, Wersing H, Ritter H. Accuracy Estimation for an Incrementally Learning Cooperative Inventory Assistant Robot. In: Yang H, Pasupa K, Leung AC-S, Kwok JT, Chan JH, King I, eds. Neural Information Processing. 27th International Conference, ICONIP 2020, Bangkok, Thailand, November 23–27, 2020, Proceedings, Part II. Lecture Notes in Computer Science. Vol 12533. Cham: Springer International Publishing; 2020: 738-749.

    Interactive teaching from a human can be applied to extend the knowledge of a service robot according to novel task demands. This is particularly attractive if it is either inefficient or not feasible to pre-train all relevant object knowledge beforehand. Like in a normal human teacher and student situation, it is then vital to estimate the learning progress of the robot in order to judge its competence in carrying out the desired task. While observing robot task success and failure is a straightforward option, there are more efficient alternatives. In this contribution we investigate the application of a recent semi-supervised confidence-based approach to accuracy estimation towards incremental object learning for an inventory assistant robot. We evaluate the approach and demonstrate its applicability in a slightly simplified, but realistic setting. We show that the configram estimation model (CGEM) outperforms standard approaches for accuracy estimation like cross-validation and interleaved test/train error for active learning scenarios, thus minimizing human training effort.

    Prototype-Based Online Learning on Homogeneously Labeled Streaming Data

    Limberg C, Göpfert JP, Wersing H, Ritter H. Prototype-Based Online Learning on Homogeneously Labeled Streaming Data. In: Farkaš I, Masulli P, Wermter S, eds. Artificial Neural Networks and Machine Learning – ICANN 2020. 29th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 15–18, 2020, Proceedings, Part II. Lecture Notes in Computer Science. Vol 12397. Cham: Springer International Publishing; 2020: 204-213.

    Algorithms in machine learning commonly require training data to be independent and identically distributed. This assumption is not always valid, e.g. in online learning, when data becomes available in homogeneously labeled blocks, which can severely impede instance-based learning algorithms in particular. In this work, we analyze and visualize this issue, and we propose and evaluate strategies for Learning Vector Quantization to compensate for homogeneously labeled blocks. We achieve considerably improved results in this difficult setting.
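
    For context, the update rule that homogeneously labeled blocks destabilize is plain LVQ1, sketched below. The paper's compensation strategies are not reproduced here; this shows only the baseline learner, with hypothetical names. Feeding it a long run of same-label samples drags the winning prototype along with the block, which is the failure mode the abstract describes.

```python
import numpy as np

class LVQ1:
    """Basic LVQ1: move the winning prototype toward a sample of the
    same label, and away from a sample of a different label."""

    def __init__(self, prototypes, labels, lr=0.1):
        self.P = np.asarray(prototypes, dtype=float)
        self.L = np.asarray(labels)
        self.lr = lr

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        d = np.linalg.norm(self.P - x, axis=1)
        w = int(np.argmin(d))                      # winning prototype
        sign = 1.0 if self.L[w] == y else -1.0     # attract or repel
        self.P[w] += sign * self.lr * (x - self.P[w])
        return self

    def predict(self, x):
        d = np.linalg.norm(self.P - np.asarray(x, dtype=float), axis=1)
        return self.L[int(np.argmin(d))]
```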

    Adversarial attacks hidden in plain sight

    Göpfert JP, Wersing H, Hammer B. Adversarial attacks hidden in plain sight. 2019.

    Convolutional neural networks have been used to achieve a string of successes during recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. We underline the severity of the issue by presenting a technique that allows hiding such adversarial attacks in regions of high complexity, such that they are imperceptible even to an astute observer.
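
    The masking idea, concentrating a perturbation in textured image regions where it is hard to see, can be sketched as follows. This is a generic illustration, not the paper's technique: the complexity measure (local standard deviation) and all names are my assumptions.

```python
import numpy as np

def local_complexity(img, radius=1):
    """Local standard deviation as a crude per-pixel complexity measure."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            out[i, j] = patch.std()
    return out

def hide_perturbation(img, perturbation, eps=0.1):
    """Scale a sign-based perturbation by normalized local complexity,
    so flat regions stay untouched and textured regions absorb the attack."""
    mask = local_complexity(img)
    mx = mask.max()
    if mx > 0:
        mask = mask / mx
    return np.clip(img + eps * mask * np.sign(perturbation), 0.0, 1.0)
```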
    • …