    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and motivation toward fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand, and anticipate human behavior becomes increasingly important. Specifically, predicting the future positions of dynamic agents, and planning with such predictions in mind, are key tasks for self-driving vehicles, service robots, and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze, and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and the level of contextual information used. We provide an overview of existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research.
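
    As a concrete anchor for the taxonomy, a constant-velocity extrapolation is the simplest physics-based predictor that surveys in this area commonly use as a baseline, and average displacement error is a standard performance metric. The following is a minimal sketch of both, not the survey's own code; the sampling interval and all names are illustrative assumptions.

        import numpy as np

        def constant_velocity_predict(track, horizon, dt=0.4):
            # Extrapolate the last observed velocity over `horizon` future steps.
            velocity = (track[-1] - track[-2]) / dt      # last-step finite difference
            steps = np.arange(1, horizon + 1)[:, None]   # shape (horizon, 1)
            return track[-1] + steps * velocity * dt     # shape (horizon, 2)

        def average_displacement_error(pred, truth):
            # Mean Euclidean distance between predicted and ground-truth positions.
            return float(np.linalg.norm(pred - truth, axis=1).mean())

        # A pedestrian walking along the x-axis at 1.25 m/s, observed at 2.5 Hz.
        observed = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
        print(constant_velocity_predict(observed, horizon=3))
        # -> [[1.5 0. ] [2.  0. ] [2.5 0. ]]

    Learned predictors in the surveyed taxonomy are typically evaluated against exactly this kind of baseline using the same displacement metrics.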

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment on the capture and interpretation of multimodal signals from human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion, and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention, and emotion, the multimodal approach reaches up to 93% accuracy when determining a player's chess expertise, while the unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
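
    The fusion step can be pictured as feature-level concatenation across modalities followed by a single classifier. The sketch below is a hypothetical illustration on synthetic data; the feature names, classifier choice, and evaluation setup are assumptions of this example, not the authors' pipeline, and its printed scores will not reproduce the 93%/86% figures.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 120                                # one feature row per observation window
        posture = rng.normal(size=(n, 4))      # e.g. torso lean, head-screen distance
        gaze    = rng.normal(size=(n, 6))      # e.g. fixation counts per board region
        emotion = rng.normal(size=(n, 3))      # e.g. valence/arousal estimates
        labels  = rng.integers(0, 2, size=n)   # 0 = novice, 1 = expert (synthetic)

        # Unimodal baseline vs. late fusion by simple feature concatenation.
        unimodal = cross_val_score(RandomForestClassifier(random_state=0),
                                   gaze, labels, cv=5)
        fused = cross_val_score(RandomForestClassifier(random_state=0),
                                np.hstack([posture, gaze, emotion]), labels, cv=5)
        print(f"gaze only: {unimodal.mean():.2f}  fused: {fused.mean():.2f}")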

    Towards Tutoring an Interactive Robot

    Wrede B, Rohlfing K, Spexard TP, Fritsch J. Towards tutoring an interactive robot. In: Hackel M, ed. Humanoid Robots, Human-like Machines. ARS; 2007: 601-612.

    Many classical approaches developed so far for learning in a human-robot interaction setting have focused on rather low-level motor learning by imitation. Some doubts, however, have been cast on whether higher-level functioning will be achieved with this approach. Higher-level processes include, for example, the cognitive capability to assign meaning to actions in order to learn from the tutor. Such capabilities require that an agent not only be able to mimic the motor movement of the action performed by the tutor, but also understand the constraints, the means, and the goal(s) of an action in the course of its learning process. Further support for this hypothesis comes from parent-infant instruction, where it has been observed that parents are very sensitive and adaptive tutors who modify their behavior to the cognitive needs of their infant. Based on these insights, we have started our research agenda on analyzing and modeling learning in a communicative situation by analyzing parent-infant instruction scenarios with automatic methods. Results confirm the well-known observation that parents modify their behavior when interacting with their infant. We assume that these modifications do not only serve to keep the infant's attention but indeed help the infant to understand the actual goal of an action, including relevant information such as constraints and means, by enabling it to structure the action into smaller, meaningful chunks. We were able to determine first objective measurements from video as well as audio streams that can serve as cues for this information in order to facilitate the learning of actions.
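
    To make the idea of structuring an action into smaller, meaningful chunks concrete, one simple video-derived cue is a pause in hand motion. The sketch below segments a tracked trajectory at runs of low speed; the thresholds and the pause criterion are illustrative assumptions of this example, not the objective measurements reported by the authors.

        import numpy as np

        def segment_by_pauses(positions, dt=0.04, speed_thresh=0.05, min_pause=3):
            # Split a tracked hand trajectory into chunks at motion pauses:
            # a pause is a run of at least `min_pause` frames whose speed
            # stays below `speed_thresh` (metres per second).
            speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
            boundaries, run = [], 0
            for i, is_slow in enumerate(speed < speed_thresh):
                run = run + 1 if is_slow else 0
                if run == min_pause:             # pause confirmed: chunk boundary
                    boundaries.append(i - min_pause + 1)
            return boundaries

        # Toy "move, pause, move" demonstration; a boundary is found at frame 2.
        demo = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.2, 0.0],
                         [0.2, 0.0], [0.2, 0.0], [0.3, 0.0], [0.4, 0.0]])
        print(segment_by_pauses(demo))   # -> [2]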

    AR-Enhanced Human-Robot-Interaction - Methodologies, Algorithms, Tools

    By using Augmented Reality in Human-Robot-Interaction scenarios, it is possible to improve training, programming, maintenance, and process monitoring. AR-enhanced Human-Robot Interaction means that activities can be conducted not only in a training facility with physical robot(s) but also in a completely virtual environment. Using virtual environments requires only a computer and possibly a Head-Mounted Display. This reduces the bottlenecks of overbooked physical training facilities. A physical environment for activities with robot(s) will still be required; however, also using virtual environments increases flexibility and lets the human operator focus on training more complicated tasks. Partially funded by the FP7 EU project LIAA (http://www.project-leanautomation.eu/).

    Spatial Programming for Industrial Robots through Task Demonstration

    We present an intuitive system for the programming of industrial robots using markerless gesture recognition and mobile augmented reality in terms of programming by demonstration. The approach covers gesture-based task definition and adaptation by human demonstration, as well as task evaluation through augmented reality. A 3D motion tracking system and a handheld device establish the basis for the presented spatial programming system. In this publication, we present a prototype for the programming of an assembly sequence consisting of several pick-and-place tasks. A scene reconstruction provides pose estimation of known objects with the help of the handheld's 2D camera. The programmer is thus able to define the program through natural bare-hand manipulation of these objects, with direct visual feedback in the augmented reality application. The program can be adapted by gestures and subsequently transmitted to an arbitrary industrial robot controller using a unified interface. Finally, we discuss an application of the presented spatial programming approach to robot-based welding tasks.
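
    The abstract does not specify the unified controller interface, so the following is a hypothetical sketch of how a demonstrated pick-and-place sequence might be represented and serialized for transmission to a controller; the message format and all field names are assumptions of this example.

        import json
        from dataclasses import dataclass, asdict

        @dataclass
        class PickPlaceTask:
            # One demonstrated pick-and-place step; poses in the robot base frame
            # as (x, y, z, roll, pitch, yaw). The pose convention is an assumption.
            object_id: str
            pick_pose: tuple
            place_pose: tuple

        def to_controller_message(sequence):
            # Serialize a demonstrated assembly sequence for transmission to a
            # (hypothetical) unified robot controller interface.
            return json.dumps([asdict(task) for task in sequence])

        # One step of a demonstrated assembly: pick "gear_a" and place it rotated.
        program = [PickPlaceTask("gear_a",
                                 (0.40, 0.10, 0.02, 0.0, 0.0, 0.0),
                                 (0.55, -0.05, 0.02, 0.0, 0.0, 1.57))]
        print(to_controller_message(program))

    A declarative task list of this kind keeps the demonstration-derived program independent of any particular robot vendor, which is the point of routing it through one unified interface.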