25 research outputs found

    Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze

    Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B. Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze. Interaction Studies. 2014;15(1):55-98.
    Research on tutoring in parent-infant interaction has shown that tutors - when presenting some action - modify both their verbal and manual performance for the learner (‘motherese’, ‘motionese’). Investigating the sources and effects of the tutors’ action modifications, we suggest an interactional account of ‘motionese’. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors’ action modifications (in particular: high arches) functioned as an orienting device to guide the infant’s visual attention (gaze). Action modification and the recipient’s gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic ‘Social Learning’. We argue that a robot system could use on-line feedback strategies (e.g. gaze) to pro-actively shape a tutor’s action presentation as it emerges.
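
    As a hint of how such an online feedback strategy might be realized on the robot side, here is a minimal sketch, assuming the robot can estimate the tutor's hand speed and select between two gaze targets; the speed threshold and target names are hypothetical, not taken from the study.

```python
# Hypothetical gaze feedback policy for a robot learner: follow the
# tutor's hand while the demonstration is trackable, and look up at the
# tutor's face to prompt a modification when it is not. The 0.5 m/s
# threshold and the target names are illustrative assumptions.
def gaze_feedback(hand_speed, max_trackable_speed=0.5):
    """Choose a gaze target given the tutor's current hand speed (m/s)."""
    if hand_speed > max_trackable_speed:
        return "tutor_face"   # displaying trouble tends to elicit slower, clearer motion
    return "tutor_hand"       # follow the demonstration as it unfolds
```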

    Developmental Human-Robot Imitation Learning of Drawing with a Neuro Dynamical System

    This paper deals with the influence of teaching style and of developmental processes in the learning model on the acquired representations (primitives). We investigate these influences by introducing a hierarchical recurrent neural network as the robot's model, together with a form of motionese (a caregiver's use of simpler and more exaggerated motions when showing a task to an infant). We modified a Multiple Timescales Recurrent Neural Network (MTRNN) to serve as the robot's self-model; the number of layers in the MTRNN increases as it learns more complex events. We evaluate our approach with the humanoid robot “Actroid” in an imitation experiment in which a human caregiver gives the robot the task of pushing two buttons. Experimental results and analysis confirm that learning with phased teaching and structuring enables the robot to acquire clear motion primitives, visible as activities in the fast context layer of the MTRNN, and to handle unknown motions.
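
    The MTRNN the abstract builds on is a leaky-integrator recurrent network whose units update on different timescales: fast context units capture short motion primitives while slow context units sequence them into longer events. The sketch below illustrates that update rule; the layer sizes, time constants, and random weights are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch of MTRNN-style leaky-integrator dynamics with one
# fast and one slow context layer. Sizes and time constants are assumed
# for illustration only.
import numpy as np

class MTRNNSketch:
    def __init__(self, n_io=10, n_fast=30, n_slow=10, tau_fast=2.0, tau_slow=50.0):
        rng = np.random.default_rng(0)
        n = n_io + n_fast + n_slow
        # Per-unit time constants: IO and fast-context units react quickly,
        # slow-context units integrate over long event sequences.
        self.tau = np.concatenate([
            np.full(n_io + n_fast, tau_fast),
            np.full(n_slow, tau_slow),
        ])
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # recurrent weights
        self.u = np.zeros(n)   # membrane potentials
        self.n_io = n_io

    def step(self, x):
        """One update; x is the current input to the IO units."""
        drive = self.W @ np.tanh(self.u)
        drive[: self.n_io] += x
        # Leaky integration: units with a large tau change slowly.
        self.u = (1.0 - 1.0 / self.tau) * self.u + drive / self.tau
        return np.tanh(self.u[: self.n_io])   # activation of the IO units

net = MTRNNSketch()
prediction = net.step(np.zeros(10))   # one step with a zero input frame
```

    Growing the network as in the paper would then amount to adding further (slower) context layers as the learned events become more complex.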

    Measurement and analysis of interactive behavior in tutoring action with children and robots

    Vollmer A-L. Measurement and analysis of interactive behavior in tutoring action with children and robots. Bielefeld: Universität Bielefeld; 2011.
    Robotics research is increasingly addressing the issue of enabling robots to learn in social interaction. In contrast to the traditional approach, by which robots are programmed by experts and prepared for and restricted to one specific purpose, they are now envisioned as general-purpose machines that should be able to carry out different tasks and thus solve various problems in everyday environments. Robots which are able to learn novel actions in social interaction with a human tutor would have many advantages. Inexperienced users could "program" new skills for a robot simply by demonstrating them. Children are able to rapidly learn in social interaction. Modifications in tutoring behavior toward children ("motionese") are assumed to assist their learning processes. Like small children, robots do not have much experience of the world and thus could make use of this beneficial natural tutoring behavior if it were employed when tutoring them. To achieve this goal, the thesis provides theoretical background on imitation learning as a central field of social learning, which has received much attention in robotics, and develops new interdisciplinary methods to measure interactive behavior. Based on this background, tutoring behavior is examined in adult-child, adult-adult, and adult-robot interactions by applying the developed methods. The findings reveal that the learner's feedback is a constituent part of the natural tutoring interaction and shapes the tutor's demonstration behavior. The work provides an insightful understanding of interactional patterns and processes. From this it derives feedback strategies for human-robot tutoring interactions, with which a robot could prompt hand movement modifications during the tutor's action demonstration by using its gaze, enabling robots to elicit advantageous modifications of the tutor's behavior.

    Robot feedback shapes the tutor's presentation. How a robot's online gaze strategies lead to micro-adaptation of the human's conduct

    Pitsch K, Vollmer A-L, Muehlig M. Robot feedback shapes the tutor's presentation. How a robot's online gaze strategies lead to micro-adaptation of the human's conduct. Interaction Studies. 2013;14(2):268-296.
    The paper investigates the effects of a humanoid robot's online feedback during a tutoring situation in which a human demonstrates how to make a frog jump across a table. Motivated by micro-analytic studies of adult-child-interaction, we investigated whether tutors react to a robot's gaze strategies while they are presenting an action, and if so, how they adapt to them. Analysis reveals that tutors adjust typical "motionese" parameters (pauses, speed, and height of motion). We argue that a robot - when using adequate online feedback strategies - has at its disposal an important resource with which it could pro-actively shape the tutor's presentation and help generate the input from which it would benefit most. These results advance our understanding of robotic "Social Learning" in that they suggest considering human and robot as one interactional learning system.
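
    The "motionese" parameters named in the abstract (pauses, speed, and height of motion) can be quantified directly from a tracked hand trajectory. Below is a hedged sketch of such a measurement; the trajectory format, sampling rate, and pause threshold are assumptions for illustration, not the authors' actual analysis pipeline.

```python
# Illustrative measurement of motionese parameters from one demonstration.
# Trajectory format and thresholds are assumed, not taken from the paper.
import numpy as np

def motionese_parameters(traj, fps=30.0, pause_speed=0.02):
    """Summarize pauses, speed, and height of motion for a demonstration.

    traj: (T, 3) array of hand positions in metres (z pointing up),
          sampled at fps Hz.
    pause_speed: speeds below this threshold (m/s) count as a pause.
    """
    vel = np.diff(traj, axis=0) * fps        # frame-to-frame velocity
    speed = np.linalg.norm(vel, axis=1)      # scalar speed per frame
    return {
        "mean_speed": float(speed.mean()),                           # m/s
        "pause_time": float((speed < pause_speed).sum() / fps),      # s
        "motion_height": float(traj[:, 2].max() - traj[:, 2].min()), # m
    }
```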

    A humanoid robot’s effortful adaptation boosts partners’ commitment to an interactive teaching task

    We tested the hypothesis that, if a robot apparently invests effort in teaching a new skill to a human participant, the human participant will reciprocate by investing more effort in teaching the robot a new skill, too. To this end, we devised a scenario in which the iCub and a human participant alternated in teaching each other new skills. In the Adaptive condition of the robot teaching phase, the iCub slowed down its movements when repeating a demonstration for the human learner, whereas in the Unadaptive condition it sped the movements up when repeating the demonstration. In a subsequent participant teaching phase, human participants were asked to give the iCub a demonstration, and then to repeat it if the iCub had not understood. We predicted that in the Adaptive condition, participants would reciprocate the iCub's adaptivity by investing more effort to slow down their movements and to increase segmentation when repeating their demonstration. The results showed that this was true when participants experienced the Adaptive condition after the Unadaptive condition and not when the order was inverted, indicating that participants were particularly sensitive to the changes in the iCub's level of commitment over the course of the experiment.

    A multimodal corpus for the evaluation of computational models for (grounded) language acquisition

    Gaspers J, Panzner M, Lemme A, Cimiano P, Rohlfing K, Wrede S. A multimodal corpus for the evaluation of computational models for (grounded) language acquisition. In: EACL Workshop on Cognitive Aspects of Computational Language Learning. 2014.

    Towards gestural understanding for intelligent robots

    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012.
    A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays smartphones are probably the most typical instances of such systems. Another class of systems that is getting increasing attention are intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots. This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate Chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures when incorporating context, e.g., what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for gesture understanding and is addressed explicitly in a separate Chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches to incorporating context for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized human-robot interaction quality. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research targeted at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last Chapter, as this research direction may be highly influential for creating future gesture understanding systems.
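
    As a concrete illustration of the trajectory-based recognition step surveyed in the book, the sketch below classifies a hand trajectory by nearest-neighbour matching under dynamic time warping, one of the common techniques in this literature; the template set and trajectory format are illustrative assumptions, not a method proposed by the author.

```python
# Nearest-neighbour gesture classification under dynamic time warping.
# Templates and the observed trajectory are hypothetical examples.
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two (T, d) trajectories."""
    na, nb = len(a), len(b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

def classify(trajectory, templates):
    """Return the label of the nearest template under DTW distance."""
    return min(templates, key=lambda label: dtw(trajectory, templates[label]))

# Two hypothetical templates: a leftward swipe and an upward stroke.
templates = {
    "swipe_left": np.linspace([1.0, 0.0], [0.0, 0.0], 20),
    "raise_hand": np.linspace([0.0, 0.0], [0.0, 1.0], 20),
}
observed = np.linspace([0.9, 0.1], [0.1, 0.0], 15)   # noisy leftward motion
print(classify(observed, templates))                  # -> "swipe_left"
```

    A context-aware system in the sense of the book would then reinterpret the winning label using situational context, e.g. the object currently pointed at.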

    Visual Attention for Robotic Cognition: A Biologically Inspired Probabilistic Architecture

    The human being, the most magnificent autonomous entity in the universe, frequently decides 'what to look at' in day-to-day life without even realizing the complexities of the underlying process. When it comes to the design of such an attention system for autonomous robots, all of a sudden this apparently simple task appears to be an extremely complex one, with highly dynamic interaction among motor skills, knowledge and experience developed throughout a lifetime, the highly connected circuitry of the visual cortex, and super-fast timing. The most fascinating thing about the visual attention system of the primates is that the underlying mechanism is not precisely known yet. Different influential theories and hypotheses regarding this mechanism, however, have been proposed in psychology and neuroscience. These theories and hypotheses have encouraged research on the synthetic modeling of visual attention in computer vision, computational neuroscience and, very recently, AI robotics. The major motivation behind the computational modeling of visual attention is two-fold: understanding the mechanism underlying primates' cognition, and using the principle of focused attention in different real-world applications, e.g. in computer vision, surveillance, and robotics. Accordingly, we observe the rise of two different trends in the computational modeling of visual attention. The first is mostly focused on developing mathematical models which mimic, as much as possible, the details of the primates' attention system: the structure, the connectivity among visual neurons and different regions of the visual cortex, the flow of information, etc. Such models provide a way to test theories of the primates' visual attention with minimal involvement of live subjects. This is a magnificent way to use technological advancement for the understanding of human cognition. The second trend in computational modeling, on the other hand, uses the methodological sophistication of biological processes (like visual attention) to advance technology. These models are mostly concerned with developing a technical system of visual attention which can be used in real-world applications where the principle of focused attention might play a significant role in redundant-information management. This thesis is focused on developing a computational model of visual attention for robotic cognition and, therefore, belongs to the second trend. The design of a visual attention model for robotic systems, as a component of their cognition, comes with a number of challenges which generally do not appear in traditional computer vision applications of visual attention. Robotic models of visual attention, although heavily inspired by the rich literature on visual attention in computer vision, adopt different measures to cope with these challenges. This thesis proposes a Bayesian model of visual attention designed specifically for robotic systems and, therefore, tackles the challenges involved in robotic visual attention. The operation of the proposed model is guided by the theory of biased competition, a popular theory from cognitive neuroscience describing the mechanism of primates' visual attention. The proposed Bayesian attention model offers a robot-centric approach to visual attention in which the head-pose of a robot in the 3D world is estimated recursively such that the robot can focus on the most behaviorally relevant stimuli in its environment. The behavioral relevance of an object is determined based on two criteria inspired by the postulates of the biased competition hypothesis of visual attention in the primates. Accordingly, the proposed model encourages a robot to focus on novel stimuli, or on stimuli similar to a 'sought-for' object, depending on the context. In order to address a number of robot-specific issues of visual attention, the proposed model is further extended to the multi-modal case, where speech commands from the human are used to modulate the visual attention behavior of the robot. Owing to the sensor-fusion characteristic inherent in Bayesian inference, the model naturally accommodates multi-modal information during attention selection. This enables the proposed model to serve as the core component of an attention-oriented, speech-based human-robot interaction framework. Extensive experiments are performed in the real world to investigate different aspects of the proposed Bayesian visual attention model.
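
    To make the selection mechanism concrete: under biased competition, bottom-up novelty and top-down similarity to a sought-for object jointly determine which stimulus wins attention. The following is a minimal sketch of one recursive Bayesian update of that kind; the likelihood terms and mixing weight are assumptions for illustration, not the thesis's actual observation models.

```python
# Sketch of recursive Bayesian attention selection over candidate targets.
# The two likelihood cues and their mixing weight are assumed for illustration.
import numpy as np

def attend(prior, novelty, similarity, top_down_weight=0.5):
    """One recursive update of the belief over n candidate targets.

    prior:      (n,) current belief over targets
    novelty:    (n,) bottom-up evidence (e.g. low familiarity score)
    similarity: (n,) top-down evidence (match to the sought-for object)
    """
    w = top_down_weight
    likelihood = (1 - w) * novelty + w * similarity  # the two cues compete
    posterior = prior * likelihood                    # Bayes: prior x likelihood
    posterior /= posterior.sum()                      # normalize the belief
    return posterior, int(np.argmax(posterior))       # belief + chosen target

belief = np.full(4, 0.25)   # uniform belief over four candidate objects
belief, target = attend(belief,
                        novelty=np.array([0.9, 0.1, 0.2, 0.1]),
                        similarity=np.array([0.1, 0.8, 0.1, 0.2]))
```

    Because the update is just a product of a prior and per-cue likelihoods, adding a further modality (such as the speech commands mentioned above) amounts to multiplying in one more likelihood term.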