
    Towards Tutoring an Interactive Robot

    Wrede B, Rohlfing K, Spexard TP, Fritsch J. Towards tutoring an interactive robot. In: Hackel M, ed. Humanoid Robots, Human-like Machines. ARS; 2007: 601-612.

    Many classical approaches developed so far for learning in a human-robot interaction setting have focused on rather low-level motor learning by imitation. Some doubts, however, have been cast on whether higher-level functioning can be achieved with this approach. Higher-level processes include, for example, the cognitive capability to assign meaning to actions in order to learn from the tutor. Such capabilities require that an agent not only be able to mimic the motoric movement of the action performed by the tutor, but also understand the constraints, the means, and the goal(s) of an action in the course of its learning process. Further support for this hypothesis comes from parent-infant instruction, where it has been observed that parents are very sensitive and adaptive tutors who modify their behavior according to the cognitive needs of their infant. Based on these insights, we have started our research agenda on analyzing and modeling learning in a communicative situation by analyzing parent-infant instruction scenarios with automatic methods. Our results confirm the well-known observation that parents modify their behavior when interacting with their infant. We assume that these modifications do not only serve to keep the infant’s attention but indeed help the infant to understand the actual goal of an action, including relevant information such as constraints and means, by enabling it to structure the action into smaller, meaningful chunks. We were able to determine first objective measurements from video as well as audio streams that can serve as cues for this information in order to facilitate the learning of actions.
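    The abstract does not report the concrete measurements, but the idea of using motion cues to structure a demonstrated action into smaller, meaningful chunks can be sketched as follows. This is an illustrative sketch, not the authors' method: the hand-speed input, the thresholds, and the function name are all assumptions made for the example.

```python
def segment_action(hand_speeds, pause_threshold=0.05, min_pause_frames=3):
    """Split a demonstrated action into chunks separated by sustained pauses.

    hand_speeds      -- per-frame hand speed, e.g. from video tracking
    pause_threshold  -- speeds below this value count as pausing
    min_pause_frames -- a pause must last this many frames to end a chunk
    """
    chunks, start, pause_run = [], None, 0
    for i, v in enumerate(hand_speeds):
        if v >= pause_threshold:          # the hand is moving
            if start is None:
                start = i                 # a new chunk begins with motion
            pause_run = 0
        else:                             # the hand is (nearly) still
            pause_run += 1
            if start is not None and pause_run >= min_pause_frames:
                # close the chunk at the onset of the pause
                chunks.append((start, i - min_pause_frames + 1))
                start = None
    if start is not None:                 # motion continued to the end
        chunks.append((start, len(hand_speeds)))
    return chunks

# Two movement phases separated by a three-frame pause:
print(segment_action([1.0, 0.9, 1.1, 0.0, 0.0, 0.0, 0.8, 1.2]))
# [(0, 3), (6, 8)]
```

    A tutor's exaggerated pauses between sub-actions ('motionese') would make such pause-based chunk boundaries easier to detect than in adult-directed demonstrations.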

    Robots show us how to teach them: Feedback from robots shapes tutoring behavior during action learning

    Vollmer A-L, Mühlig M, Steil JJ, et al. Robots show us how to teach them: Feedback from robots shapes tutoring behavior during action learning. PLoS ONE. 2014;9(3): e91349.

    Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action, and thus advocate a paradigm shift in robot action learning research toward truly interactive systems learning in and benefiting from interaction.

    Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze

    Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B. Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze. Interaction Studies. 2014;15(1):55-98.

    Research on tutoring in parent-infant interaction has shown that tutors - when presenting some action - modify both their verbal and manual performance for the learner (‘motherese’, ‘motionese’). Investigating the sources and effects of the tutors’ action modifications, we suggest an interactional account of ‘motionese’. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors’ action modifications (in particular, high arches) functioned as an orienting device to guide the infant’s visual attention (gaze). Action modification and the recipient’s gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic ‘Social Learning’. We argue that a robot system could use on-line feedback strategies (e.g. gaze) to pro-actively shape a tutor’s action presentation as it emerges.

    Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues

    In this contribution, we present a large-scale hierarchical system for object detection fusing bottom-up (signal-driven) processing results with top-down (model- or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level, where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions, which we show to be applicable to different object detection mechanisms. In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than 1 h of driving, we can show strong increases in performance and generalization when compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach, especially when processing resources are constrained.
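    As a toy illustration of the competitive selection step under top-down modulation (not the paper's implementation: the confidence values, bias values, and function name below are invented for the example), the hypothesis level can be sketched as multiplying bottom-up confidences by model-derived biases before ranking the winners:

```python
def biased_selection(confidences, topdown_bias, k=2):
    """Rank hypotheses after top-down attentional modulation.

    confidences  -- bottom-up detection confidences, one per hypothesis
    topdown_bias -- per-hypothesis facilitation (>1) or suppression (<1)
                    derived from a task- or object-specific model
    k            -- number of winning hypotheses to keep
    """
    modulated = [c * b for c, b in zip(confidences, topdown_bias)]
    order = sorted(range(len(modulated)),
                   key=lambda i: modulated[i], reverse=True)
    return order[:k], modulated

# Three car hypotheses: top-down knowledge suppresses hypothesis 1
# (implausible image region) and facilitates hypothesis 2.
winners, scores = biased_selection([0.8, 0.7, 0.6], [1.0, 0.5, 1.4])
print(winners)  # [2, 0] -- hypothesis 2 now outranks hypothesis 0
```

    The point of the early coupling argued for in the abstract is visible even in this sketch: the bias reorders hypotheses before selection, rather than filtering an already-selected set afterwards (late hypothesis rejection).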

    Visio-spatial road boundary detection for unmarked urban and rural roads

    Kühnl T, Fritsch J. Visio-spatial road boundary detection for unmarked urban and rural roads. In: 2014 IEEE Intelligent Vehicles Symposium Proceedings. Institute of Electrical and Electronics Engineers (IEEE); 2014.

    Learning to Manipulate Objects: A Quantitative Evaluation of Motionese

    Rohlfing K, Fritsch J, Wrede B. Learning to Manipulate Objects: A Quantitative Evaluation of Motionese. In: Third International Conference on Development and Learning (ICDL 2004). La Jolla, CA; 2004: 27.

    One dream of robotics research is to build robot companions that can interact outside the lab in real-world environments such as private homes. There has been good progress on many components needed for such a robot companion, but only a few systems documented in the literature actually integrate a larger number of components leading to a more natural and human-like interaction with such a robot. However, only the integration of many components on the same robot allows us to study embodied interaction and leads to new insights on how to improve the overall appearance of such a robot companion. Towards this end, we present the Bielefeld Robot Companion BIRON as an integration platform for studying embodied interaction. Reporting on different stages of the alternating development and evaluation process, we argue that an integrated and actually running system is necessary to assess human needs and demands under real-life conditions and to determine what functions are still missing. This interplay between evaluation and development stimulates the development process as well as the design of appropriate evaluation metrics. Moreover, such constant evaluations of the system help identify problematic aspects that need to be solved before sophisticated robot companions can be successfully evaluated in long-term user studies.