
    Towards Tutoring an Interactive Robot

    Wrede B, Rohlfing K, Spexard TP, Fritsch J. Towards tutoring an interactive robot. In: Hackel M, ed. Humanoid Robots, Human-like Machines. ARS; 2007: 601-612.

    Many classical approaches to learning in a human-robot interaction setting have focused on rather low-level motor learning by imitation. Some doubts, however, have been cast on whether higher-level functioning can be achieved with this approach. Higher-level processes include, for example, the cognitive capability to assign meaning to actions in order to learn from the tutor. Such capabilities require that an agent not only mimic the motor movement of the action performed by the tutor, but also understand the constraints, the means, and the goal(s) of an action in the course of its learning process. Further support for this hypothesis comes from parent-infant instruction, where it has been observed that parents are very sensitive and adaptive tutors who modify their behavior to the cognitive needs of their infant. Based on these insights, we have started a research agenda on analyzing and modeling learning in a communicative situation by analyzing parent-infant instruction scenarios with automatic methods. Results confirm the well-known observation that parents modify their behavior when interacting with their infant. We assume that these modifications not only serve to keep the infant’s attention but indeed help the infant to understand the actual goal of an action, including relevant information such as constraints and means, by enabling it to structure the action into smaller, meaningful chunks. We were able to determine first objective measurements from video as well as audio streams that can serve as cues for this information in order to facilitate the learning of actions.

    Robots show us how to teach them: Feedback from robots shapes tutoring behavior during action learning

    Vollmer A-L, MĂŒhlig M, Steil JJ, et al. Robots show us how to teach them: Feedback from robots shapes tutoring behavior during action learning. PLoS ONE. 2014;9(3):e91349.

    Robot learning by imitation requires the detection of a tutor's action demonstration and its relevant parts. Current approaches implicitly assume a unidirectional transfer of knowledge from tutor to learner. The presented work challenges this predominant assumption based on an extensive user study with an autonomously interacting robot. We show that, by providing feedback, a robot learner influences the human tutor's movement demonstrations in the process of action learning. We argue that the robot's feedback strongly shapes how tutors signal what is relevant to an action, and we thus advocate a paradigm shift in robot action learning research toward truly interactive systems that learn in, and benefit from, interaction.

    Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze

    Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B. Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze. Interaction Studies. 2014;15(1):55-98.

    Research on tutoring in parent-infant interaction has shown that tutors, when presenting some action, modify both their verbal and manual performance for the learner (‘motherese’, ‘motionese’). Investigating the sources and effects of the tutors’ action modifications, we suggest an interactional account of ‘motionese’. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors’ action modifications (in particular, high arches) functioned as an orienting device to guide the infant’s visual attention (gaze). Action modification and the recipient’s gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic ‘Social Learning’. We argue that a robot system could use on-line feedback strategies (e.g. gaze) to pro-actively shape a tutor’s action presentation as it emerges.

    Biased Competition in Visual Processing Hierarchies: A Learning Approach Using Multiple Cues

    In this contribution, we present a large-scale hierarchical system for object detection that fuses bottom-up (signal-driven) processing results with top-down (model- or task-driven) attentional modulation. Specifically, we focus on the question of how the autonomous learning of invariant models can be embedded into a performing system and how such models can be used to define object-specific attentional modulation signals. Our system implements bi-directional data flow in a processing hierarchy. The bottom-up data flow proceeds from a preprocessing level to the hypothesis level, where object hypotheses created by exhaustive object detection algorithms are represented in a roughly retinotopic way. A competitive selection mechanism is used to determine the most confident hypotheses, which are used on the system level to train multimodal models that link object identity to invariant hypothesis properties. The top-down data flow originates at the system level, where the trained multimodal models are used to obtain space- and feature-based attentional modulation signals, providing biases for the competitive selection process at the hypothesis level. This results in object-specific hypothesis facilitation/suppression in certain image regions, which we show to be applicable to different object detection mechanisms. In order to demonstrate the benefits of this approach, we apply the system to the detection of cars in a variety of challenging traffic videos. Evaluating our approach on a publicly available dataset containing approximately 3,500 annotated video images from more than one hour of driving, we show strong increases in performance and generalization compared to object detection in isolation. Furthermore, we compare our results to a late hypothesis rejection approach, showing that early coupling of top-down and bottom-up information is a favorable approach, especially when processing resources are constrained.
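    The interplay described above can be illustrated with a minimal sketch: bottom-up hypotheses carry detection confidences, a top-down stage adds space- and feature-based biases, and a competitive selection keeps only the most confident hypotheses. This is not the authors' implementation; the bias weights, threshold, and data layout are illustrative assumptions.

    ```python
    # Minimal sketch (not the paper's implementation) of competitive hypothesis
    # selection with top-down attentional bias: bottom-up hypotheses carry
    # confidence scores, a top-down model adds feature- and space-based biases,
    # and a competition keeps only hypotheses above a confidence threshold.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        label: str         # object identity, e.g. "car"
        region: tuple      # (x, y) grid location (roughly retinotopic position)
        confidence: float  # bottom-up detection confidence

    def top_down_bias(h, task_label, biased_regions):
        """Feature- and space-based modulation: facilitate hypotheses that match
        the task-relevant object class or fall in expected image regions.
        The weights 0.2 and 0.1 are assumed values for illustration."""
        bias = 0.0
        if h.label == task_label:
            bias += 0.2          # feature-based facilitation
        if h.region in biased_regions:
            bias += 0.1          # space-based facilitation
        return bias

    def competitive_selection(hyps, task_label, biased_regions, threshold=0.5):
        """Keep hypotheses whose biased confidence reaches the threshold,
        ordered so the most confident hypothesis wins the competition."""
        scored = [(h.confidence + top_down_bias(h, task_label, biased_regions), h)
                  for h in hyps]
        return [h for s, h in sorted(scored, key=lambda p: p[0], reverse=True)
                if s >= threshold]

    hyps = [
        Hypothesis("car", (4, 2), 0.45),   # facilitated by feature and space bias
        Hypothesis("sign", (1, 1), 0.55),  # no bias, survives on its own
        Hypothesis("car", (7, 3), 0.25),   # too weak even with feature bias
    ]
    selected = competitive_selection(hyps, "car", {(4, 2)})
    print([h.label for h in selected])  # → ['car', 'sign']
    ```

    The weak car hypothesis is suppressed despite the feature-based bias, while the biased strong car hypothesis overtakes the unbiased sign: the same facilitation/suppression effect the abstract describes, reduced to additive scoring.
    
    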

    Visio-spatial road boundary detection for unmarked urban and rural roads

    Kuhnl T, Fritsch J. Visio-spatial road boundary detection for unmarked urban and rural roads. In: 2014 IEEE Intelligent Vehicles Symposium Proceedings. Institute of Electrical and Electronics Engineers (IEEE); 2014.

    Erkennung von Aggregaten aus Struktur und Handlung

    Bauckhage C, Fritsch J, Sagerer G. Erkennung von Aggregaten aus Struktur und Handlung. KĂŒnstliche Intelligenz. 1999;3:4-11.

    Monitoring a construction process with a computer vision system requires either the reliable detection of assembled objects or the recognition of action sequences during construction. The article describes a compact method for modeling aggregated objects with which structural descriptions of assemblies can be generated from image data, and which serves as a basis for recognizing action sequences. In principle, the detection of assemblies in single images and the recognition of action sequences in image streams constitute different methods for the automatic analysis of construction processes, yet they yield comparable information. Integrating both methods enables a consistent interpretation of flexible assembly processes and the generation of assembly plans, so that the construction of complex objects can be learned.
    • 

    corecore