
    Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze

    Pitsch K, Vollmer A-L, Rohlfing K, Fritsch J, Wrede B. Tutoring in adult-child-interaction: On the loop of the tutor's action modification and the recipient's gaze. Interaction Studies. 2014;15(1):55-98.

    Research on tutoring in parent-infant interaction has shown that tutors, when presenting an action, modify both their verbal and manual performance for the learner (‘motherese’, ‘motionese’). Investigating the sources and effects of the tutors’ action modifications, we suggest an interactional account of ‘motionese’. Using video data from a semi-experimental study in which parents taught their 8- to 11-month-old infants how to nest a set of differently sized cups, we found that the tutors’ action modifications (in particular, high arches) functioned as an orienting device to guide the infant’s visual attention (gaze). Action modification and the recipient’s gaze can be seen to have a reciprocal sequential relationship and to constitute a constant loop of mutual adjustments. Implications are discussed for developmental research and for robotic ‘Social Learning’. We argue that a robot system could use online feedback strategies (e.g. gaze) to pro-actively shape a tutor’s action presentation as it emerges.
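    The closing suggestion about online feedback invites a concrete illustration. The sketch below is our own minimal reading of such a loop, not the authors' system: a hypothetical tutoring_loop raises the arc height of a demonstrated movement while the learner's gaze (here, 2D points) is off the target object, and relaxes it once attention returns. All names, thresholds, and the gaze representation are illustrative.

```python
# Minimal sketch of a gaze-contingent action-modification loop.
# Everything here (names, 2D gaze points, thresholds) is illustrative,
# not taken from the paper.

def gaze_on_target(gaze_point, target, radius=0.1):
    """Hypothetical check: is the learner's gaze within `radius` of the target?"""
    dx = gaze_point[0] - target[0]
    dy = gaze_point[1] - target[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

def tutoring_loop(gaze_stream, target, base_arc=0.2, step=0.05, max_arc=0.6):
    """Adapt the arc height of the demonstrated movement to the learner's
    gaze: exaggerate when attention is lost, relax when the learner
    re-attends (the 'loop of mutual adjustments' from the abstract)."""
    arc = base_arc
    for gaze_point in gaze_stream:
        if gaze_on_target(gaze_point, target):
            arc = max(base_arc, arc - step)  # learner attends: tone down
        else:
            arc = min(max_arc, arc + step)   # attention lost: exaggerate
        yield arc                            # drives the motion generator

# Example: gaze drifts away from the target at (0.0, 0.0), then returns.
gaze = [(0.0, 0.0), (0.5, 0.5), (0.6, 0.5), (0.05, 0.0)]
print(list(tutoring_loop(gaze, target=(0.0, 0.0))))  # [0.2, 0.25, 0.3, 0.25]
```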

    A computational model of acoustic packaging

    Schillingmann L. A computational model of acoustic packaging. Bielefeld: Bielefeld University; 2012.

    Action and language learning in robotics requires flexible methods, since it is not possible to predetermine all possible tasks a robot will be involved in. Future systems need to be able to acquire this knowledge through communication with humans. Children are able to learn new actions although they have limited experience with the events they observe. More specifically, they seem able to identify which parts of an action are relevant and to adapt this newly won knowledge to new situations. Typically this does not happen in isolation but in interaction with an adult. In these interactions, multiple modalities are used concurrently and redundantly. Research on child development has shown that the temporal relations of events in the acoustic and visual modalities have a significant impact on how this information is processed. Specifically, synchrony between action and language has been assumed to be beneficial for finding relevant parts and extracting first knowledge from action demonstrations. This idea was proposed by Hirsh-Pasek and Golinkoff (1996) as acoustic packaging: they suggest that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant parts and to find structure within them.

    The central contribution of this thesis comprises the conception, further development, and implementation of a model inspired by the general idea of acoustic packaging. The resulting model segments action demonstrations into multimodal units called acoustic packages. These units make it possible to measure the level of structuring in action demonstrations. In addition to action segmentation, the acoustic packaging system can flexibly integrate additional sensory cues to acquire first knowledge about the content of action demonstrations. Furthermore, the system was designed to process input online, which enables it to provide feedback to users interacting with a robot.

    The model of acoustic packaging was evaluated on a corpus of adult-adult and adult-child interactions within a cup-stacking scenario. The analyses focus on differences between the structure of child-directed and adult-directed interactions, as well as on developmental trends reflected in the statistical properties of acoustic packages. In addition to adult-child interaction, results on a corpus from a similar scenario with a simulated robot are presented; they indicate that adult-robot interaction exhibits a structure similar to adult-child interaction. Furthermore, tests on the iCub robot showed that semantic information on color terms can be extracted from acoustic packages. This result was supported by further analysis of adult-child interactions, which verified that a substantial amount of semantic information can be gathered by exploiting this connection. Envisioning a continuous interaction between a tutor and the learning robot, acoustic packages provide an initial representation of action structure in interaction.
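    The thesis reports that semantic information such as color terms can be read off acoustic packages. As a rough illustration only (the package layout and function below are hypothetical, not the thesis implementation), one could count how often a color word heard within a package co-occurs with the color that is visually dominant over the same time span:

```python
# Illustrative sketch: associate color terms heard inside an acoustic
# package with the visually dominant color of the same time span.
# The package structure is an assumption made for this example.

COLOR_TERMS = {"red", "green", "blue", "yellow"}

def associate_colors(packages):
    """Count (color word, dominant color) co-occurrences across packages."""
    counts = {}
    for pkg in packages:
        for word in pkg["words"]:
            if word in COLOR_TERMS:
                key = (word, pkg["dominant_color"])
                counts[key] = counts.get(key, 0) + 1
    return counts

packages = [
    {"words": ["take", "the", "red", "cup"], "dominant_color": "red"},
    {"words": ["now", "the", "green", "one"], "dominant_color": "green"},
    {"words": ["put", "it", "here"], "dominant_color": "green"},
]
print(associate_colors(packages))  # {('red', 'red'): 1, ('green', 'green'): 1}
```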

    A Computational Model of Acoustic Packaging

    Schillingmann L, Wrede B, Rohlfing K. A Computational Model of Acoustic Packaging. IEEE Transactions on Autonomous Mental Development. 2009;1(4):226-237.

    In order to learn from and interact with humans, robots need to understand actions and make use of language in social interactions. The role of language in the learning of actions was emphasized by Hirsh-Pasek and Golinkoff (MIT Press, 1996), who introduced the idea of acoustic packaging: acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant parts and to find structure within them. In this article, we present a computational model of the multimodal interplay of action and language in tutoring situations. For our purposes, we understand events as temporal intervals, which have to be segmented in both the visual and the acoustic modality. Our acoustic packaging algorithm merges the segments from both modalities based on temporal overlap. First evaluation results show that acoustic packaging can provide a meaningful segmentation of action demonstrations within tutoring behavior. We discuss our findings with regard to meaningful action segmentation and, based on our future vision of acoustic packaging, point out a roadmap describing its further development and the interactive scenarios it will be employed in.
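    The merging step named in this abstract can be pictured with a short sketch. This is a minimal reading under our own assumptions (the interval representation, the greedy merge order, and all names are ours, not the published algorithm): segments in both modalities are time intervals, and a package spans a speech segment together with every action segment it temporally overlaps.

```python
# Sketch of merging acoustic and visual segments into acoustic packages
# by temporal overlap. Interval layout and merge policy are assumptions
# for illustration, not the published algorithm.

def overlaps(a, b):
    """Two (start, end) intervals overlap if neither ends before the other starts."""
    return a[0] < b[1] and b[0] < a[1]

def acoustic_packaging(speech_segments, action_segments):
    """Span each speech segment together with all action segments it
    overlaps; packages whose spans overlap are merged."""
    packages = []
    for s in sorted(speech_segments):
        members = [a for a in action_segments if overlaps(s, a)]
        span = (min([s[0]] + [a[0] for a in members]),
                max([s[1]] + [a[1] for a in members]))
        if packages and overlaps(packages[-1], span):
            prev = packages.pop()  # chain with the previous package
            span = (min(prev[0], span[0]), max(prev[1], span[1]))
        packages.append(span)
    return packages

speech = [(0.0, 1.2), (1.5, 2.8)]  # e.g. "take the cup", "put it inside"
action = [(0.3, 1.0), (1.6, 2.5)]  # e.g. reach-and-grasp, place
print(acoustic_packaging(speech, action))  # [(0.0, 1.2), (1.5, 2.8)]
```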

    Towards a Computational Model of Acoustic Packaging

    Schillingmann L, Wrede B, Rohlfing K. Towards a Computational Model of Acoustic Packaging. In: International Conference on Development and Learning (ICDL 2009). IEEE; 2009.