    Directing human attention with pointing

    © 2014 IEEE. Pointing is a typical means of directing a human's attention to a specific object or event. Robot pointing behaviours that direct the attention of humans are critical for human-robot interaction, communication and collaboration. In this paper, we describe an experiment undertaken to investigate human comprehension of a humanoid robot's pointing behaviour. We programmed a NAO robot to point to markers on a large screen and asked untrained human subjects to identify the target of the robot's pointing gesture. We found that humans are able to identify robot pointing gestures. Human subjects achieved higher levels of comprehension when the robot pointed at objects closer to the gesturing arm and when they stood behind the robot. In addition, we found that subjects' performance improved with each assessment task. These new results can be used to guide the design of effective robot pointing behaviours that enable more effective robot-to-human communication and improve human-robot collaborative performance.
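
    Since the abstract names the platform, a minimal sketch of such a pointing behaviour can be given. The snippet below uses the NAOqi Python SDK's ALMotion module to swing the right arm toward a target; the joint-angle mapping, sign conventions and robot address are simplifying assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: point a NAO's right arm at a target in the torso frame
# (x forward, y left, z up, in metres). The angle math is deliberately simplified.
import math
from naoqi import ALProxy

ROBOT_IP, ROBOT_PORT = "192.168.1.10", 9559  # placeholder address

def point_at(x, y, z):
    motion = ALProxy("ALMotion", ROBOT_IP, ROBOT_PORT)
    motion.setStiffnesses("RArm", 1.0)        # enable the arm motors
    yaw = math.atan2(y, x)                    # horizontal bearing of the target
    pitch = -math.atan2(z, math.hypot(x, y))  # elevation; negative pitch raises the arm
    names = ["RShoulderPitch", "RShoulderRoll", "RElbowRoll"]
    angles = [pitch, yaw, 0.05]               # near-straight elbow for an unambiguous gesture
    motion.angleInterpolationWithSpeed(names, angles, 0.3)

if __name__ == "__main__":
    point_at(1.0, -0.5, 0.3)  # a marker ahead of and to the right of the robot
```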

    Remote presence: supporting deictic gestures through a handheld multi-touch device

    This thesis argues for the possibility of supporting deictic gestures through handheld multi-touch devices in remote presentation scenarios. In [1], Clark distinguishes the indicative techniques of placing-for and directing-to, where placing-for refers to placing a referent into the addressee's attention, and directing-to refers to directing the addressee's attention towards a referent. Keynote, PowerPoint, FuzeMeeting and others support placing-for efficiently with slide transitions and animations, but offer little to no support for directing-to. The traditional "pointing feature" present in some presentation tools comes as a virtual laser pointer or mouse cursor. [12, 13] have shown that the mouse cursor and laser pointer offer very little informational expressiveness and do not do justice to human communicative gestures. In this project, a prototype application was implemented for the iPad in order to explore, develop, and test the concept of pointing in remote presentations. The prototype supports visualizing and navigating the slides as well as "pointing" and zooming. To further investigate the problem and possible solutions, a theoretical framework was designed representing the relationships between the presenter's intention and gesture and the resulting visual effect (cursor) that enables audience members to interpret the meaning of the effect and the presenter's intention. Two studies were performed to investigate people's appreciation of different ways of presenting remotely: an initial qualitative study performed in The Hague, followed by an online quantitative user experiment. The results indicate that subjects found pointing helpful for understanding and concentrating, while the detached video feed of the presenter was considered distracting. The positive qualities of having the video feed were the emotion and social presence that it adds to presentations. For a number of subjects, pointing displayed some of the same social and personal qualities [2] that video affords, though less intensely. The combination of pointing and video proved the most successful, with 10 of 19 subjects scoring it highest, while pointing alone came a close second at 8 of 19. Video was the least preferred, with only one subject favouring it. We suggest that the research performed here could provide a basis for future research and could be applied in a variety of distributed collaborative settings.
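
    The prototype's central mechanism, a deictic cursor that lands on the same slide location for every viewer, reduces to normalising touch coordinates before sharing them. The sketch below illustrates that idea; the event format and names are hypothetical, not the thesis prototype's actual protocol.

```python
# Hedged sketch of a shared deictic cursor: the presenter's touch point is
# normalised to slide-relative coordinates so every remote viewer renders the
# cursor at the same spot regardless of screen size.
import json
from dataclasses import dataclass

@dataclass
class PointerEvent:
    slide: int    # which slide the presenter is pointing at
    u: float      # horizontal position, 0.0 (left) .. 1.0 (right)
    v: float      # vertical position, 0.0 (top) .. 1.0 (bottom)

def normalise_touch(touch_x, touch_y, view_w, view_h, slide):
    """Map a raw touch location on the presenter's device to a
    device-independent pointer event."""
    return PointerEvent(slide, touch_x / view_w, touch_y / view_h)

def denormalise(event, view_w, view_h):
    """Map a pointer event back to pixels on a viewer's display."""
    return event.u * view_w, event.v * view_h

# Presenter side: a touch at (512, 384) on a 1024x768 iPad screen...
evt = normalise_touch(512, 384, 1024, 768, slide=3)
payload = json.dumps(evt.__dict__)          # what would go over the wire

# Viewer side: ...lands at the centre of a 1920x1080 display.
received = PointerEvent(**json.loads(payload))
print(denormalise(received, 1920, 1080))    # (960.0, 540.0)
```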

    Social communication between virtual characters and children with autism

    Children with autism spectrum disorder (ASD) have difficulty with social communication, particularly joint attention. Interaction in a virtual environment (VE) may be a means both for understanding these difficulties and for addressing them. It is first necessary to discover how this population interacts with virtual characters, and whether they can follow joint attention cues in a VE. This paper describes a study in which 32 children with ASD used the ECHOES VE to assist a virtual character in selecting objects by following the character's gaze and/or pointing. Both accuracy and reaction time data suggest that children were able to successfully complete the task, and qualitative data further suggest that most children perceived the character as an intentional being with relevant, mutually directed behaviour.

    Do the eyes have it? Cues to the direction of social attention

    The face communicates an impressive amount of visual information. We use it to identify its owner, how they are feeling, and to help us understand what they are saying. Models of face processing have considered how we extract such meaning from the face but have ignored another important signal: eye gaze. In this article we begin by reviewing evidence from recent neurophysiological studies suggesting that the eyes constitute a special stimulus in at least two senses. First, the structure of the eyes is such that it provides us with a particularly powerful signal to the direction of another person's gaze, and second, we may have evolved neural mechanisms devoted to gaze processing. As a result, gaze direction is analysed rapidly and automatically, and is able to trigger reflexive shifts of an observer's visual attention. However, understanding where another individual is directing their attention involves more than simply analysing their gaze direction. We go on to describe research with adult participants, children and non-human primates suggesting that other cues, such as head orientation and pointing gestures, make significant contributions to the computation of another's direction of attention.

    Domain general learning: Infants use social and non-social cues when learning object statistics.

    Previous research has shown that infants can learn from social cues. But is a social cue more effective at directing learning than a non-social cue? This study investigated whether 9-month-old infants (N = 55) could learn a visual statistical regularity in the presence of a distracting visual sequence when attention was directed by either a social cue (a person) or a non-social cue (a rectangle). The results show that both social and non-social cues can guide infants' attention to a visual shape sequence (and away from a distracting sequence). The social cue directed attention more effectively than the non-social cue during the familiarization phase, but it did not result in significantly stronger learning. The findings suggest that domain-general attention mechanisms allow for the comparable learning seen in both conditions.
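
    The "visual statistical regularity" in studies of this kind is typically a sequence whose adjacent items have high transitional probabilities. As a hedged illustration (the shapes and sequence below are invented, not the study's stimuli), those probabilities can be computed as follows.

```python
# Hedged sketch: transitional probabilities P(next | current) of a toy
# shape stream, the statistic infants are thought to track in such studies.
from collections import Counter

def transitional_probabilities(sequence):
    """P(next | current) for each adjacent pair in the sequence."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A deterministic triplet structure (circle -> star -> cross) repeated in a stream.
stream = ["circle", "star", "cross"] * 20
probs = transitional_probabilities(stream)
print(probs[("circle", "star")])   # 1.0 within a triplet
print(probs[("cross", "circle")])  # also 1.0 here, since triplets repeat back-to-back
```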

    Jointly structuring triadic spaces of meaning and action: book sharing from 3 months on

    This study explores the emergence of triadic interactions through the example of book sharing. As part of a naturalistic study, 10 infants were visited in their homes from 3 to 12 months of age. We report that (1) book sharing as a form of infant-caregiver-object interaction occurred from as early as 3 months. Using micro-level qualitative video analysis, adapting methodologies from conversation and interaction analysis, we demonstrate that caregivers and infants practiced book sharing in a highly co-ordinated way, with caregivers carving out interaction units and shaping actions into action arcs, and infants actively participating and co-ordinating their attention between mother and object from the beginning. We also (2) sketch a developmental trajectory of book sharing over the first year and show that the quality and dynamics of book sharing interactions underwent considerable change as the ecological situation was transformed in parallel with the infants' development of attention and motor skills. Social book sharing interactions reached an early peak at 6 months, with the infants becoming more active in the coordination of attention between caregiver and book. From 7 to 9 months, the infants shifted their interest largely to solitary object exploration, in parallel with newly emerging postural and object manipulation skills, disrupting the social coordination and the cultural frame of book sharing. In the period from 9 to 12 months, social book interactions resurfaced as infants began to effectively integrate object actions within the socially shared activity. In conclusion, to fully understand the development and qualities of triadic cultural activities such as book sharing, we need to look especially at the hitherto overlooked early period from 4 to 6 months, and investigate how shared spaces of meaning and action are structured together in and through interaction, creating the substrate for continuing cooperation and cultural learning.

    Look at Me: Early Gaze Engagement Enhances Corticospinal Excitability During Action Observation

    Direct gaze is a powerful social cue able to capture the onlooker's attention. Besides gaze, head and limb movements can also provide relevant sources of information for social interaction. This study investigated the joint role of direct gaze and hand gestures on onlookers' corticospinal excitability (CE). In two experiments we manipulated the temporal and spatial aspects of observed gaze and hand behavior to assess their role in affecting motor preparation. To do this, transcranial magnetic stimulation (TMS) over the primary motor cortex (M1), coupled with electromyography (EMG) recording, was used. In the crucial manipulation, we showed participants four video clips of an actor who initially displayed eye contact while starting a social request gesture, and then completed the action while directing his gaze toward an object salient for the interaction. This way, the observed gaze potentially expressed the intention to interact. Eye tracking data confirmed that the gaze manipulation was effective in drawing observers' attention to the actor's hand gesture. In the attempt to reveal possible time-locked modulations, we tracked CE at the onset and offset of the request gesture. Neurophysiological results showed an early CE modulation when the actor was about to start the request gesture while looking straight at the participants, compared to when his gaze was averted from the gesture. This effect was time-locked to the kinematics of the actor's arm movement. Overall, data from the two experiments indicate that the joint contribution of direct gaze and early kinematic information, gained while a request gesture is on the verge of beginning, increases the subjective experience of involvement and allows observers to prepare for an appropriate social interaction. On the contrary, the separation of gaze cues and body kinematics can have adverse effects on social motor preparation. CE is highly susceptible to biological cues, such as averted gaze, which can automatically capture and divert the observer's attention. This points to the existence of heuristics based on early action and gaze cues that allow observers to interact appropriately.
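
    For readers unfamiliar with the measure, corticospinal excitability in TMS/EMG studies of this kind is usually indexed by the peak-to-peak amplitude of the motor evoked potential (MEP) in a short window after the TMS pulse. The sketch below computes that on a simulated trace; the window bounds, sampling rate and simulated signal are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: peak-to-peak MEP amplitude from an EMG trace after a TMS pulse.
import numpy as np

def mep_amplitude(emg, fs, pulse_idx, win_ms=(15, 50)):
    """Peak-to-peak MEP amplitude (same units as emg) in a post-pulse window.

    emg       : 1-D EMG trace (mV)
    fs        : sampling rate (Hz)
    pulse_idx : sample index of the TMS pulse
    win_ms    : window after the pulse where the MEP is expected (ms)
    """
    start = pulse_idx + int(win_ms[0] * fs / 1000)
    stop = pulse_idx + int(win_ms[1] * fs / 1000)
    segment = emg[start:stop]
    return segment.max() - segment.min()

# Simulated trial: noisy baseline plus a biphasic MEP ~25 ms after the pulse.
fs, pulse = 5000, 1000
t = np.arange(0, 0.5, 1 / fs)
emg = 0.01 * np.random.randn(t.size)
mep_t = t - (pulse / fs + 0.025)
emg += 1.5 * np.exp(-(mep_t / 0.004) ** 2) * np.sin(2 * np.pi * 120 * mep_t)
print("MEP amplitude (mV): %.2f" % mep_amplitude(emg, fs, pulse))
```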

    Ostensive signals support learning from novel attention cues during infancy

    Social attention cues (e.g., head turning, gaze direction) highlight which events young infants should attend to in a busy environment and, recently, have been shown to shape infants' likelihood of learning about objects and events. Although studies have documented which social cues guide attention and learning during early infancy, few have investigated how infants learn to learn from attention cues. Ostensive signals, such as a face addressing the infant, often precede social attention cues. Therefore, it is possible that infants can use ostensive signals to learn from other novel attention cues. In this training study, 8-month-olds were cued to the location of an event by a novel non-social attention cue (i.e., a flashing square) that was preceded by an ostensive signal (i.e., a face addressing the infant). At test, infants predicted the appearance of specific multimodal events cued by the flashing squares, which had previously been shown to guide attention to, but not inform specific predictions about, the multimodal events (Wu and Kirkham, 2010). Importantly, during the generalization phase, the attention cue continued to guide learning of these events in the absence of the ostensive signal. Subsequent experiments showed that learning was less successful when the ostensive signal was absent, even if an interesting but non-ostensive social stimulus preceded the same cued events.

    Collaborative Gaze Channelling for Improved Cooperation During Robotic Assisted Surgery

    The use of multiple robots for performing complex tasks is becoming common practice in many robot applications. When different operators are involved, effective cooperation with anticipated manoeuvres is important for seamless, synergistic control of all the end-effectors. In this paper, the concept of Collaborative Gaze Channelling (CGC) is presented for improved control of surgical robots during a shared task. Through eye tracking, the fixations of each operator are monitored and presented in a shared surgical workspace. CGC permits remote or physically separated collaborators to share their intention by visualising the eye gaze of their counterparts, and thus recovers, to a certain extent, the information of mutual intent that we rely upon in a vis-à-vis working setting. In this study, the efficiency of surgical manipulation with and without CGC for controlling a pair of bimanual surgical robots is evaluated by analysing the level of coordination of two independent operators. Fitts' law is used to compare the quality of movement with and without CGC. A total of 40 subjects were recruited for this study, and the results show that the proposed CGC framework exhibits significant improvement (p < 0.05) on all the motion indices used for quality assessment. This study demonstrates that visual guidance is an implicit yet effective way of communication during collaborative tasks in robotic surgery. Detailed experimental validation results demonstrate the potential clinical value of the proposed CGC framework. © 2012 Biomedical Engineering Society.
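
    The study's movement-quality metric rests on Fitts' law, MT = a + b · log2(D/W + 1). A minimal sketch of how such an analysis typically proceeds, with invented trial data rather than the study's measurements, is shown below: compute each movement's index of difficulty, then regress movement time on it.

```python
# Hedged sketch: fit Fitts' law MT = a + b * ID to (distance, width, time) trials.
import numpy as np

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty (bits)."""
    return np.log2(distance / width + 1.0)

# (target distance mm, target width mm, movement time s) per trial; invented data.
trials = np.array([
    [ 80.0, 10.0, 0.62],
    [160.0, 10.0, 0.81],
    [ 80.0,  5.0, 0.78],
    [160.0,  5.0, 0.95],
])
ids = index_of_difficulty(trials[:, 0], trials[:, 1])
mt = trials[:, 2]

# Least-squares fit MT = a + b * ID; polyfit returns [slope, intercept].
b, a = np.polyfit(ids, mt, 1)
print("a = %.3f s, b = %.3f s/bit, throughput ~ %.2f bit/s" % (a, b, 1.0 / b))
```

    Lower movement times at a given index of difficulty (a smaller intercept or slope) indicate better-coordinated movement, which is how such indices can separate the with-CGC and without-CGC conditions.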