
    Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech

    Acknowledgements: We thank Albert Russel for assistance in setting up the experiments, and Charlotte Paulisse for help in data collection.

    Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation toward on-line processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question–response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze toward the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.

    Facial signals and social actions in multimodal face-to-face interaction

    In a conversation, recognising the speaker’s social action (e.g., a request) early may help the potential following speakers understand the intended message quickly, and plan a timely response. Human language is multimodal, and several studies have demonstrated the contribution of the body to communication. However, comparatively few studies have investigated (non-emotional) conversational facial signals and very little is known about how they contribute to the communication of social actions. Therefore, we investigated how facial signals map onto the expressions of two fundamental social actions in conversations: asking questions and providing responses. We studied the distribution and timing of 12 facial signals across 6778 questions and 4553 responses, annotated holistically in a corpus of 34 dyadic face-to-face Dutch conversations. Moreover, we analysed facial signal clustering to find out whether there are specific combinations of facial signals within questions or responses. Results showed a high proportion of facial signals, with a qualitatively different distribution in questions versus responses. Additionally, clusters of facial signals were identified. Most facial signals occurred early in the utterance, and had earlier onsets in questions. Thus, facial signals may critically contribute to the communication of social actions in conversation by providing social action-specific visual information.

    Conversational eyebrow frowns facilitate question identification: An online study using virtual avatars

    Conversation is a time-pressured environment. Recognizing a social action (the “speech act,” such as a question requesting information) early is crucial in conversation to quickly understand the intended message and plan a timely response. Fast turns between interlocutors are especially relevant for responses to questions since a long gap may be meaningful by itself. Human language is multimodal, involving speech as well as visual signals from the body, including the face. But little is known about how conversational facial signals contribute to the communication of social actions. Some of the most prominent facial signals in conversation are eyebrow movements. Previous studies found links between eyebrow movements and questions, suggesting that these facial signals could contribute to the rapid recognition of questions. Therefore, we aimed to investigate whether early eyebrow movements (eyebrow frown or raise vs. no eyebrow movement) facilitate question identification. Participants were instructed to view videos of avatars where the presence of eyebrow movements accompanying questions was manipulated. Their task was to indicate whether the utterance was a question or a statement as accurately and quickly as possible. Data were collected using the online testing platform Gorilla. Results showed higher accuracies and faster response times for questions with eyebrow frowns, suggesting a facilitative role of eyebrow frowns for question identification. This means that facial signals can critically contribute to the communication of social actions in conversation by signaling social action-specific visual information and providing visual cues to speakers’ intentions.

    Gaze Direction Signals Response Preference in Conversation

    In this article, we examine gaze direction in responses to polar questions using both quantitative and conversation analytic (CA) methods. The data come from a novel corpus of conversations in which participants wore eye-tracking glasses to obtain direct measures of their eye movements. The results show that while most preferred responses are produced with gaze toward the questioner, most dispreferred responses are produced with gaze aversion. We further demonstrate that gaze aversion by respondents can occasion self-repair by questioners in the transition space between turns, indicating that the relationship between gaze direction and preference is more than a mere statistical association. We conclude that gaze direction in responses to polar questions functions as a signal of response preference. Data are in American, British, and Canadian English.

    A third-person perspective on co-speech action gestures in Parkinson's disease

    A combination of impaired motor and cognitive function in Parkinson’s disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective, or viewpoint, that they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD affect action communication and of the cognitive underpinnings of this impairment, as well as elucidating the role of action simulation in gesture production.

    Co-speech gestures are a window into the effects of Parkinson’s disease on action representations

    Parkinson’s disease impairs motor function and cognition, which together affect language and communication. Co-speech gestures are a form of language-related action that provides imagistic depictions of the speech content they accompany. Gestures rely on visual and motor imagery, but it is unknown whether gesture representations require the involvement of intact neural sensory and motor systems. We tested this hypothesis with a fine-grained analysis of co-speech action gestures in Parkinson’s disease. Thirty-seven people with Parkinson’s disease and 33 controls described two scenes featuring actions which varied in their inherent degree of bodily motion. In addition to the perspective of action gestures (gestural viewpoint/first- vs. third-person perspective), we analysed how Parkinson’s patients represent manner (how something/someone moves) and path information (where something/someone moves to) in gesture, depending on the degree of bodily motion involved in the action depicted. We replicated an earlier finding that people with Parkinson’s disease are less likely to gesture about actions from a first-person perspective, preferring instead to depict actions gesturally from a third-person perspective, and show that this effect is modulated by the degree of bodily motion in the actions being depicted. When describing high-motion actions, the Parkinson’s group were specifically impaired in depicting manner information in gesture, and their use of third-person path-only gestures was significantly increased. Gestures about low-motion actions were relatively spared. These results inform our understanding of the neural and cognitive basis of gesture production by providing neuropsychological evidence that action gesture production relies on intact motor network function.