Multimodal communication in aphasia: perception and production of co-speech gestures during face-to-face conversation

Abstract

The role of nonverbal communication in patients with post-stroke language impairment (aphasia) is not yet fully understood. This study investigated how patients with aphasia perceive and produce co-speech gestures during face-to-face interaction, and whether distinct brain lesions predict the frequency of spontaneous co-speech gesturing. For this purpose, we recorded conversation samples from patients with aphasia and healthy participants. Gesture perception was assessed by means of a head-mounted eye-tracking system, and the produced co-speech gestures were coded according to a linguistic classification system. The main results are that meaning-laden gestures (e.g., iconic gestures representing object shapes) are more likely to attract visual attention than meaningless hand movements, and that patients with aphasia fixate co-speech gestures overall more often than healthy participants. This suggests that patients with aphasia may benefit from the multimodal information provided by co-speech gestures. On the level of co-speech gesture production, we found that patients with damage to the anterior part of the arcuate fasciculus produced meaning-laden gestures at a higher frequency. This area lies close to the premotor cortex and is considered important for speech production. This may suggest that the use of meaning-laden gestures depends on the integrity of patients' speech production abilities.