316 research outputs found
Synchronization of Speech and Gesture: Evidence for Interaction in Action
Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech
Acknowledgements: We thank Albert Russel for assistance in setting up the experiments, and Charlotte Paulisse for help in data collection.
Beat that Word: How Listeners Integrate Beat Gesture and Focus in Multimodal Speech Discourse
Body-Specific Motor Imagery of Hand Actions: Neural Evidence from Right- and Left-Handers
If motor imagery uses neural structures involved in action execution, then the neural correlates of imagining an action should differ between individuals who tend to execute the action differently. Here we report fMRI data showing that motor imagery is influenced by the way people habitually perform motor actions with their particular bodies; that is, motor imagery is ‘body-specific’ (Casasanto, 2009). During mental imagery for complex hand actions, activation of cortical areas involved in motor planning and execution was left-lateralized in right-handers but right-lateralized in left-handers. We conclude that motor imagery involves the generation of an action plan that is grounded in the participant's motor habits, not just an abstract representation at the level of the action's goal. People with different patterns of motor experience form correspondingly different neurocognitive representations of imagined actions.
In dialogue with an avatar, language behaviour is identical compared to dialogue with a human partner
The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
Opposing and following responses in sensorimotor speech control: why responses go both ways
When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
Synthesized size-sound sound symbolism
Studies of sound symbolism have shown that people can associate sound and meaning in consistent ways when presented with maximally contrastive stimulus pairs of nonwords such as bouba/kiki (rounded/sharp) or mil/mal (small/big). Recent work has shown the effect extends to antonymic words from natural languages and has proposed a role for shared cross-modal correspondences in biasing form-to-meaning associations. An important open question is how the associations work, and particularly what the role is of sound-symbolic matches versus mismatches. We report on a learning task designed to distinguish between three existing theories by using a spectrum of sound-symbolically matching, mismatching, and neutral (neither matching nor mismatching) stimuli. Synthesized stimuli allow us to control for prosody, and the inclusion of a neutral condition allows a direct test of competing accounts. We find evidence for a sound-symbolic match boost, but not for a mismatch difficulty compared to the neutral condition.
- …