3,487 research outputs found
Articulatory consequences of prediction during comprehension
It has been proposed that speech-motor activation observed during comprehension may, in part, reflect involvement of the speech-motor system in the top-down simulation of upcoming material [14]. In the current study we employed an automated approach to the analysis of ultrasound tongue imaging in order to investigate whether comprehension-elicited effects are observable at an articulatory-output level.
We investigated whether and how lexical predictions affect speech-motor output. Effects were found at a relatively early point during the pre-acoustic phase of articulation, and did not appear to be predicated upon the nature of the phonological overlap between predicted and named items. In these respects, effects related to comprehension-elicited predictions appear to differ in nature from those observed in production and perception experiments.
Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking
The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye-movements to phonologically related printed words with a similar time-course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants is lacking so far. The current printed word eye-tracking experiment therefore used a within-subjects design, combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors with a similar time-course in both production and perception. Phonological effects in perception, however, lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
An integrated theory of language production and comprehension
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.
How visual cues to speech rate influence speech perception
Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two "Go Fish"-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants' target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.
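The rate effect described above can be sketched as a simple relative-duration rule: the same ambiguous vowel is categorized as long after a fast context and as short after a slow one, because its duration is judged relative to the context rate. The function below is a minimal illustration with hypothetical numbers and a hypothetical decision threshold, not the authors' model or analysis.

```python
# Illustrative sketch of rate-dependent vowel categorization.
# All rates, durations, and the 0.5 threshold are hypothetical.

def categorize_vowel(vowel_ms: float, context_syllables_per_s: float) -> str:
    """Label an ambiguous vowel 'long' (/a:/) or 'short' (/A/),
    relative to the speech rate of the preceding context."""
    # Average syllable duration implied by the context rate (ms).
    expected_ms = 1000.0 / context_syllables_per_s
    # A vowel that is long relative to the context sounds long.
    return "long" if vowel_ms / expected_ms > 0.5 else "short"

# The same 120 ms token flips category with context rate:
print(categorize_vowel(120, context_syllables_per_s=6.0))  # fast context -> long
print(categorize_vowel(120, context_syllables_per_s=3.0))  # slow context -> short
```

The point of the sketch is only that categorization depends on the ratio of vowel duration to context-implied duration, which is why a fast context yields more long /a:/ responses.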
Analysis of the visual spatiotemporal properties of American Sign Language.
Careful measurements of the temporal dynamics of speech have provided important insights into phonetic properties of spoken languages, which are important for understanding auditory perception. By contrast, analytic quantification of the visual properties of signed languages is still largely uncharted. Exposure to sign language is a unique experience that could shape and modify low-level visual processing for those who use it regularly (i.e., what we refer to as the Enhanced Exposure Hypothesis). The purpose of the current study was to characterize the visual spatiotemporal properties of American Sign Language (ASL) so that future studies can test the Enhanced Exposure Hypothesis in signers, with the prediction that altered vision should be observed within, more so than outside, the range of properties found in ASL. Using an ultrasonic motion tracking system, we recorded hand position in 3-dimensional space over time during sign language production of signs, sentences, and narratives. From these data, we calculated several metrics: hand position and eccentricity in space and hand motion speed. For individual signs, we also measured total distance travelled by the dominant hand and total duration of each sign. These metrics were found to fall within a selective range, suggesting that exposure to signs is a specific and unique visual experience, which might alter visual perceptual abilities in signers for visual information within the experienced range, even for non-language stimuli.
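The kinematic metrics mentioned above (total distance travelled, duration, speed) follow directly from a sequence of 3-D position samples taken at a fixed rate. The sketch below shows that computation on hypothetical data; it is not the study's actual pipeline, and the sampling rate and coordinates are invented for illustration.

```python
# Hedged sketch: path metrics from hypothetical 3-D hand-position
# samples (x, y, z in cm) recorded at a fixed sampling rate.
import math

def path_metrics(samples, rate_hz):
    """Return (total distance in cm, duration in s, mean speed in cm/s)."""
    # Total path length: sum of Euclidean distances between successive samples.
    dist = sum(math.dist(a, b) for a, b in zip(samples, samples[1:]))
    # Duration spans the intervals between samples, not the sample count.
    duration = (len(samples) - 1) / rate_hz
    return dist, duration, dist / duration

# Hypothetical sign: the hand moves 1 cm per sample along x, at 100 Hz.
samples = [(float(i), 0.0, 0.0) for i in range(11)]
dist, duration, speed = path_metrics(samples, rate_hz=100)
print(dist, duration, speed)  # 10.0 0.1 100.0
```

Position and eccentricity metrics would similarly reduce to summary statistics over the same sample stream.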
Are simultaneous interpreters expert bilinguals, unique bilinguals, or both?
Simultaneous interpretation is a cognitively demanding process that requires a high level of language management. Previous studies on bilinguals have suggested that extensive practice managing two languages leads to enhancements in cognitive control. Thus, interpreters may be expected to show benefits beyond those seen in bilinguals, either as an extension of previously-seen benefits or in areas specific to interpretation. The present study examined professional interpreters (N = 23) and matched multilinguals (N = 21) on memory tests, the color-word Stroop task, the Attention Network Test, and a non-linguistic task-switching paradigm. The interpreters did not show advantages in conflict resolution or switching cost, where bilingual benefits have been noted. However, an interpretation-specific advantage emerged on the mixing cost in the task-switching paradigm. Additionally, the interpreters had larger verbal and spatial memory spans. Interpreters do not continue to garner benefits from bilingualism, but they do appear to possess benefits specific to their experience with simultaneous interpretation.
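The distinction between the two task-switching costs discussed above is purely arithmetic: the mixing cost compares repeat trials in mixed blocks against pure blocks, while the switch cost compares switch against repeat trials within mixed blocks. A minimal sketch, using hypothetical reaction times rather than the study's data:

```python
# Hedged sketch of the two task-switching costs; all RTs (ms) are hypothetical.

def switching_costs(pure_rt, mixed_repeat_rt, mixed_switch_rt):
    """Return (mixing_cost, switch_cost) in ms."""
    # Mixing cost: slowdown on repeat trials in mixed blocks vs. pure blocks,
    # reflecting the overhead of keeping two task sets active.
    mixing_cost = mixed_repeat_rt - pure_rt
    # Switch cost: extra slowdown on switch vs. repeat trials within
    # mixed blocks, reflecting the act of switching itself.
    switch_cost = mixed_switch_rt - mixed_repeat_rt
    return mixing_cost, switch_cost

print(switching_costs(550, 620, 700))  # (70, 80)
```

On this decomposition, the interpreters' advantage on the mixing cost but not the switch cost suggests a benefit in sustaining multiple task sets rather than in executing individual switches.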
Towards a complete multiple-mechanism account of predictive language processing [Commentary on Pickering & Garrod]
Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they: (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing.
Word predictability blurs the lines between production and comprehension : Evidence from the production effect in memory
Acknowledgements We are grateful to Faith Tan for data collection and speech onset measurements, and to Opal Harshe for data collection. JR was supported by the Dutch Science Foundation (NWO) grant 275-89-032. NWO played no role in the study design, the collection, analysis and interpretation of data, the writing of the report, or in the decision to submit the article for publication. Peer reviewed. Postprint.