
    Native-language benefit for understanding speech-in-noise: The contribution of semantics

    Bilinguals are better able to perceive speech-in-noise in their native compared to their non-native language. This benefit is thought to be due to greater use of higher-level, linguistic context in the native language. Previous studies showing this have used sentences and do not allow us to determine which level of language contributes to this context benefit. Here, we used a new paradigm that isolates the semantic level of speech, in both languages of bilinguals. Results revealed that in the native language, a semantically related target word facilitates the perception of a previously presented degraded prime word relative to when a semantically unrelated target follows the prime, suggesting a specific contribution of semantics to the native language context benefit. We also found the reverse in the non-native language, where there was a disadvantage of semantic context on word recognition, suggesting that such top-down, contextual information results in semantic interference in one's second language.

    How visual cues to speech rate influence speech perception

    Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (muted videos of a talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.

    Individual variability in the perceptual learning of L2 speech sounds and its cognitive correlates

    This study explored which cognitive processes are related to individual variability in the learning of novel phonemic contrasts in a second language. Twenty-five English participants were trained to perceive a Korean stop voicing contrast which is novel for English speakers. They were also presented with a large battery of tests which investigated different aspects of their perceptual and cognitive abilities, as well as pre- and post-training tests of their ability to discriminate this novel consonant contrast. The battery included: adaptive psychoacoustic tasks to determine frequency limens, a paired-association task looking at the ability to memorise the pairing of two items, a backward digit span task measuring working memory span, a sentence perception-in-noise task that quantifies the effect of top-down information as well as signal detection ability, and a sorting task investigating the attentional filtering of the key acoustic features. The general measures that were most often correlated with the ability to learn the novel phonetic contrast were measures of attentional switching (i.e., the ability to reallocate attention), the ability to sort stimuli according to a particular dimension (which is also somewhat linked to the allocation of attention), frequency acuity, and the ability to associate two unrelated events.
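    The abstract does not state how "signal detection ability" was quantified, but a standard measure in this literature is the sensitivity index d′, computed from hit and false-alarm rates. A minimal sketch (the function name and rates are illustrative, not taken from the study):

    ```python
    from scipy.stats import norm

    def dprime(hit_rate: float, fa_rate: float) -> float:
        """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
        where z is the inverse of the standard normal CDF."""
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # A listener who detects 80% of targets but false-alarms on 20%
    # of noise-only trials has d' of about 1.68.
    sensitivity = dprime(0.8, 0.2)
    ```

    In practice, hit and false-alarm rates of exactly 0 or 1 are usually adjusted slightly (e.g., by a half-trial correction) before the z-transform, since norm.ppf is undefined at those extremes.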

    Investigating The Lexical Support In Non-Native English Speakers Using The Phonemic Restoration Paradigm

    Samuel and Frost (2015) investigated the differences between native and non-native English speakers’ lexical influence in speech perception. Using the selective adaptation method, the study showed that lexical support was weaker in less proficient non-native speakers than in native speakers; however, lexical support became stronger in more proficient non-native speakers. The present study investigated lexical support in speech perception in native and non-native English speakers. Unlike the method used by Samuel and Frost (2015), the present study used the phonemic restoration paradigm. The benefit of using this method is that it allows us to investigate the difference between native and non-native speakers in perceptually restoring missing phonemes. It was hypothesized that native speakers would show a stronger phonemic restoration effect than non-native speakers, as well as greater sensitivity to the phoneme position in a word. In the current study, a group of native speakers and a group of non-native speakers participated in a phonemic restoration task. Both groups were presented with four-syllable stimulus words with one phoneme either replaced with white noise (replacement condition) or with white noise added on that phoneme (added condition), in either the third or the fourth syllable, followed by an intact version of the same word. Participants rated the degradation of the manipulated word compared to its intact version. Results showed that both native and non-native speakers rated the added versions of the word as more similar to the intact version than the replaced versions. In addition, both native and non-native speakers rated the manipulated (i.e., added or replaced) versions of the word as more similar to the intact version when the manipulated phoneme was in the fourth syllable than when it was in the third syllable. However, non-native speakers rated the replaced versions of manipulated words as similar to the intact versions as the native English speakers did.

    The perception of English front vowels by North Holland and Flemish listeners: acoustic similarity predicts and explains cross-linguistic and L2 perception

    We investigated whether regional differences in the native language (L1) influence the perception of second language (L2) sounds. Many cross-language and L2 perception studies have assumed that the degree of acoustic similarity between L1 and L2 sounds predicts cross-linguistic and L2 performance. The present study tests this assumption by examining the perception of the English contrast between /ɛ/ and /æ/ in native speakers of Dutch spoken in North Holland (the Netherlands) and in East- and West-Flanders (Belgium). A Linear Discriminant Analysis on acoustic data from both dialects showed that their differences in vowel production, as reported in Adank, van Hout, and Van de Velde (2007), should influence the perception of the L2 vowels if listeners focus on the vowels' acoustic/auditory properties. Indeed, the results of categorization tasks with Dutch or English vowels as response options showed that the two listener groups differed as predicted by the discriminant analysis. Moreover, the results of the English categorization task revealed that both groups of Dutch listeners displayed the asymmetric pattern found in previous word recognition studies, i.e., English /æ/ was more frequently confused with English /ɛ/ than the reverse. This suggests a strong link between previous L2 word learning results and the present L2 perceptual assimilation patterns.
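    Linear Discriminant Analysis of the kind described above finds the linear combination of acoustic dimensions (e.g., formant frequencies) that best separates vowel categories. A minimal sketch on synthetic data; the formant means and standard deviations below are illustrative placeholders, not the actual Dutch or English measurements from the study:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    # Hypothetical F1/F2 formant values (Hz) for two vowel categories.
    n = 100
    ae_tokens = rng.normal([700, 1700], [60, 120], size=(n, 2))  # /æ/-like
    eh_tokens = rng.normal([580, 1900], [60, 120], size=(n, 2))  # /ɛ/-like

    X = np.vstack([ae_tokens, eh_tokens])
    y = np.array([0] * n + [1] * n)

    # Fit the discriminant and score how separable the categories are
    # in this acoustic space; higher accuracy = more distinct production.
    lda = LinearDiscriminantAnalysis().fit(X, y)
    accuracy = lda.score(X, y)
    ```

    Comparing such classification accuracies across dialect-specific production data is one way an LDA can generate predictions about which listener group should find the L2 contrast harder to perceive.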

    Word contexts enhance the neural representation of individual letters in early visual cortex

    Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing already in the earliest visual regions.

    Variation in the perception of an L2 contrast : a combined phonetic and phonological account

    The present study argues that variation across listeners in the perception of a non-native contrast is due to two factors: the listener-specific weighting of auditory dimensions and the listener-specific construction of new segmental representations. The interaction of both factors is shown to take place in the perception grammar, which can be modelled within an OT framework. These points are illustrated with the acquisition of the Dutch three-member labiodental contrast [ʋ v f] by German learners of Dutch, focussing on four types of learners from the perception study by Hamann and Sennema (2005a).