32 research outputs found

    Juncture prosody across languages : similar production but dissimilar perception

    How do speakers of languages with different intonation systems produce and perceive prosodic junctures in sentences with identical structural ambiguity? Native speakers of English and of Mandarin produced potentially ambiguous sentences with a prosodic juncture either earlier in the utterance (e.g., “He gave her # dog biscuits,” “他给她 # 狗饼干”) or later (e.g., “He gave her dog # biscuits,” “他给她狗 # 饼干”). These production data showed that prosodic disambiguation is realized very similarly in the two languages, despite some differences in the degree to which individual juncture cues (e.g., pausing) were favoured. In perception experiments with a new disambiguation task, requiring speeded responses to select the correct meaning of structurally ambiguous sentences, language differences in disambiguation response time appeared: Mandarin speakers correctly disambiguated sentences with earlier juncture faster than those with later juncture, while English speakers showed the reverse. Mandarin speakers also disambiguated more accurately than English speakers, indicating language-specific differences in the extent to which prosodic cues are used. However, Mandarin speakers, but not English speakers, showed a decrease in accuracy when pausing cues were removed. Thus, even with high similarity in both structural ambiguity and production cues, prosodic juncture perception can differ across languages.

    Asymmetric memory for birth language perception versus production in young international adoptees

    Adults who as children were adopted into a different linguistic community retain knowledge of their birth language. The possession (without awareness) of such knowledge is known to facilitate the (re)learning of birth-language speech patterns; this perceptual learning predicts such adults' production success as well, indicating that the retained linguistic knowledge is abstract in nature. Adoptees' acquisition of their adopted language is fast and complete, while birth-language mastery disappears rapidly, although this latter process has been little studied. Here, 46 international adoptees from China aged four to ten years, with Dutch as their new language, plus 47 matched non-adopted Dutch-native controls and 40 matched non-adopted Chinese controls, completed 10 blocks of training, spread across a two-week period, in perceptually identifying Chinese speech contrasts (one segmental, one tonal) unlike any Dutch contrasts. Chinese controls easily accomplished all these tasks. The same participants also provided speech production data in an imitation task. In perception, adoptees and Dutch controls scored equivalently poorly at the outset of training; with training, the adoptees significantly improved while the Dutch controls did not. In production, adoptees' imitations both before and after training could be better identified, and received higher goodness ratings, than those of Dutch controls. The perception results confirm that birth-language knowledge is stored and can facilitate re-learning in post-adoption childhood; the production results suggest that although processing of phonological category detail appears to depend on access to the stored knowledge, general articulatory dimensions can at this age also still be remembered, and may facilitate spoken imitation.

    Keeping Off the Weight with DCs

    Long studied as modulators of insulin sensitivity, adipose tissue immune cells have recently been implicated in regulating fat mass and weight gain. In this issue of Immunity, Reisner and colleagues (2015) report that ablation of perforin-expressing dendritic cells induces T cell expansion, worsening autoimmunity and, surprisingly, increasing adiposity.

    Learning to perceive non-native tones via distributional training : effects of task and acoustic cue weighting

    As many distributional learning (DL) studies have shown, adult listeners can achieve discrimination of a difficult non-native contrast after brief, repetitive exposure to tokens falling at the extremes of that contrast. Such studies have shown, using behavioural methods, that short distributional training can induce perceptual learning of vowel and consonant contrasts. However, much less is known about the neurological correlates of DL, and few studies have examined non-native lexical tone contrasts. Here, Australian-English speakers underwent DL training on a Mandarin tone contrast using behavioural (discrimination, identification) and neural (oddball-EEG) tasks, with listeners hearing either a bimodal or a unimodal distribution. Behavioural results show that listeners learned to discriminate tones after both unimodal and bimodal training, while EEG responses revealed more learning for listeners exposed to the bimodal distribution. Thus, perceptual learning through exposure to brief sound distributions (a) extends to non-native tonal contrasts, and (b) is sensitive to task, phonetic distance, and acoustic cue weighting. Our findings have implications for models of how auditory and phonetic constraints influence speech learning.
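    The exposure regime at the heart of such training is easy to make concrete. The sketch below is a minimal illustration only: the eight-step continuum, token count, and sampling weights are invented for demonstration and are not the stimulus parameters of this study. It shows how bimodal exposure concentrates tokens near two points on an acoustic continuum (implying two categories), while unimodal exposure concentrates them at the centre (implying one).

```python
import random

# Illustrative sketch of distributional-learning exposure sets.
# An 8-step acoustic continuum between two tone variants; the weights
# below are invented, not the study's actual stimulus statistics.
CONTINUUM_STEPS = list(range(1, 9))

BIMODAL_WEIGHTS = [1, 4, 6, 2, 2, 6, 4, 1]   # peaks at steps 2 and 7
UNIMODAL_WEIGHTS = [1, 2, 4, 6, 6, 4, 2, 1]  # single central peak

def exposure_sequence(weights, n_tokens=256, seed=0):
    """Sample a training sequence of continuum steps with the given shape."""
    rng = random.Random(seed)
    return rng.choices(CONTINUUM_STEPS, weights=weights, k=n_tokens)

bimodal_training = exposure_sequence(BIMODAL_WEIGHTS)    # implies a contrast
unimodal_training = exposure_sequence(UNIMODAL_WEIGHTS)  # implies no contrast
```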

    Lexical stress in English pronunciation

    Not all languages have stress, and not all languages that do have stress are alike. English is a lexical stress language, which means that in any English word with more than one syllable, the syllables will differ in their relative salience. Some syllables may serve as the locus for prominence-lending accents; others can never be accented. The stress pattern of an English polysyllabic word is as intrinsic to its phonological identity as the string of segments that make it up. This type of asymmetry across syllables distinguishes stress languages from languages that have no stress in their word phonology. Within stress languages, being a lexical stress language means that stress can vary across syllable positions within words and, in principle, can vary contrastively; this distinguishes lexical stress languages from fixed-stress languages, where stress is assigned to the same syllable position in every word.

    In thrall to the vocabulary

    Vocabularies contain hundreds of thousands of words built from only a handful of phonemes; longer words inevitably tend to contain shorter ones. Recognising speech thus requires distinguishing intended words from accidentally present ones. Acoustic information in speech is used wherever it contributes significantly to this process, but as this review shows, its contribution differs across languages. Among the consequences: identical and equally present information distinguishing the same phonemes is used in Polish but not in German, and in English but not in Italian; identical stress cues are used in Dutch but not in English; and expectations about likely embedding patterns differ across English, French, and Japanese.
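    The embedding problem itself is easy to demonstrate. The sketch below uses a tiny lexicon invented for illustration (not materials from the review) to list the shorter words accidentally present inside a longer carrier word.

```python
# Toy illustration of lexical embedding: shorter words accidentally
# contained in longer ones. The mini-lexicon is invented for illustration.
LEXICON = {"catalog", "cat", "a", "at", "log"}

def embedded_words(carrier, lexicon):
    """Return every lexicon word occurring inside carrier, excluding itself."""
    return sorted({
        carrier[i:j]
        for i in range(len(carrier))
        for j in range(i + 1, len(carrier) + 1)
        if carrier[i:j] in lexicon and carrier[i:j] != carrier
    })

print(embedded_words("catalog", LEXICON))  # ['a', 'at', 'cat', 'log']
```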

    Native Listening: Language Experience and the Recognition of Spoken Words

    Readers who reviewed my manuscript remarked that in some ways it is "personal." This is fair: the book recounts the development of psycholinguistic knowledge about how spoken words are recognized over the nearly four decades that this topic has been researched, and that makes it a personal story, in that those decades are the ones I have spent as a psycholinguist. Inevitably (it seems to me) the book has turned out to center on my own work and that of the many colleagues and graduate students with whom I have been lucky enough to work, because whenever I wanted an example to illustrate a particular line of research, the rich archive of this long list of collaborations usually turned one up.

    Eentaalpsychologie is geen taalpsychologie (Why Psycholinguistics Must be Comparative)

    Strangely enough, you see before you a title that is partly in Dutch, while the lecture itself is delivered entirely in English (except for this sentence, that is). The first part of this title was also the title of my inaugural lecture, delivered in this auditorium 16 years ago this month. At the time I had put a lot of effort into composing and delivering this inaugural address in Dutch, my new second language, and I was particularly proud of the title, simply because it could not be translated into English without the loss of a good deal of its meaning (and especially the loss of any associations with eggs which some of you may involuntarily call up). Translating it word by word (‘One-language psychology is no language psychology’) produces a superb example of steenkolenengels (‘coal English’: the stilted word-for-word English of Dutch speakers) which would be quite opaque to English speakers with no knowledge of Germanic languages. All English titles that I could think of seemed pale in comparison to the original; the best was ‘Why psycholinguistics must be comparative’. As a translation this is highly impoverished (it even fails to stress that what we must compare across is languages!). But at least it captures the central argument of the inaugural lecture: that basing psycholinguistic research and theorizing on evidence from only a single language, as has so frequently been done, will often simply lead to a wrong conclusion or to only a partial truth. You can find a version of that argument serving as the introductory chapter in my book Native Listening, which was published just this month. Today I won’t repeat that argument, but I will give you some new examples of the importance of cross-language comparison. These examples are taken from the past 19 years, the years during which I have been fortunate enough to hold the position of director at the MPI and a chair at this university. In fact, it’s easy to make this case even by looking at the simplest examples, i.e., the building blocks of spoken language. My research focuses on listening to spoken language, as the title of the book makes clear. I regard listening to speech as an operation that is continuously influenced by one’s native language. More than that, listening is exquisitely tailored to the native language, which is the main reason why it is so extraordinarily efficient and so wonderfully flexible and adaptable.

    Word stress in speech perception

    In languages with word-level stress, segmentally matched stressed and unstressed syllables (such as the first syllables of English camper and campaign) differ acoustically, but these suprasegmental acoustic differences are not necessarily exploited in speech perception. If word stress position is fixed, stress can help locate boundaries between words, but can play no role in identifying words (for instance, no minimal pairs such as trusty/trustee could then exist). If word stress placement is variable, it can in principle help in word recognition; but whether it is actually used depends on vocabulary structure, in particular on the amount of short-term competition between similar-sounding words. This can vary even between closely related languages in which suprasegmental distinctions have been shown to be equally perceptible. Suprasegmental information is exploited for word recognition only in those languages where using it significantly reduces the amount of lexical competition, and hence noticeably speeds lexical processing.
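    The competition argument can be made concrete with a toy computation, sketched below using the abstract's camper/campaign example. The rough first-syllable transcriptions and the mini-lexicon are invented for illustration; this is not a model from the paper.

```python
# Toy sketch: suprasegmental (stress) information reducing lexical
# competition. Entries pair a word's first syllable with its stress;
# the transcriptions and mini-lexicon are invented for illustration.
LEXICON = [
    ("camper",   "kam", "stressed"),
    ("campaign", "kam", "unstressed"),
    ("candle",   "kan", "stressed"),
]

def competitors(first_syllable, stress=None):
    """Words matching the first syllable, optionally filtered by stress."""
    return [word for word, syllable, s in LEXICON
            if syllable == first_syllable and (stress is None or s == stress)]

# Segments alone leave two competitors; adding stress leaves just one.
print(competitors("kam"))               # ['camper', 'campaign']
print(competitors("kam", "stressed"))   # ['camper']
```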

    Bottoms up! How top-down pitfalls ensnare speech perception researchers, too

    Not only can the pitfalls that Firestone & Scholl (F&S) identify be generalised across multiple studies within the field of visual perception; they also apply outside that field wherever perceptual and cognitive processing are compared. We call attention to the widespread susceptibility of research on the perception of speech to versions of the same pitfalls. F&S review an extensive body of research on visual perception. Claims of higher-level effects on lower-level processes, they show, have swept over this research field like a "tidal wave." Unsurprisingly, other areas of cognitive psychology have been similarly inundated.