
    Coping with speaker-related variation via abstract phonemic categories

    Listeners can cope with considerable variation in the way that different speakers talk. We argue here that they can do so because of a process of phonological abstraction in the speech-recognition system. We review evidence that listeners adjust the bounds of phonemic categories after only very limited exposure to a deviant realisation of a given phoneme. This learning can be talker-specific and is stable over time; further, the learning generalises to previously unheard words containing the deviant phoneme. Together these results suggest that the learning involves adjustment of prelexical phonemic representations which mediate between the speech signal and the mental lexicon during word recognition. We argue that such an abstraction process is inconsistent with claims made by some recent models of language processing that the mental lexicon consists solely of multiple detailed traces of acoustic episodes. Simulations with a purely episodic model without functional prelexical abstraction confirm that such a model cannot account for the evidence on lexical generalisation of perceptual learning. We conclude that abstract phonemic categories form a necessary part of lexical access, and that the ability to store talker-specific knowledge about those categories provides listeners with the means to deal with cross-talker variation.

    Language-universal constraints on the segmentation of English

    Two word-spotting experiments are reported that examine whether the Possible-Word Constraint (PWC) [1] is a language-specific or language-universal strategy for the segmentation of continuous speech. The PWC disfavours parses which leave an impossible residue between the end of a candidate word and a known boundary. The experiments examined cases where the residue was either a CV syllable with a lax vowel, or a CVC syllable with a schwa. Although neither syllable context is a possible word in English, word-spotting in both contexts was easier than with a context consisting of a single consonant. The PWC therefore appears to be language-universal rather than language-specific.

    Positive and negative influences of the lexicon on phonemic decision-making

    Lexical knowledge influences how human listeners make decisions about speech sounds. Positive lexical effects (faster responses to target sounds in words than in nonwords) are robust across several laboratory tasks, while negative effects (slower responses to targets in more word-like nonwords than in less word-like nonwords) have been found in phonetic decision tasks but not in phoneme monitoring tasks. The present experiments tested whether negative lexical effects are therefore a task-specific consequence of the forced choice required in phonetic decision. We compared phoneme monitoring and phonetic decision performance using the same Dutch materials in each task. In both experiments there were positive lexical effects, but no negative lexical effects. We observe that in all studies showing negative lexical effects, the materials were made by cross-splicing, which meant that they contained perceptual evidence supporting the lexically consistent phonemes. Lexical knowledge thus seems to influence phonemic decision-making only when there is evidence for the lexically consistent phoneme in the speech signal.

    When brain regions talk to each other during speech processing, what are they talking about? Commentary on Gow and Olson (2015).

    This commentary on Gow and Olson [2015. Sentential influences on acoustic-phonetic processing: A Granger causality analysis of multimodal imaging data. Language, Cognition and Neuroscience. Advance online publication. doi:10.1080/23273798.2015.1029498] questions in three ways their conclusion that speech perception is based on interactive processing. First, it is not clear that the data presented by Gow and Olson reflect normal speech recognition. Second, Gow and Olson's conclusion depends on still-debated assumptions about the functions performed by specific brain regions. Third, the results are compatible with feedforward models of speech perception and appear inconsistent with models in which there are online interactions about phonological content. We suggest that progress in the neuroscience of speech perception requires the generation of testable hypotheses about the function(s) performed by inter-regional connections.

    Iron Displacements and Magnetoelastic Coupling in the Spin-Ladder Compound BaFe2Se3

    We report long-range ordered antiferromagnetism concomitant with local iron displacements in the spin-ladder compound BaFe2Se3. Short-range magnetic correlations, present at room temperature, develop into long-range antiferromagnetic order below TN = 256 K, with no superconductivity down to 1.8 K. Built of ferromagnetic Fe4 plaquettes, the magnetic ground state correlates with local displacements of the Fe atoms. These iron displacements imply significant magnetoelastic coupling in FeX4-based materials, an ingredient hypothesized to be important in the emergence of superconductivity. This result also suggests that knowledge of these local displacements is essential for properly understanding the electronic structure of these systems. As with the copper oxide superconductors two decades ago, our results highlight the importance of reduced-dimensionality spin-ladder compounds in the study of the coupling of spin, charge, and atom positions in superconducting materials.

    Effects of auditory feedback consistency on vowel production

    In investigations of feedback control during speech production, researchers have focused on two different kinds of responses to erroneous or unexpected auditory feedback. Compensation refers to online, feedback-based corrections of articulations. In contrast, adaptation refers to long-term changes in the speech production system after exposure to erroneous or unexpected feedback, which may persist even after feedback is normal again. In the current study, we aimed to compare both types of feedback responses by investigating the conditions under which the system starts adapting in addition to merely compensating. Participants vocalized long vowels while they were exposed either to consistently altered auditory feedback or to feedback that was unpredictably either altered or normal. Participants were not aware of the manipulation of auditory feedback. We predicted that both conditions would elicit compensation, whereas adaptation would be stronger when the altered feedback was consistent across trials. The results show that although there seems to be somewhat more adaptation in the consistently altered feedback condition, substantial individual variability led to statistically unreliable effects at the group level. The results stress the importance of taking individual differences into account and show that people vary widely in how they respond to altered auditory feedback.