Interleaved lexical and audiovisual information can retune phoneme boundaries

Abstract

To adapt to situations in which speech perception is difficult, listeners can apply perceptual learning to adjust the boundaries between phoneme categories. Such adjustment can draw on contextual information in surrounding speech, including lexical information, or on visual cues obtained through speech-reading. In the present study, listeners proved able to flexibly adjust the boundary between two plosive/stop consonants, /p/-/t/, using both lexical and speech-reading information, with the same experimental design for both cue types. Videos of a speaker pronouncing pseudo-words and audio recordings of Dutch words were presented in alternating blocks of either stimulus type, and listeners were able to switch between cues to recalibrate, with effect sizes comparable to those of listeners receiving only a single source of information. Overall, audiovisual cues (i.e., the videos) produced the stronger after-effects, commensurate with their environmental applicability. Lexical cues nonetheless induced retuning effects, despite fewer exposure stimuli and a changing phoneme bias, and despite a design unlike most previous studies of lexically guided retuning and more typical of audiovisual recalibration studies. Participants who received only audiovisual exposure also showed recalibration effects comparable to previous studies, while a lexical-only group showed weaker retuning effects. The presence of the lexical retuning effects suggests, however, that lexically based retuning may be invoked more quickly than previously observed. In general, this technique further illuminates the robustness of adaptability in speech perception, and offers the potential to enable further comparisons across differing forms of perceptual learning.