2 research outputs found

    Auditory-visual discrimination and identification of lexical tone within and across tone languages

    The aim of this research is to investigate the general features of lexical tones that might contribute to their categorisation. Thai tones were presented for (a) discrimination and (b) identification by native Thai and non-native Mandarin tone language participants in auditory-only (AO), visual-only (VO) and auditory-visual (AV) conditions. Discrimination tests revealed: (i) good auditory and auditory-visual discrimination of tone pairs by both Thai and Mandarin perceivers; (ii) a significant contribution of visual information to tone discrimination in both groups; (iii) greater AV>AO augmentation at a 1500 ms than at a 500 ms interstimulus interval (ISI), indicating greater use of visual information for tone at the phonemic (tonemic) than the phonetic (tonetic) level; and (iv) better overall discrimination, and an especially large AV>AO augmentation, for contour-contour than for contour-level or level-level tone pairs. Identification tests showed, as expected, that Thai participants were accurate in identifying Thai tones, using both auditory and visual information. Mandarin participants were generally able to categorize the non-native Thai tones into their native tone categories, and also used visual information, especially for contour tones. The relationship between the discrimination and identification data is discussed, as are implications for further studies.
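    As a concrete illustration of the augmentation measure mentioned above, one common way to quantify an AV>AO benefit is as the difference between auditory-visual and auditory-only discrimination accuracy, computed per participant and ISI. The sketch below is a minimal, hypothetical example: the data values, labels, and the simple difference score are illustrative assumptions, not the analysis actually reported in the abstract.

        # Hypothetical sketch: quantifying AV>AO augmentation as a simple
        # accuracy difference per participant and interstimulus interval (ISI).
        # Data and the difference-score definition are illustrative assumptions,
        # not the analysis reported in the abstract above.

        # proportion-correct discrimination: (participant, ISI ms) -> {condition: accuracy}
        scores = {
            ("P01", 500):  {"AO": 0.78, "AV": 0.82},
            ("P01", 1500): {"AO": 0.74, "AV": 0.86},
            ("P02", 500):  {"AO": 0.81, "AV": 0.84},
            ("P02", 1500): {"AO": 0.77, "AV": 0.90},
        }

        def av_augmentation(conds):
            """AV>AO augmentation: accuracy gain from adding visual information."""
            return conds["AV"] - conds["AO"]

        # Mean augmentation at each ISI; the abstract reports a larger
        # AV>AO gain at 1500 ms than at 500 ms.
        for isi in (500, 1500):
            gains = [av_augmentation(c) for (p, i), c in scores.items() if i == isi]
            print(f"ISI {isi} ms: mean AV>AO augmentation = {sum(gains)/len(gains):.3f}")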

    Audiovisual perception of Mandarin lexical tones.

    It has been widely acknowledged that visual information from a talker’s face, mouth and lip movements plays an important role in the perception of spoken language. Visual information facilitates speech perception in audiovisual congruent conditions and can even alter speech perception in audiovisual incongruent conditions. Audiovisual speech perception has been studied extensively for consonants and vowels, and it has been thought that visual information from articulatory movements conveys phonetic information (e.g. place of articulation) that facilitates or changes speech perception. However, some research points to another type of visual information, which conveys non-phonetic information (e.g. timing cues) that affects speech perception. The existence of these two types of visual information in the audiovisual integration process suggests that there are two levels of audiovisual speech integration at different stages of processing. The studies in this dissertation focused on audiovisual perception of Mandarin lexical tones. The results of experiments employing behavioural and event-related potential measures provided evidence that visual information affects auditory lexical tone perception. First, lexical tone perception benefits from adding visual information from the corresponding articulatory movements. Second, the perceived duration of lexical tones is changed by incongruent visual information. Moreover, the studies revealed that two types of visual information, a timing (non-phonetic) cue and a tone duration (phonetic/tonetic) cue, are involved in the audiovisual integration of Mandarin lexical tones. This finding further supports the view that audiovisual speech perception comprises non-phonetic and phonetic-specific levels of processing: non-phonetic audiovisual integration could start at an early stage, while phonetic-specific audiovisual integration could occur at a later stage of processing. Lexical tones have received little attention in research on audiovisual speech perception. The current studies fill this gap for Mandarin lexical tone perception, and the findings from these experiments have important theoretical implications for audiovisual speech processing.