5 research outputs found

    Temporal distribution of information for human consonant recognition in VCV utterances

    The temporal distribution of perceptually relevant information for consonant recognition in British English VCVs is investigated. The information distribution in the vicinity of consonantal closure and release was measured by presenting initial and final portions, respectively, of naturally produced VCV utterances to listeners for categorization. A multidimensional scaling analysis of the results provided highly interpretable, four-dimensional geometrical representations of the confusion patterns in the categorization data. In addition, transmitted information as a function of truncation point was calculated for the features manner, place, and voicing. The effects of speaker, vowel context, stress, and distinctive feature on the resulting information distributions were tested statistically. It was found that, although all factors are significant, the location and spread of the distributions depend principally on the distinctive feature, i.e., the temporal distribution of perceptually relevant information is very different for the features manner, place, and voicing.
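    The transmitted-information measure referred to in this abstract is conventionally estimated as the mutual information between stimulus and response categories in a confusion matrix, in the tradition of Miller and Nicely's consonant-confusion analyses. The sketch below only illustrates that computation; the function name and the small voicing confusion matrix are invented for demonstration and are not the authors' analysis code or data.

        import numpy as np

        def transmitted_information(confusions):
            """Mutual information (bits) between stimulus and response
            categories, estimated from a raw confusion-count matrix."""
            counts = np.asarray(confusions, dtype=float)
            joint = counts / counts.sum()                # p(stimulus, response)
            p_stim = joint.sum(axis=1, keepdims=True)    # row marginals
            p_resp = joint.sum(axis=0, keepdims=True)    # column marginals
            with np.errstate(divide="ignore", invalid="ignore"):
                terms = joint * np.log2(joint / (p_stim * p_resp))
            return np.nansum(terms)                      # 0 * log 0 treated as 0

        # Hypothetical 2x2 voicing confusion matrix at one truncation point:
        # rows = presented category (voiced, voiceless), columns = responses.
        voicing_confusions = [[40, 10],
                              [12, 38]]
        print(f"T(voicing) = {transmitted_information(voicing_confusions):.3f} bits")

    Repeating such a computation on the confusion matrix obtained at each truncation point would yield a transmitted-information curve per feature, which is the kind of distribution the abstract compares across speaker, vowel context, and stress.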

    The contribution of visual information from oro-facial gestures to the phonological processing of native and non-native phonemes: behavioural and neurophysiological approaches

    During audiovisual speech perception, as in face-to-face conversation, we can take advantage of the visual information conveyed by the speaker's oro-facial gestures. This enhances the intelligibility of the utterance. The aim of this work was to determine whether this “audiovisual benefit” can improve the identification of phonemes that do not exist in our mother tongue. Our results revealed that visual information contributes to overcoming the phonological deafness phenomenon we experience in an audio-only situation (Study 1). An ERP study indicates that this benefit could be due to the modulation of early processing in the primary auditory cortex (Study 2): the audiovisual presentation of non-native phonemes generates a P50 that is not observed for native phonemes. Linguistic background affects the way we use visual information; early bilinguals take less advantage of the visual cues during the processing of unfamiliar phonemes (Study 3). Finally, we examined the identification of native plosive consonants with a gating paradigm to evaluate the differential contribution of auditory and visual cues across time (Study 4). We observed that the audiovisual benefit is not systematic: phoneme predictability depends on the visual saliency of the speaker's articulatory movements.
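    The gating paradigm used in Study 4 truncates each stimulus at successively later time points ("gates") and tracks identification accuracy per modality at each gate; the audiovisual benefit is then the gain of audiovisual over audio-only presentation. The sketch below only illustrates how such trial data might be summarised; the column names and example values are assumed for illustration and do not reproduce the study's materials.

        import pandas as pd

        # Hypothetical gating responses: one row per trial, with the gate
        # duration (ms), presentation modality (A = audio-only, AV =
        # audiovisual), and whether the consonant was identified correctly.
        trials = pd.DataFrame({
            "gate_ms":  [40, 40, 80, 80, 120, 120, 40, 80, 120],
            "modality": ["A", "AV", "A", "AV", "A", "AV", "A", "AV", "A"],
            "correct":  [0, 1, 0, 1, 1, 1, 0, 1, 1],
        })

        # Identification accuracy per gate and modality; the audiovisual
        # "benefit" is the AV minus audio-only difference at each gate.
        accuracy = trials.groupby(["gate_ms", "modality"])["correct"].mean().unstack()
        accuracy["av_benefit"] = accuracy["AV"] - accuracy["A"]
        print(accuracy)

    A benefit curve that is large for some consonants and near zero for others would correspond to the abstract's observation that the audiovisual benefit is not systematic but depends on the visual saliency of the articulatory movements.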

    Effects of forensically-relevant facial concealment on acoustic and perceptual properties of consonants

    This thesis offers a thorough investigation into the effects of forensically-relevant facial concealment on speech acoustics and perception. Specifically, it explores the extent to which selected acoustic-phonetic and auditory-perceptual properties of consonants are affected when the talker is wearing ‘facewear’ while speaking. In this context, the term ‘facewear’ refers to the various types of face-concealing garments and headgear that are worn by people in common daily communication situations: for work and leisure, or as an expression of religious, social and cultural affiliation (e.g. surgical masks, motorcycle helmets, ski and cycling masks, or full-face veils such as the niqāb). It also denotes the face or head coverings that are typically used as deliberate (visual) disguises during the commission of crimes and in situations of public disorder (e.g. balaclavas, hooded sweatshirts, or scarves). The present research centres on the question: does facewear influence the way that consonants are produced, transmitted, and perceived? To examine the effects of facewear on the acoustic speech signal, various intensity, spectral, and temporal properties of spoken English consonants were measured. It was found that facewear can considerably alter the acoustic-phonetic characteristics of consonants. This was likely to be the result of both deliberate and involuntary changes to the talker’s speech productions, and of sound energy absorption by the facewear material. The perceptual consequences of the acoustic modifications to speech were assessed by way of a consonant identification study and a talker discrimination study. The results of these studies showed that auditory-only and auditory-visual consonant intelligibility, as well as the discrimination of unfamiliar talkers, may be greatly compromised when the observer’s judgements are based on ‘facewear speech’. The findings reported in this thesis contribute to our understanding of how auditory and visual information interact during natural speech processing. Furthermore, the results have important practical implications for legal cases in which speech produced through facewear is of pivotal importance. Forensic speech scientists are therefore advised to take the possible effects of facewear on speech into account when interpreting the outcome of their acoustic and auditory analyses of evidential speech recordings, and when evaluating the reliability of earwitness testimony.
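    The intensity and spectral properties mentioned in this abstract are standard acoustic-phonetic measures. As a purely illustrative sketch of the kind of measurement involved, the code below computes RMS intensity and spectral centre of gravity for a consonant interval of a mono recording; the file names, segment boundaries, and reliance on the soundfile package are assumptions, and this is not the thesis's actual analysis pipeline.

        import numpy as np
        import soundfile as sf  # assumes the soundfile package is available

        def consonant_measures(path, start_s, end_s):
            """RMS intensity (dB) and spectral centre of gravity (Hz) for the
            interval [start_s, end_s] of a mono recording."""
            signal, rate = sf.read(path)
            segment = signal[int(start_s * rate):int(end_s * rate)]
            rms_db = 20 * np.log10(np.sqrt(np.mean(segment ** 2)) + 1e-12)
            spectrum = np.abs(np.fft.rfft(segment)) ** 2
            freqs = np.fft.rfftfreq(len(segment), d=1.0 / rate)
            cog = np.sum(freqs * spectrum) / np.sum(spectrum)
            return rms_db, cog

        # Hypothetical comparison of the same /s/ token with and without facewear:
        # for path in ["s_no_mask.wav", "s_surgical_mask.wav"]:
        #     print(path, consonant_measures(path, 0.10, 0.25))

    Comparing such measures for matched tokens produced with and without a garment is one simple way to quantify the kind of intensity and spectral attenuation the thesis attributes partly to sound absorption by the facewear material.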