    Looking for the bouba-kiki effect in prelexical infants

    Adults and toddlers systematically associate certain pseudowords, such as 'bouba' and 'kiki', with round and spiky shapes, respectively. The ontogenetic origin of this so-called bouba-kiki effect is unknown: it could be an unlearned aspect of perception, appear with language exposure, or only emerge with the ability to produce speech sounds (i.e., babbling). We report the results of three experiments with five- and six-month-olds that found no bouba-kiki effect at all. We discuss the consequences of these findings for the emergence of cross-modal associations in infant speech perception.

    A novel form of perceptual attunement: Context-dependent perception of a native contrast in 14-month-old infants

    By the end of their first year of life, infants have become experts at discriminating the sounds of their native language, while they have lost the ability to discriminate non-native contrasts. This type of phonetic learning is referred to as perceptual attunement. In the present study, we investigated the emergence of a context-dependent form of perceptual attunement in infancy. Indeed, some native contrasts are not discriminated by adults in certain phonological contexts, due to the presence of a language-specific process that neutralizes the contrasts in those contexts. We used a mismatch design and recorded high-density electroencephalography (EEG) in French-learning 14-month-olds. Our results show that, like French adults, infants fail to discriminate a native voicing contrast (e.g., [f] vs. [v]) when it occurs in a specific phonological context (e.g., [ofbe] vs. [ovbe]; no mismatch response), while they successfully detect it in other phonological contexts (e.g., [ofne] vs. [ovne]; mismatch response). The present results demonstrate for the first time that by the age of 14 months, infants' phonetic learning relies not only on the processing of individual sounds, but also takes into account, in a language-specific manner, the phonological contexts in which these sounds occur.
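
    For readers unfamiliar with mismatch designs, the sketch below shows how a mismatch response is typically quantified from epoched EEG: as the deviant-minus-standard difference wave, averaged within a time window. A minimal sketch only; the array shapes, channel handling, and analysis window are illustrative assumptions, not the authors' actual pipeline.

        import numpy as np

        def mismatch_response(standard_epochs, deviant_epochs, times, window=(0.2, 0.4)):
            """Mean deviant-minus-standard difference amplitude per channel.

            standard_epochs, deviant_epochs: arrays (n_epochs, n_channels, n_times)
            times: time points in seconds, aligned to stimulus onset
            window: (start, end) in seconds over which to average the difference
            """
            # Grand-average ERP for each condition (average over epochs)
            erp_standard = standard_epochs.mean(axis=0)  # (n_channels, n_times)
            erp_deviant = deviant_epochs.mean(axis=0)
            # The mismatch response is the deviant-minus-standard difference wave
            difference_wave = erp_deviant - erp_standard
            # Average amplitude within the analysis window, per channel
            mask = (times >= window[0]) & (times <= window[1])
            return difference_wave[:, mask].mean(axis=1)  # (n_channels,)

        # Tiny synthetic example: 20 epochs, 4 channels, 100 time points
        rng = np.random.default_rng(0)
        times = np.linspace(-0.1, 0.6, 100)
        standard = rng.normal(size=(20, 4, 100))
        deviant = rng.normal(size=(20, 4, 100)) + 0.5  # offset stands in for a response
        print(mismatch_response(standard, deviant, times))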

    Lexical access in audiovisual and visual speech perception

    Get PDF
    Seeing the facial gestures of a speaker enhances phonemic identification in noise. The goal of this research was to assess whether this visual information can activate lexical representations. We investigated this question in adults (Experiments 1 to 4) and in children (Experiment 5). First, our results provide evidence that visual information about consonant (Experiment 1) and vowel (Experiment 2) identity contributes to lexical activation processes during word recognition when the auditory information is degraded by noise. We also demonstrated that the mere presentation of the first two phonemes, i.e., the articulatory gestures of the initial syllable, provides enough visual information to activate lexical representations and initiate the word recognition process (Experiments 3 and 4). However, our data suggest that in children up to the age of 10, visual speech contributes mostly to pre-lexical phonological, rather than lexical, processing (Experiment 5). Keywords: speech, visual and audiovisual speech, spoken word recognition, lexical access.
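
    The finding that a word's first phonemes suffice to initiate recognition fits the classic cohort picture: partial input narrows the set of lexical candidates. The toy sketch below illustrates that idea only; the mini-lexicon and phoneme coding are invented for the example and this is not the authors' model.

        # Toy cohort filter: hearing or seeing a word's initial phonemes
        # restricts recognition to the words sharing that onset.
        def cohort(prefix_phonemes, lexicon):
            """Return the words whose phoneme sequence starts with the prefix."""
            return [word for word, phonemes in lexicon.items()
                    if phonemes[:len(prefix_phonemes)] == prefix_phonemes]

        # Hypothetical mini-lexicon: orthographic form -> phoneme list
        LEXICON = {
            "bateau":  ["b", "a", "t", "o"],
            "balance": ["b", "a", "l", "an", "s"],
            "cadeau":  ["k", "a", "d", "o"],
        }

        # The articulatory gestures of the first two phonemes ("b", "a")
        # already narrow recognition down to the /ba/ cohort:
        print(cohort(["b", "a"], LEXICON))  # ['bateau', 'balance']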


    Resolving the bouba-kiki effect enigma by rooting iconic sound symbolism in physical properties of round and spiky objects

    The "bouba-kiki effect", where "bouba" is perceived as round and "kiki" as spiky, remains a puzzling enigma. We resolve it by combining mathematical findings largely unknown in the field with computational models and novel experimental evidence. We reveal that this effect relies on two acoustic cues: spectral balance and temporal continuity. We demonstrate that it is not speech-specific but rather rooted in physical properties of objects, which create audiovisual regularities in the environment. Round items are mathematically bound to produce, when hitting or rolling on a surface, lower-frequency spectra and more continuous sounds than same-size spiky objects. Finally, we show that adults are sensitive to such regularities. Hence, intuitive physics impacts language perception, and possibly language acquisition and evolution too.
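
    The two cues can be made concrete with a small sketch on synthetic impact sounds. Here "spectral balance" is approximated by the spectral centroid and "temporal continuity" by the smoothness of the amplitude envelope; these stand-in measures and the toy signals are assumptions for illustration, and the paper's exact analyses may differ.

        import numpy as np

        def spectral_centroid(signal, sample_rate):
            """Amplitude-weighted mean frequency of the signal's spectrum (Hz)."""
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
            return (freqs * spectrum).sum() / spectrum.sum()

        def envelope_continuity(signal, frame=256):
            """Higher values = smoother (more continuous) amplitude envelope."""
            n = len(signal) // frame
            env = np.abs(signal[: n * frame]).reshape(n, frame).mean(axis=1)
            # Penalize frame-to-frame jumps relative to the overall level
            return 1.0 / (1.0 + np.abs(np.diff(env)).mean() / (env.mean() + 1e-12))

        sr = 16000
        t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
        # "Round" object: low-frequency, smoothly decaying rolling/impact sound
        round_sound = np.sin(2 * np.pi * 200 * t) * np.exp(-3 * t)
        # "Spiky" object: higher-frequency sound interrupted into abrupt bursts
        spiky_sound = (np.sin(2 * np.pi * 2000 * t) * np.exp(-3 * t)
                       * (np.sin(2 * np.pi * 30 * t) > 0))

        # Lower centroid and higher continuity for the "round" sound:
        print(spectral_centroid(round_sound, sr), spectral_centroid(spiky_sound, sr))
        print(envelope_continuity(round_sound), envelope_continuity(spiky_sound))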


    Monolingual and bilingual infants' attention to talking faces: evidence from eye-tracking and Bayesian modeling

    Introduction: A substantial amount of research from the last two decades suggests that infants' attention to the eyes and mouth regions of talking faces could be a supporting mechanism by which they acquire their native language(s). Importantly, attentional strategies seem to be sensitive to three types of constraints: the properties of the stimulus, the infants' attentional control skills (which improve with age and brain maturation), and their previous linguistic and non-linguistic knowledge. The goal of the present paper is to present a probabilistic model that simulates infants' visual attention control to talking faces as a function of their language learning environment (monolingual vs. bilingual), attention maturation (i.e., age), and their increasing knowledge of the task at hand (detecting and learning to anticipate information displayed in the eyes or the mouth region of the speaker).
    Methods: To test the model, we first considered experimental eye-tracking data from monolingual and bilingual infants (aged between 12 and 18 months; in part already published) exploring a face speaking in their native language. In each of these conditions, we compared the proportion of total looking time on each of the two areas of interest (eyes vs. mouth of the speaker).
    Results: In line with previous studies, our experimental results show a strong bias for the mouth (over the eyes) region of the speaker, regardless of age. Furthermore, monolingual and bilingual infants appear to have different developmental trajectories, which is consistent with and extends previous results observed in the first year. Comparison of model simulations with experimental data shows that the model successfully captures patterns of visuo-attentional orientation through the three parameters that effectively modulate the simulated visuo-attentional behavior.
    Discussion: We interpret the parameter values and find that they adequately reflect the evolution of the strength and speed of anticipatory learning; we further discuss their descriptive and explanatory power.
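
    As a rough intuition for this kind of model, the sketch below simulates gaze allocation between the eyes and mouth of a talking face, with three free parameters loosely standing in for the strength and speed of anticipatory learning and for age-related attentional control. The parameter names, the softmax choice rule, and the value-update rule are illustrative assumptions, not the authors' actual model.

        import math
        import random

        def simulate_looking(n_trials, learning_rate, gain, control, seed=0):
            """Return the proportion of trials spent looking at the mouth."""
            rng = random.Random(seed)
            value = {"eyes": 0.5, "mouth": 0.5}  # expected informativeness
            mouth_looks = 0
            for _ in range(n_trials):
                # Softmax gaze choice; 'control' plays the role of attentional
                # control improving with age (sharper, less random choices).
                p_mouth = 1.0 / (1.0 + math.exp(-control * (value["mouth"] - value["eyes"])))
                region = "mouth" if rng.random() < p_mouth else "eyes"
                mouth_looks += region == "mouth"
                # Anticipatory learning: the mouth of a talking face delivers
                # redundant audiovisual speech information ('gain' scales it;
                # 0.2 is an assumed lower payoff for the eyes region).
                reward = gain if region == "mouth" else 0.2
                value[region] += learning_rate * (reward - value[region])
            return mouth_looks / n_trials

        # Stronger and faster anticipatory learning yields a larger mouth bias:
        print(simulate_looking(500, learning_rate=0.05, gain=1.0, control=3.0))
        print(simulate_looking(500, learning_rate=0.01, gain=0.6, control=3.0))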
