9 research outputs found
Contribution of statistical cues to the mechanism of segmenting speech into words (early and mature stages of language development)
Speech is a continuous signal. Yet its continuous nature hardly seems to pose a problem for everyday listening: the listener's subjective experience is a string of distinct words. How do infants discover the sound forms of the words of their language? How do adults access words in the continuous flow of speech? The aim of this thesis is to contribute to a better understanding of speech segmentation mechanisms. The first part investigates the segmentation problem when speech is regarded as a purely auditory signal; the second part takes into account the audiovisual nature of speech.

In the first part, we explore some major questions related to the segmentation problem. (1) How do infants bootstrap speech segmentation? We present experimental data showing that French-learning 8-month-old infants can combine transitional probabilities between syllables (TPs) with a familiar word, /mamã/ (Mommy), to segment a relatively complex artificial language. We propose that infants may combine top-down and bottom-up strategies to start parsing speech into word constituents. (2) Does prior linguistic knowledge influence the segmentation of a novel language? Our results show that French-speaking adults' knowledge of the phonotactic regularities of their first language modulates their ability to use TPs to segment a novel language. Together with existing evidence on the relative weight of TPs and other segmentation cues, these data allow us to situate TP cues within a hierarchical framework of speech segmentation cues.

In the second part, we consider the contribution of visual information to speech perception and ask a new question: (3) What are the respective contributions of the segmentation cues available in the auditory and visual modalities? Our data suggest that when adults are presented with audiovisual speech, the segmentation mechanism based on TP computations operates on the (phoneme, viseme) pairs of the syllables as a single perceptual unit. Moreover, the segmentation information available in the auditory and visual modalities considered independently does not seem to be exploited.

BOULOGNE-BU Psych. Henri Pieron (920125201) / Sudoc
Blending into the Crowd: Electrophysiological Evidence of Gestalt Perception of a Human Dyad: Extended discussion and theoretical viewpoint
International audience

In this commentary, we provide further discussion and interpretation of a recent article entitled “Blending into the Crowd: Electrophysiological Evidence of Gestalt Perception of a Human Dyad”, published one year ago by the first author of the present commentary. First, drawing a parallel between the experiment described in that article and a closely comparable experimental study, we propose that the neural integration process evidenced when seeing two human shapes close in space is a marker of the categorization of a stimulus as a group of humans (two, here) represented as an entity per se. We also highlight that the original article provides a new kind of evidence for the primacy of global visual processing over local elements. Lastly, we suggest that holistic perceptual processing of a dyad, and more generally of a group, might guide individuals' actions in response to intentions and behaviors at the group level.

The Frequency Tagging paradigm, in which different visual stimuli are presented at different frequencies, has primarily been used to investigate low-level visual processing. More recently, studies have focused on higher-level processes, notably the hierarchical perceptual organization of complex objects and scenes. In these paradigms, the response at a stimulation frequency constitutes a marker of the stimulus-specific neural response, while objective evidence of the perceptual integration of stimuli into a holistic representation is provided by the response at certain intermodulation (IM) components, when such components exist.
Suprasegmental Information Affects Processing of Talking Faces at Birth
From birth, newborns show a preference for faces talking in a native language over silent faces. The present study addresses two questions left unanswered by previous research: (a) Does familiarity with the language play a role in this process? and (b) Are all the linguistic and paralinguistic cues necessary? Experiment 1 extended newborns' preference from native speakers to non-native ones. Given that fetuses and newborns are sensitive to the prosodic characteristics of speech, Experiments 2 and 3 presented faces talking native and non-native languages with the speech stream low-pass filtered. Results showed that newborns preferred looking at a person who talked to them even when only prosodic cues were available, for both languages. Nonetheless, a familiarity preference for the previously talking face was observed in the "normal speech" condition (Experiment 1) and a novelty preference in the "filtered speech" condition (Experiments 2 and 3). This asymmetry reveals that newborns process these two types of stimuli differently and that they may already be sensitive to a mismatch between the articulatory movements of a face and the corresponding speech sounds.
Explicit access to phonetic representations in 3-month-old infants