Phonemes and Syllables in Speech Perception: size of the attentional focus in French.
A study by Pitt and Samuel (1990) found that English speakers could narrowly focus attention on a precise phonemic position inside spoken words [1]. This led the authors to argue that the phoneme, rather than the syllable, is the primary unit of speech perception. Other evidence, obtained with a syllable detection paradigm, has been put forward to propose that the syllable is the unit of perception; yet, these experiments were run with French speakers [2]. In the present study, we adapted Pitt and Samuel's phoneme detection experiment to French and found that French subjects behave exactly like English subjects: they too can focus attention on a precise phoneme. To explain both this result and the established sensitivity to syllabic structure, we propose that the perceptual system automatically parses the speech signal into a syllabically structured phonological representation.
Phonological representations and repetition priming
A ubiquitous phenomenon in psychology is the "repetition effect": a repeated stimulus is processed better on its second occurrence than on its first. Yet, what counts as a repetition? When a spoken word is repeated, is it the acoustic shape or the linguistic type that matters? In the present study, we contrasted the contributions of acoustic and phonological features by using participants with different linguistic backgrounds: they came from two populations sharing a common vocabulary (Catalan) yet possessing different phonemic systems. They performed a lexical decision task with lists containing words that were repeated verbatim, as well as words that were repeated with one phonetic feature changed. The feature changes were phonemic, i.e. linguistically relevant, for one population but not for the other. The results revealed that the repetition effect was modulated by linguistic, not acoustic, similarity: it depended on the subjects' phonemic system.
Perceptual adjustment to time-compressed Speech: a cross-linguistic study
Previous research has shown that, when hearers listen to artificially speeded speech, their performance improves over the course of 10-15 sentences, as if their perceptual system were "adapting" to these fast rates of speech. In this paper, we further investigate the mechanisms responsible for such effects. In Experiment 1, we report that, for bilingual speakers of Catalan and Spanish, exposure to compressed sentences in either language improves performance on sentences in the other language. Experiment 2 reports that Catalan/Spanish transfer of performance occurs even in monolingual speakers of Spanish who do not understand Catalan. In Experiment 3, we study another pair of languages, namely English and French, and report no transfer of adaptation between these two languages for English-French bilinguals. Experiment 4, with monolingual English speakers, assesses transfer of adaptation from French, Dutch, and English toward English. Here we find that there is no adaptation from French and intermediate adaptation from Dutch. We discuss the locus of adaptation to compressed speech and relate our findings to other cross-linguistic studies in speech perception.
Improving accuracy and power with transfer learning using a meta-analytic database
Typical cohorts in brain imaging studies are not large enough for systematic
testing of all the information contained in the images. To build testable
working hypotheses, investigators thus rely on analysis of previous work,
sometimes formalized in a so-called meta-analysis. In brain imaging, this
approach underlies the specification of regions of interest (ROIs) that are
usually selected on the basis of the coordinates of previously detected
effects. In this paper, we propose to use a database of images, rather than
coordinates, and frame the problem as transfer learning: learning a
discriminant model on a reference task to apply it to a different but related
new task. To facilitate statistical analysis of small cohorts, we use a sparse
discriminant model that selects predictive voxels on the reference task and
thus provides a principled procedure to define ROIs. The benefits of our
approach are twofold. First it uses the reference database for prediction, i.e.
to provide potential biomarkers in a clinical setting. Second it increases
statistical power on the new task. We demonstrate on a set of 18 pairs of
functional MRI experimental conditions that our approach gives good prediction.
In addition, on a specific transfer situation involving different scanners at
different locations, we show that voxel selection based on transfer learning
leads to higher detection power on small cohorts. (MICCAI 2012, Nice, France)
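The voxel-selection idea described above can be sketched in a few lines. This is a minimal illustration on simulated data, not the paper's actual pipeline: an L1-penalized logistic regression stands in for the sparse discriminant model, and all cohort sizes, voxel counts, and effect sizes are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 500
informative = np.arange(10)  # voxels carrying signal in both tasks (assumed)

def make_task(n_subjects):
    """Simulate a two-condition task: only `informative` voxels differ."""
    X = rng.normal(size=(n_subjects, n_voxels))
    y = rng.integers(0, 2, size=n_subjects)
    X[:, informative] += 1.5 * (2 * y[:, None] - 1)
    return X, y

# Reference task (large cohort): a sparse model selects predictive voxels.
X_ref, y_ref = make_task(200)
sparse = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
sparse.fit(X_ref, y_ref)
roi = np.flatnonzero(sparse.coef_.ravel())  # data-driven ROI, not coordinates

# New, related task (small cohort): analysis restricted to the transferred ROI.
X_new, y_new = make_task(20)
clf = LogisticRegression().fit(X_new[:, roi], y_new)
acc = clf.score(X_new[:, roi], y_new)
```

Restricting the small-cohort analysis to the handful of voxels selected on the reference task is what buys the statistical power: far fewer comparisons than testing all voxels.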
The Influence of Native-Language Phonology on Lexical Access: Exemplar-Based Versus Abstract Lexical Entries
Decoding Visual Percepts Induced by Word Reading with fMRI
Word reading involves multiple cognitive processes. To infer which word is being visualized, the brain first processes the visual percept, deciphers the letters and bigrams, and activates different words based on context or prior expectations such as word frequency. In this contribution, we use supervised machine learning techniques to decode the first step of this processing stream using functional Magnetic Resonance Images (fMRI). We build a decoder that predicts the visual percept formed by four-letter words, allowing us to identify words that were not present in the training data. To do so, we cast the learning problem as multiple classification problems after describing words with multiple binary attributes. This work goes beyond the identification or reconstruction of single letters or simple geometrical shapes and addresses a challenging estimation problem, namely the prediction of multiple variables from a single observation, hence facing the problem of learning multiple predictors from correlated inputs.
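The "multiple classification problems" formulation can be illustrated schematically: one binary classifier per attribute, with an unseen word identified by its nearest attribute code. The words, binary codes, and simulated voxel responses below are hypothetical stand-ins, not the study's stimuli or features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical lexicon: each four-letter word described by binary attributes.
lexicon = {
    "lamp": np.array([1, 0, 1, 0, 1, 1]),
    "ring": np.array([0, 1, 1, 1, 0, 0]),
    "door": np.array([1, 1, 0, 0, 0, 1]),
    "fish": np.array([0, 0, 0, 1, 1, 0]),
}
n_attr, n_vox = 6, 50
W = rng.normal(size=(n_attr, n_vox))  # simulated linear voxel encoding

def simulate_bold(code):
    """Toy BOLD pattern: linear in the attribute code, plus noise."""
    return code @ W + 0.1 * rng.normal(size=n_vox)

# Train one binary classifier per attribute on three words only...
train_words = ["lamp", "ring", "door"]
X = np.stack([simulate_bold(lexicon[w]) for w in train_words for _ in range(30)])
Y = np.stack([lexicon[w] for w in train_words for _ in range(30)])
clfs = [LogisticRegression().fit(X, Y[:, a]) for a in range(n_attr)]

# ...then decode a word absent from the training data by matching the
# predicted attribute vector against the full lexicon.
x_test = simulate_bold(lexicon["fish"])
pred = np.array([c.predict(x_test[None])[0] for c in clfs])
best = min(lexicon, key=lambda w: np.sum(lexicon[w] != pred))
```

Because each classifier predicts an attribute rather than a word identity, the decoder can, in principle, name words it never saw, which is the point made in the abstract.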
Language and its acquisition: biological and psychological foundations
Emmanuel Dupoux, director of studies; Christophe Pallier, CNRS research scientist. Practical and theoretical workshops in the cognitive sciences. This course aims to approach, in a concrete way, some of the key concepts of each of the founding disciplines of cognitive science. Sessions are run in a collaborative and interactive mode; students are split into small interdisciplinary groups and work on texts, data, or exercises.
Modeling Conventionalization and Predictability within MWEs at the Brain Level
While expressions have traditionally been binarized as compositional or noncompositional in linguistic theory, Multiword Expressions (MWEs) demonstrate finer-grained distinctions. Using Association Measures such as Pointwise Mutual Information and Dice's Coefficient, MWEs can be characterized as having different degrees of conventionalization and predictability. Our goal is to investigate how these gradiences could reflect cognitive processes. In this study, fMRI recordings of naturalistic narrative comprehension are used to probe to what extent these computational measures, and the cognitive processes they could operationalize, are observable during on-line sentence processing. Our results show that Dice's Coefficient, representing lexical predictability, is a better predictor of neural activation for processing MWEs. Overall, our experimental approach demonstrates how we can test the cognitive plausibility of computational metrics by comparing them against neuroimaging data.
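The two association measures named above can be computed directly from co-occurrence counts. The formulas are standard; the counts below are invented for illustration and do not come from the study's corpus.

```python
import math

def pmi(count_xy, count_x, count_y, n):
    """Pointwise Mutual Information: log2( p(x,y) / (p(x) * p(y)) ),
    estimated from corpus counts over n tokens."""
    return math.log2((count_xy / n) / ((count_x / n) * (count_y / n)))

def dice(count_xy, count_x, count_y):
    """Dice's Coefficient: 2 * f(x,y) / (f(x) + f(y))."""
    return 2 * count_xy / (count_x + count_y)

# Toy counts for a hypothetical bigram in a 10,000-token corpus:
score_pmi = pmi(count_xy=30, count_x=100, count_y=60, n=10000)
score_dice = dice(count_xy=30, count_x=100, count_y=60)
```

PMI rewards co-occurrence beyond chance and so tracks conventionalization, while Dice's Coefficient, which is bounded in [0, 1] and ignores corpus size, tracks how predictable the two words are from each other, matching the roles the abstract assigns them.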
Epenthetic vowels in Japanese: A perceptual illusion?
We report a set of experiments demonstrating that the number of phonemes perceived in a stimulus depends on the native language of the listener. Comparing French and Japanese subjects, we found that the phonotactic properties of the native language can induce subjects to insert "illusory" segments. In Experiment 1, we varied the duration of an inter-consonantal vowel [u] in stimuli such as ebuzo and found that, unlike the French, Japanese listeners report that the vowel [u] is present even in stimuli in which the vowel is absent. In Experiments 2 and 3, using an ABX task, we show that Japanese subjects have trouble discriminating stimuli that contain an [u] vowel from stimuli in which the vowel is absent (e.g., ebuzo vs. ebzo). However, they can easily discriminate items that contain one versus two [u] vowels (e.g., ebuzo vs. ebuuzo), a distinctive contrast in Japanese. Results for French subjects are reversed.