
    Motor representations underlie the reading of unfamiliar letter combinations

    Silent reading is a cognitive operation that produces verbal content with no vocal output. A relevant question is the extent to which this verbal content is processed as overt speech in the brain. To address this, we acquired sound, eye trajectories and lip dynamics during the reading of consonant-consonant-vowel (CCV) combinations that are infrequent in the language. We found that the duration of the first fixations on the CCVs during silent reading correlates with the duration of the transitions between consonants when the CCVs are actually uttered. With the aid of an articulatory model of the vocal system, we show that transitions measure the articulatory effort required to produce the CCVs. This means that first fixations during silent reading are lengthened when the CCVs require a greater laryngeal and/or articulatory effort to be pronounced. Our results support the idea that a speech motor code is used for the recognition of infrequent text strings during silent reading.
    Authors: Alan Taitz (Instituto de Física de Buenos Aires, CONICET–Universidad de Buenos Aires, Argentina); M. Florencia Assaneo (New York University, United States); Diego Edgar Shalóm (Instituto de Física de Buenos Aires, CONICET–Universidad de Buenos Aires, Argentina); Marcos Alberto Trevisan (Instituto de Física de Buenos Aires, CONICET–Universidad de Buenos Aires, Argentina).
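    A minimal sketch of the central correlation analysis, assuming hypothetical per-CCV measurements (the array names and values are illustrative placeholders, not the paper's data or pipeline):

        # Sketch: correlate first-fixation durations (silent reading) with
        # inter-consonant transition durations (overt speech), one entry per CCV.
        # All data below are hypothetical placeholders.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        transition_ms = rng.uniform(20, 120, size=40)   # measured when CCVs are uttered
        fixation_ms = 180 + 0.8 * transition_ms + rng.normal(0, 15, size=40)

        r, p = stats.pearsonr(transition_ms, fixation_ms)
        print(f"Pearson r = {r:.2f}, p = {p:.3g}")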

    The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation

    Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to convey complex multisensory information in a controlled experiment, in which our participants improvised onomatopoeias from noisy moving objects presented in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring), mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. We then applied the classifier to a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded, and how they interact, in the creation of novel onomatopoeias.
    Authors: Alan Taitz, María Florencia Assaneo and Marcos Alberto Trevisan (Instituto de Física de Buenos Aires, CONICET–Universidad de Buenos Aires, Argentina); Natalia Gabriela Elisei (Facultad de Medicina, Universidad de Buenos Aires, and CONICET, Argentina); Monica Noemi Tripodi (Universidad de Buenos Aires, Argentina); Laurent Cohen (Centre National de la Recherche Scientifique, Université Pierre et Marie Curie and Institut National de la Santé et de la Recherche Médicale, France); Jacobo Diego Sitt (Centre National de la Recherche Scientifique, Institut National de la Santé et de la Recherche Médicale and Université Pierre et Marie Curie, France; CONICET, Argentina).
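    The figure legends below identify the model as a linear discriminant analysis (LDA) classifier over phonological feature vectors. A minimal sketch with scikit-learn, using random placeholders in place of the 12-dimensional feature vectors described in the figures below:

        # Sketch: train an LDA classifier to decode movement type (hit/slide/ring)
        # from 12-dimensional phonological feature vectors. Data are hypothetical.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.random((120, 12))          # one averaged feature vector per onomatopoeia
        y = rng.choice(["hit", "slide", "ring"], size=120)

        clf = LinearDiscriminantAnalysis()
        scores = cross_val_score(clf, X, y, cv=5)   # chance level is ~1/3 for 3 classes
        print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")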

    Vocal effort modulates the motor planning of short speech structures

    Speech requires programming the sequence of vocal gestures that produce the sounds of words. Here we explored the timing of this program by asking our participants to pronounce, as quickly as possible, a sequence of consonant-consonant-vowel (CCV) structures appearing on screen. We measured the delay between visual presentation and voice onset. In the case of plosive consonants, produced by sharp and well-defined movements of the vocal tract, we found that delays are positively correlated with the duration of the transition between consonants. We then used a battery of statistical tests and mathematical vocal models to show that delays reflect the motor planning of the CCVs and that transitions are proxy indicators of the vocal effort needed to produce them. These results support the idea that the effort required to produce the sequence of movements of a vocal gesture modulates the onset of the motor plan.
    Authors: Alan Taitz, Diego Edgar Shalóm and Marcos Alberto Trevisan (Instituto de Física de Buenos Aires and Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires–CONICET, Argentina).
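    A minimal sketch of the delay analysis, assuming hypothetical per-trial measurements of presentation-to-voice-onset delay and inter-consonant transition duration (names and values are illustrative, not the paper's data):

        # Sketch: test whether voice-onset delays grow with the duration of the
        # transition between consonants in plosive CCVs. Hypothetical trial data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        transition_ms = rng.uniform(20, 120, size=60)
        onset_delay_ms = 450 + 1.2 * transition_ms + rng.normal(0, 30, size=60)

        res = stats.linregress(transition_ms, onset_delay_ms)
        print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, p = {res.pvalue:.3g}")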

    Experimental demonstration of a noise-tunable delay line with applications to phase synchronization

    In this paper we propose and demonstrate a discrete circuit capable of generating arbitrary time delays that depend on noise, either added externally or already present in the signal of interest due to a finite signal-to-noise ratio. We then demonstrate an application to the phase locking of signals by means of a standard phase-locked loop (PLL) design in which the usual voltage-controlled oscillator (VCO) is replaced by the noise-tunable delay line.
    Authors: Facundo Hugo Pessacg and Alan Taitz (Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, and CONICET, Argentina); Germán Agustín Patterson and Pablo Ignacio Fierens (Instituto Tecnológico de Buenos Aires and CONICET, Argentina); Diego Fernando Grosz (CONICET and Instituto Tecnológico de Buenos Aires, Argentina).
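    The abstract gives no circuit details. As a minimal numerical sketch of one plausible way noise can tune a delay (an illustrative threshold-crossing mechanism assumed here, not the authors' circuit): a subthreshold-to-threshold ramp crosses earlier, on average, as the added noise amplitude grows.

        # Sketch: mean first-crossing time of a ramp plus white noise.
        # Larger noise amplitude -> earlier average crossing -> shorter delay.
        # Illustrative mechanism only; not the circuit demonstrated in the paper.
        import numpy as np

        def mean_delay(noise_amp, threshold=1.0, ramp_rate=0.8, dt=1e-3, trials=500):
            """Average time for a noisy ramp to first cross the threshold."""
            rng = np.random.default_rng(3)
            t = np.arange(0.0, 5.0, dt)
            ramp = ramp_rate * t
            delays = []
            for _ in range(trials):
                noisy = ramp + noise_amp * rng.standard_normal(t.size)
                first = np.argmax(noisy >= threshold)   # index of first crossing
                delays.append(t[first])
            return float(np.mean(delays))

        for amp in (0.0, 0.2, 0.5):
            print(f"noise amplitude {amp}: mean delay {mean_delay(amp):.3f} s")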

    Decoding performance of the onomatopoeia within sensory modalities and their predictive power across sensory modalities.

    We used a machine learning algorithm to evaluate the performance of the novel onomatopoeias at classifying movements, shapes and sound frequencies. a. The system was trained to classify the movement types (hits, slides and rings). Each matrix contains the proportion of onomatopoeias classified as a given movement type (i.e. across the first row, hit onomatopoeias are classified as hit, slide or ring). High and low decoding performances are shown in yellow and blue, respectively. The process is repeated for all modalities, using as features either all phonemes, only the consonants or only the vowels. Consonants produce better performances than vowels in all sensory modalities, with no synergistic effect when consonants and vowels are taken together (except for the V condition). b. The system was trained to classify shapes and sound frequencies. Decoding performances are maximized using only the vowels, for all sensory modalities. In the AV case, shape information is virtually lost (blue upper-right and lower-left blocks). Decoding performances of rounded shapes producing low-frequency sounds and of spiky shapes producing high-frequency sounds are enhanced, replicating the phonological link between shapes and sound frequencies already found using ANOVA (Fig 2B, A and V panels). c. Cross-modality tests. The system was trained with the onomatopoeias of one sensory modality and tested on each of the other modalities. S1 Table contains the numerical values of the decoding accuracies.
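    A minimal sketch of how such row-normalized matrices can be computed, assuming hypothetical true and predicted labels from a classifier like the one sketched above:

        # Sketch: row-normalized confusion matrix (proportion of onomatopoeias of
        # each true movement type assigned to each predicted type). Hypothetical data.
        import numpy as np
        from sklearn.metrics import confusion_matrix

        labels = ["hit", "slide", "ring"]
        rng = np.random.default_rng(4)
        y_true = rng.choice(labels, size=120)
        y_pred = np.where(rng.random(120) < 0.7, y_true, rng.choice(labels, size=120))

        cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true")
        print(cm)   # rows sum to 1; the diagonal is the per-class decoding performance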

    Participants created onomatopoeias from audio, visual and audiovisual stimuli of moving objects.

    Blocks of audio (A), visual (V) and audiovisual (AV) stimuli were constructed using objects of different shape (rounded or spiky) and size (big or small) performing movements (rings, hits or slides) with sounds of two different frequencies (high or low). a. The participants produced onomatopoeic sounds representing the stimuli for all combinations of variables in each block. b. The onomatopoeias were transcribed into the International Phonetic Alphabet (IPA). Each phoneme was associated with a binary 12-dimensional vector of low-level phonological features. c. Each onomatopoeia was characterized by the matrix of its phonemes in the phonological feature space, which was then averaged across phonemes to a final 12-dimensional vector.
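    A minimal sketch of the featurization in panels b and c, assuming a hypothetical phoneme-to-feature table (the actual 12 phonological features used in the paper are not reproduced here):

        # Sketch: map each phoneme of an IPA-transcribed onomatopoeia to a binary
        # feature vector, stack into a matrix, then average across phonemes.
        # The feature table below is a hypothetical 12-dimensional placeholder.
        import numpy as np

        PHONEME_FEATURES = {
            "p": np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0]),
            "i": np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]),
            "ŋ": np.array([1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1]),
        }

        def featurize(onomatopoeia):
            """Return the mean 12-d feature vector of an IPA-transcribed string."""
            matrix = np.stack([PHONEME_FEATURES[ph] for ph in onomatopoeia])
            return matrix.mean(axis=0)

        print(featurize("piŋ"))   # e.g. a 'ping'-like onomatopoeia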

    Decoding the movement type of cross-linguistic onomatopoeias.

    a. We trained the LDA model on the AV onomatopoeias and used it as a movement-type classifier (hit, slide or ring) for the cross-linguistic onomatopoeias extracted from Wikipedia [26]. The list was restricted to non-human/animal actions with onomatopoeias in 10 languages for each action (balloon bursting, camera shutter, etc.). The color code corresponds to the percentage of languages in which the onomatopoeias are classified in each movement category. b. The same actions were classified by a group of 20 human raters, showing good agreement with the model-derived predictions.
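    A minimal sketch of the cross-linguistic aggregation, assuming hypothetical per-language predictions from a trained classifier (the actions and labels below are placeholders):

        # Sketch: classify each language's onomatopoeia for every action, then
        # report the percentage of languages per predicted movement category.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(5)
        actions = ["balloon bursting", "camera shutter"]
        rows = [(a, lang, rng.choice(["hit", "slide", "ring"]))
                for a in actions for lang in range(10)]   # 10 languages per action
        df = pd.DataFrame(rows, columns=["action", "language", "predicted"])

        # percentage of languages classified into each movement category, per action
        pct = (df.groupby("action")["predicted"]
                 .value_counts(normalize=True)
                 .mul(100).rename("pct_languages").reset_index())
        print(pct)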
