
    Adaptive Processes in Speech Perception: Contributions from Cerebral and Cerebellar Cortices

    In the sensorimotor domain, adaptation to distorted sensory input has been well characterized and is largely attributed to learning mechanisms in the cerebellum that adjust motor output to achieve the same desired sensory outcome. Our interest in the role of the cerebellum in cognitive processes has led us to question whether it also contributes to adaptation in tasks that do not require voluntary motor output. Speech perception is a domain with many examples of adaptation that are guided by both sensory and cognitive processes, without intentional motor involvement. Thus, we investigated behavioral and neural characteristics of speech perception adaptation to spectrally distorted words, using a noise-vocoded speech manipulation that mimics cochlear implants. We demonstrated that adaptation to spectrally distorted words can be achieved without explicit feedback, either by gradually increasing the severity of the distortion or by using an intermediate distortion during training. We identified regions in both the cerebellar and cerebral cortices that showed differences in neural responses before and after training. In the cerebellum, these included regions in lobules V and VI and Crus I. In the cerebrum, they included regions in the inferior frontal gyrus, the superior temporal sulcus, and the posterior inferior/middle temporal gyrus. In some of these regions, we further found changes in the magnitude of the neural responses that corresponded to the degree of behavioral improvement in performance. To gain insight into the nature of the interactions between cerebral and cerebellar cortices and the types of representations involved in speech perception adaptation, we conducted a simple functional connectivity analysis using cerebellar seed regions of interest. We found interactions between the cerebellum and cerebral cortex that depended on the location of the cerebellar region. Overall, our behavioral and functional neuroimaging results point to cerebellar involvement in speech perception adaptation, and we conclude with a discussion of the learning mechanisms and neuroanatomical pathways that may support such plasticity.
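    The noise-vocoding manipulation referenced above can be sketched in a few lines: the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelopes re-modulate band-limited noise. This is a minimal illustration under assumed parameters (the band count, cutoffs, and filter order below are placeholders, not the study's settings); fewer bands yield a more severe distortion, which is one way training severity could be graded.

        # Minimal noise-vocoder sketch (assumed parameters, not the study's).
        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def noise_vocode(x, fs, n_bands=4, lo=100.0, hi=4000.0):
            """Keep each band's amplitude envelope; replace fine structure with noise."""
            edges = np.geomspace(lo, hi, n_bands + 1)          # log-spaced band edges
            noise = np.random.randn(len(x))
            out = np.zeros(len(x))
            for f1, f2 in zip(edges[:-1], edges[1:]):
                sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
                env = np.abs(hilbert(sosfilt(sos, x)))         # band envelope
                out += env * sosfilt(sos, noise)               # envelope-modulated noise
            return out / (np.max(np.abs(out)) + 1e-12)         # peak-normalize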

    Comprehension of Morse Code Predicted by Item Recall From Short-Term Memory

    Published online: Sep 7, 2021. Purpose: Morse code became widely used for telegraphy, radio and maritime communication, and military operations, and remains popular with ham radio operators. Some skilled users of Morse code are able to comprehend a full sentence as they listen to it, while others must first transcribe the sentence into its written letter sequence. Morse thus provides an interesting opportunity to examine comprehension differences in the context of skilled acoustic perception. Measures of comprehension and short-term memory show a strong correlation across multiple forms of communication. This study tests whether this relationship holds for Morse and investigates its underlying basis. Our analyses examine Morse and speech immediate serial recall, focusing on established markers of echoic storage, phonological-articulatory coding, and lexical-semantic support. We show a relationship between Morse short-term memory and Morse comprehension that is not explained by Morse perceptual fluency. In addition, we find that poorer serial recall for Morse compared to speech is primarily due to poorer item memory for Morse, indicating differences in lexical-semantic support. Interestingly, individual differences in speech item memory are also predictive of individual differences in Morse comprehension. Conclusions: We point to a psycholinguistic framework to account for these results, concluding that Morse functions like “reading for the ears” (Maier et al., 2004) and that underlying differences in the integration of phonological and lexical-semantic knowledge impact both short-term memory and comprehension. The results provide insight into individual differences in the comprehension of degraded speech and strategies that build comprehension through listening experience. This work was supported by NIMH grant R01-MH59256 (to JAF). Sara Guediche, now at BCBL, is supported by funding from the European Union’s Horizon 2020 Marie Sklodowska-Curie grant agreement No. 799554, the Basque Government through the Basque Excellence Research Centers 2018-2021 program, and the Spanish State Agency Severo Ochoa excellence accreditation SEV-2015-0490 (awarded to the BCBL). Thanks to Marina Kalashnikova and members of the Spoken Language Interest Group for helpful discussions. The authors thank Maryam Khatami, Jody Manners, Corrine Durisko, and Tanisha Hill-Jarrett for assisting with the project. We also thank the ham radio community, especially Paul Jacob.
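    To make the item-versus-order distinction concrete, here is a hypothetical scoring sketch (not the study's scoring code): item memory counts recalled elements regardless of position, while strict serial scoring requires the correct position, which is where Morse recall lagged behind speech.

        # Hypothetical item vs. serial-order scoring for immediate serial recall.
        def score_recall(presented, recalled):
            n = len(presented)
            item = sum(r in presented for r in recalled) / n               # position-free
            order = sum(p == r for p, r in zip(presented, recalled)) / n  # strict position
            return item, order

        print(score_recall(list("BKTRM"), list("BKRTM")))  # (1.0, 0.6): items kept, order lost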

    Compensatory cross‑modal effects of sentence context on visual word recognition in adults

    Published online: 11 February 2021. Reading involves mapping combinations of a learned visual code (letters) onto meaning. Previous studies have shown that when visual word recognition is challenged by visual degradation, one way to mitigate these negative effects is to provide “top–down” contextual support through a written congruent sentence context. Crowding is a naturally occurring visual phenomenon that impairs object recognition and also affects the recognition of written stimuli during reading. Thus, access to a supporting semantic context via a written text is vulnerable to the detrimental impact of crowding on letters and words. Here, we suggest that an auditory sentence context may provide an alternative source of semantic information that is not influenced by crowding, thus providing “top–down” support cross-modally. The goal of the current study was to investigate whether adult readers can cross-modally compensate for crowding in visual word recognition using an auditory sentence context. The results show a significant cross-modal interaction between the congruency of the auditory sentence context and visual crowding, suggesting that interactions can occur across multiple levels of processing and across different modalities to support reading processes. These findings highlight the need for reading models to specify in greater detail how top–down, cross-modal and interactive mechanisms may allow readers to compensate for deficiencies at early stages of visual processing. This research is supported by the Basque Government through the BERC 2018-2021 program; the Spanish State Research Agency through the BCBL Severo Ochoa excellence accreditation (SEV-2015-0490); the “Programa Estatal de Promoción del Talento y su Empleabilidad en I+D+i” fellowship (reference number PRE2018-083945) to C.C.; funding from the European Union’s Horizon 2020 Marie Sklodowska-Curie grant agreement No. 799554 to S.G.; and grants from the Spanish Ministry of Science and Innovation, Ramon y Cajal RYC-2015-1735 and Plan Nacional RTI2018-096242-B-I0, to M.L.

    Speech perception under adverse conditions: Insights from behavioral, computational, and neuroscience research

    Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech. © 2014 Guediche, Blumstein, Fiez and Holt
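    The prediction-error learning algorithms highlighted here can be illustrated with a delta-rule update, a deliberately simple stand-in (the linear mapping, feature coding, and learning rate are assumptions, not a model from the article): a mapping from acoustic features to perceptual categories is nudged in proportion to the mismatch between the expected and observed outcome.

        # Delta-rule sketch: a prediction error signal drives adaptive retuning.
        import numpy as np

        def delta_rule_update(W, x, target, lr=0.05):
            prediction = W @ x                  # current percept given acoustic features x
            error = target - prediction        # prediction error signal
            W = W + lr * np.outer(error, x)    # shift the mapping to reduce the error
            return W, error

    Repeated small updates of this kind let the mapping track short-term deviations while a small learning rate preserves the stability of long-term regularities, the balance the article emphasizes.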

    Semantic priming effects can be modulated by crosslinguistic interactions during second-language auditory word recognition

    Published online by Cambridge University Press: 24 February 2020. The current study investigates how second-language auditory word recognition, in early and highly proficient Spanish–Basque (L1-L2) bilinguals, is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status) and the semantic relatedness of a preceding prime were manipulated. Experiment 1 examined word recognition performance in noisy listening conditions that introduce a high degree of uncertainty, whereas Experiment 2 employed clear listening conditions, with low uncertainty. Under noisy listening conditions, semantic priming effects interacted with phonological cognate status: for word recognition accuracy, a related prime overcame inhibitory effects of phonological overlap between target words and their translations. These findings are consistent with models of bilingual word recognition that incorporate crosslinguistic phonological-lexical-semantic interactions. Moreover, they suggest an interplay between L2-L1 interactions and the integration of information across acoustic and semantic levels of processing in flexibly mapping the speech signal onto spoken words under adverse listening conditions. This research was funded by the Spanish Ministry of Science and Innovation (Grant PSI2017-82563-P, awarded to A.G.S.), the Netherlands Organization for Scientific Research (NWO Veni grant 275-89-027, awarded to M.B.), the Basque Government through the BERC 2018-2021 program, the Spanish State Agency Severo Ochoa excellence accreditation SEV-2015-0490, Programme for Centres/Units of Excellence (awarded to the BCBL), and the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 799554.

    Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses

    Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that regions beyond the primary motor cortex (M1), including other cortical and subcortical regions, help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge regarding the neural mechanisms underlying neuromotor speech control, which holds promise for the non-invasive study of neural dysfunctions involved in motor-speech disorders. This work was supported by the Spanish Ministry of Economy and Competitiveness through the Juan de la Cierva Fellowship (FJCI-2015-26814) and the Ramon y Cajal Fellowship (RYC-2017-21845), the Spanish State Research Agency through the BCBL “Severo Ochoa” excellence accreditation (SEV-2015-0490), the Basque Government (BERC 2018-2021), and the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant (No. 799554).
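    As a rough illustration of the MVPA step, a linear classifier can be cross-validated on trial-by-voxel patterns from a region of interest; the data shapes and labels below are placeholders, not the study's data or pipeline.

        # Toy MVPA decoding sketch (placeholder data, not the study's).
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 200))   # 80 trials x 200 ROI voxels
        y = np.repeat([0, 1], 40)            # e.g., alveolar vs. bilabial trials

        acc = cross_val_score(LinearSVC(), X, y, cv=5)
        print(f"mean decoding accuracy: {acc.mean():.2f} (chance = 0.50)")

    Above-chance cross-validated accuracy in a region is the evidence that its voxel patterns carry information about the articulatory gesture.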

    Neural substrates of subphonemic variation and lexical competition in spoken word recognition

    In spoken word recognition, subphonemic variation influences lexical activation, with sounds near a category boundary increasing phonetic competition as well as lexical competition. The current study investigated the interplay of these factors using a visual world task in which participants were instructed to look at a picture of an auditory target (e.g. peacock). Eyetracking data indicated that participants were slowed when a voiced onset competitor (e.g. beaker) was also displayed, and this effect was amplified when acoustic-phonetic competition was increased. Simultaneously collected fMRI data showed that several brain regions were sensitive to the presence of the onset competitor, including the supramarginal, middle temporal, and inferior frontal gyri, and functional connectivity analyses revealed that the coordinated activity of left frontal regions depends on both acoustic-phonetic and lexical factors. Taken together, results suggest a role for frontal brain structures in resolving lexical competition, particularly as atypical acoustic-phonetic information maps onto the lexicon. Research was supported by National Institutes of Health (NIH) grant R01 DC013064 to EBM and NIH NIDCD grant R01 DC006220 to SEB. SG was supported by the Spanish Ministry of Economy and Competitiveness through the Severo Ochoa Programme for Centres/Units of Excellence in R&D (SEV-2015-0490). The contents of this paper reflect the views of the authors and not those of the funding agencies.
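    The functional connectivity analyses mentioned here (and the seed-based analysis in the cerebellar study above) can be approximated by seed-based correlation, sketched below under the assumption of already-preprocessed time series (the shapes are illustrative): each region's correlation with the seed time course indexes coordinated activity.

        # Seed-based connectivity sketch: Pearson r of a seed vs. each region.
        import numpy as np

        def seed_connectivity(seed_ts, region_ts):
            """seed_ts: (T,) time series; region_ts: (T, n_regions)."""
            z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
            z_reg = (region_ts - region_ts.mean(0)) / region_ts.std(0)
            return (z_reg * z_seed[:, None]).mean(0)   # one r per region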

    Written sentence context effects on acoustic-phonetic perception: fMRI reveals cross-modal semantic-perceptual interactions

    Available online 3 October 2019. This study examines cross-modality effects of a semantically-biased written sentence context on the perception of an acoustically-ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in the right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying similar underlying computations at play. The findings suggest that the mechanisms that integrate information across modality and across sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, the results support interactive accounts of speech and language processing. This work was supported in part by the National Institutes of Health, NIDCD grant R01 DC006220.

    Brain-behavior relationships in incidental learning of non-native phonetic categories

    Available online 12 September 2019. Research has implicated the left inferior frontal gyrus (LIFG) in mapping acoustic-phonetic input to sound category representations, both in native speech perception and non-native phonetic category learning. At issue is whether this sensitivity reflects access to phonetic category information per se or to explicit category labels, the latter often being required by experimental procedures. The current study employed an incidental learning paradigm designed to increase sensitivity to a difficult non-native phonetic contrast without inducing explicit awareness of the categorical nature of the stimuli. Functional MRI scans revealed frontal sensitivity to phonetic category structure both before and after learning. Additionally, individuals who succeeded most on the learning task showed the largest increases in frontal recruitment after learning. Overall, results suggest that processing novel phonetic category information entails a reliance on frontal brain regions, even in the absence of explicit category labels. This research was supported by NIH grant R01 DC013064 to EBM and NIH NIDCD grant R01 DC006220 to SEB. The authors thank F. Sayako Earle for assistance with stimulus development; members of the Language and Brain lab for help with data collection and their feedback throughout the project; Elisa Medeiros for assistance with collection of fMRI data; Paul Taylor for assistance with neuroimaging analyses; and attendees of the 2016 Meeting of the Psychonomic Society and the 2017 Meeting of the Society for the Neurobiology of Language for helpful feedback on this project. We also extend thanks to two anonymous reviewers for helpful feedback on a previous version of this manuscript.

    Cosavirus, Salivirus and Bufavirus in Diarrheal Tunisian Infants

    Three newly discovered viruses have recently been described in diarrheal patients: Cosavirus (CosV) and Salivirus (SalV), two picornaviruses, and Bufavirus (BuV), a parvovirus. The detection rate and the role of these viruses remain to be established in acute gastroenteritis (AGE) in diarrheal Tunisian infants. From October 2010 through March 2012, stool samples were collected from 203 children <5 years old suffering from AGE and attending the Children's Hospital in Monastir, Tunisia. All samples were screened for CosV, SalV and BuV, as well as for norovirus (NoV) and group A rotavirus (RVA), by molecular methods. Samples positive for any of the three screened viruses were also tested for astrovirus, sapovirus, adenovirus, and Aichi virus, then genotyped when technically feasible. During the study period, 11 (5.4%) samples were positive for one of the three investigated viruses: 2 (1.0%) CosV-A10, 7 (3.5%) SalV-A1 and 2 (1.0%) BuV-1, whereas 71 (35.0%) children were infected with NoV and 50 (24.6%) with RVA. No mixed infections involving the three viruses were found, but co-infections with up to 4 classic enteric viruses were found in all cases. Although these viruses are suspected to be responsible for AGE in children, our data show that this association is uncertain, since all infected children also presented infections with several enteric viruses, suggesting potential water-borne transmission. Therefore, further studies with large cohorts of healthy and diarrheal children will be needed to evaluate their clinical role in AGE.