463 research outputs found

    Adjudicating between local and global architectures of predictive processing in the subcortical auditory pathway

    Predictive processing, a leading theoretical framework for sensory processing, suggests that the brain constantly generates predictions about the sensory world and that perception emerges from the comparison between these predictions and the actual sensory input. This requires two distinct neural elements: generative units, which encode the model of the sensory world, and prediction error units, which compare these predictions against the sensory input. Although predictive processing is generally portrayed as a theory of cerebral cortex function, animal and human studies over the last decade have robustly shown the ubiquitous presence of prediction error responses in several nuclei of the auditory, somatosensory, and visual subcortical pathways. In the auditory modality, prediction error is typically elicited using so-called oddball paradigms, in which sequences of repeated pure tones of the same pitch are substituted at unpredictable intervals by a tone of deviant frequency. Repeated sounds quickly become predictable and elicit decreasing prediction error; deviant tones break these predictions and elicit large prediction errors. The simplicity of the rules inducing predictability makes oddball paradigms agnostic about the origin of the predictions. Here, we introduce two possible models of the organizational topology of the predictive processing auditory network: (1) the global view, which assumes that predictions about the sensory input are generated at high-order levels of the cerebral cortex and transmitted in a cascade of generative models to the subcortical sensory pathways; and (2) the local view, which assumes that independent local models, computed using local information, are used to generate predictions at each processing stage. In the global view, information encoding is optimized globally, but sensory representations are biased along the entire brain according to the subjective views of the observer. The local view results in diminished coding efficiency but guarantees in return a robust encoding of the features of the sensory input at each processing stage. Although most experimental results to date are ambiguous in this respect, recent evidence favors the global model.
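The oddball paradigm described above can be sketched as a toy simulation. This is purely illustrative: the tone frequencies, deviant probability, and the exponential error decay are assumptions for the sketch, not parameters of any actual experiment or neural model.

```python
import random

def oddball_sequence(n_tones=50, standard=1000, deviant=1200,
                     p_deviant=0.1, seed=0):
    """Generate a pure-tone oddball sequence: repeated standards of the
    same pitch, substituted at unpredictable intervals by a deviant."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_tones)]

def prediction_error(seq, decay=0.5):
    """Toy prediction-error signal: a repeated tone is increasingly
    predictable and elicits decreasing error; a pitch change breaks
    the prediction and resets the error to its maximum."""
    errors, err, prev = [], 1.0, None
    for tone in seq:
        if tone == prev:
            err *= decay   # repetition: prediction confirmed, error decays
        else:
            err = 1.0      # deviant (or first tone): large prediction error
        errors.append(err)
        prev = tone
    return errors
```

For example, `prediction_error([1000, 1000, 1000, 1200])` yields `[1.0, 0.5, 0.25, 1.0]`: the error decreases over the repeated standards and jumps back for the deviant. Note that this local-repetition rule is exactly what makes the paradigm agnostic about where the predictions originate.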

    Responses in left inferior frontal gyrus are altered for speech‐in‐noise processing, but not for clear speech in autism

    Introduction: Autistic individuals often have difficulties with recognizing what another person is saying in noisy conditions such as in a crowded classroom or a restaurant. The underlying neural mechanisms of this speech perception difficulty are unclear. In typically developed individuals, three cerebral cortex regions are particularly related to speech-in-noise perception: the left inferior frontal gyrus (IFG), the right insula, and the left inferior parietal lobule (IPL). Here, we tested whether responses in these cerebral cortex regions are altered in speech-in-noise perception in autism. Methods: Seventeen autistic adults and 17 typically developed controls (matched pairwise on age, sex, and IQ) performed an auditory-only speech recognition task during functional magnetic resonance imaging (fMRI). Speech was presented either with noise (noise condition) or without noise (no noise condition, i.e., clear speech). Results: In the left IFG, blood-oxygenation-level-dependent (BOLD) responses were higher in the control compared to the autism group for recognizing speech-in-noise compared to clear speech. For this contrast, both groups had similar response magnitudes in the right insula and left IPL. Additionally, we replicated previous findings that BOLD responses in speech-related and auditory brain regions (including bilateral superior temporal sulcus and Heschl's gyrus) for clear speech were similar in both groups and that voice identity recognition was impaired for clear and noisy speech in autism. Discussion: Our findings show that in autism, the processing of speech is particularly reduced under noisy conditions in the left IFG, a dysfunction that might be important in explaining restricted speech comprehension in noisy environments.

    Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia

    Conspecifics can be recognized from either the face or the voice alone. However, person identity information is rarely encountered in purely unimodal situations and there is increasing evidence that the face and voice interact in neurotypical identity processing. Conversely, developmental deficits have been observed that seem to be selective for face and voice recognition, developmental prosopagnosia and developmental phonagnosia, respectively. To date, studies on developmental prosopagnosia and phonagnosia have largely centred on within modality testing. Here, we review evidence from a small number of behavioural and neuroimaging studies which have examined the recognition of both faces and voices in these cohorts. A consensus from the findings is that, when tested in purely unimodal conditions, voice-identity processing appears normal in most cases of developmental prosopagnosia, as does face-identity processing in developmental phonagnosia. However, there is now first evidence that the multisensory nature of person identity impacts on identity recognition abilities in these cohorts. For example, unlike neurotypicals, auditory-only voice recognition is not enhanced in developmental prosopagnosia for voices which have been previously learned together with a face. This might also explain why the recognition of personally familiar voices is poorer in developmental prosopagnosics, compared to controls. In contrast, there is evidence that multisensory interactions might also lead to compensatory mechanisms in these disorders. For example, in developmental phonagnosia, voice recognition may be enhanced if voices have been learned with a corresponding face. 
Taken together, the reviewed findings challenge traditional models of person recognition, which have assumed independence between face-identity and voice-identity processing, and rather support an audio-visual model of human communication that assumes direct interactions between voice and face processing streams. In addition, the reviewed findings open up novel empirical research questions and have important implications for potential training regimes for developmental prosopagnosia and phonagnosia.

    Dorsal‐movement and ventral‐form regions are functionally connected during visual‐speech recognition

    Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal‐movement and ventral‐form visual cortex regions. Here, we explored, for the first time, whether similar dorsal–ventral interactions (assessed via functional connectivity) might also exist for visual‐speech processing. We then examined whether altered dorsal–ventral connectivity is observed in adults with high‐functioning autism spectrum disorder (ASD), a disorder associated with impaired visual‐speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal‐movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral‐form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT—right OFA, left TVSA—left FFA). The results confirmed our hypothesis that functional connectivity between dorsal‐movement and ventral‐form regions exists during visual‐speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face‐to‐face communication.

    Radial glia in the proliferative ventricular zone of the embryonic and adult turtle, Trachemys scripta elegans.

    To better understand the role of radial glial (RG) cells in the evolution of the mammalian cerebral cortex, we investigated the role of RG cells in the dorsal cortex and dorsal ventricular ridge of the turtle, Trachemys scripta elegans. Unlike in mammals, the glial architecture of the adult reptile consists mainly of ependymoradial glia, which share features with mammalian RG cells and which may contribute to neurogenesis that continues throughout the lifespan of the turtle. To evaluate the morphology and proliferative capacity of ependymoradial glia (here referred to as RG cells) in the dorsal cortex of the embryonic and adult turtle, we adapted the cortical electroporation technique, commonly used in rodents, to the turtle telencephalon. Here, we demonstrate the morphological and functional characteristics of RG cells in the developing turtle dorsal cortex. We show that cell division occurs both at the ventricle and away from the ventricle, that RG cells undergo division at the ventricle during neurogenic stages of development, and that mitotic Tbr2+ precursor cells, a hallmark of the mammalian SVZ, are present in the turtle cortex. In the adult turtle, we show that RG cells encompass a morphologically heterogeneous population, particularly in the subpallium, where proliferation is most prevalent. One RG subtype is similar to RG cells in the developing mammalian cortex, while two other RG subtypes appear to be distinct from those seen in mammals. We propose that the different subtypes of RG cells in the adult turtle perform distinct functions.

    Deficits in voice-identity processing: Acquired and developmental phonagnosia

    The voice contains elementary social communication cues, conveying speech as well as paralinguistic information pertaining to the emotional state and the identity of the speaker. In contrast to vocal-speech and vocal-emotion processing, voice-identity processing has been less explored. This seems surprising, given the day-to-day significance of person recognition by voice. A valuable approach to unravel how voice-identity processing is accomplished is to investigate people who have a selective deficit in recognising voices. Such a deficit has been termed phonagnosia. In the present chapter, we provide a systematic overview of studies on phonagnosia and how they relate to current neurocognitive models of person recognition. We review studies that have characterised people who suffer from phonagnosia following brain damage (i.e. acquired phonagnosia) and also studies that have examined phonagnosia cases without apparent brain lesion (i.e. developmental phonagnosia). Based on the reviewed literature, we emphasise the need for a careful behavioural characterisation of phonagnosia cases by taking into consideration the multistage nature of voice-identity processing and the resulting behavioural phonagnosia subtypes.

    An information theoretic characterisation of auditory encoding.

    The entropy metric derived from information theory provides a means to quantify the amount of information transmitted in acoustic streams like speech or music. By systematically varying the entropy of pitch sequences, we sought brain areas where neural activity and energetic demands increase as a function of entropy. Such a relationship is predicted to occur in an efficient encoding mechanism that uses fewer computational resources when less information is present in the signal: we specifically tested the hypothesis that such a relationship is present in the planum temporale (PT). In two convergent functional MRI studies, we demonstrated this relationship in PT for encoding, while furthermore showing that a distributed fronto-parietal network for retrieval of acoustic information is independent of entropy. The results establish PT as an efficient neural engine that demands fewer computational resources to encode redundant signals than those with high information content.
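The entropy metric in question is standard Shannon entropy. As a minimal sketch (the symbol-frequency estimate below is an assumption for illustration; the study's actual stimulus construction is not specified here), the entropy of a pitch sequence can be computed as H = -Σ p·log2(p) over the relative frequencies of its pitches:

```python
import math
from collections import Counter

def pitch_entropy(sequence):
    """Shannon entropy (in bits) of a pitch sequence, H = -sum p*log2(p).
    Low entropy = redundant, predictable signal; high entropy = more
    information for the encoder to transmit."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A fully repetitive sequence such as `['A'] * 10` has 0 bits of entropy, a two-pitch alternation approaches 1 bit per tone, and a sequence drawn uniformly from 8 pitches approaches 3 bits per tone; the hypothesis above predicts that PT activity scales with this quantity.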

    Dysfunction of the visual sensory thalamus in developmental dyslexia

    Developmental dyslexia (DD) is a reading disorder with a prevalence of 5-10%. Neuroscientific research has typically focused on explaining DD symptoms based on pathophysiological changes in the cerebral cortex. However, DD might also be associated with alterations in the sensory thalami, central subcortical stations of the sensory pathways. A post-mortem study on the visual sensory thalamus (lateral geniculate nucleus, LGN) showed histopathological changes in the magnocellular (M-LGN), but not in the parvocellular (P-LGN), subdivisions. M-LGN and P-LGN have different functional properties and belong to two different visual systems. Whether M-LGN alterations also exist in DD in vivo is unclear, and the potential relevance of M-LGN alterations to DD symptoms is unknown. This lack of knowledge is partly due to considerable technical challenges in investigating LGN subdivisions non-invasively in humans. Here, we employed recent advances in high-field 7 Tesla functional magnetic resonance imaging (fMRI) to map the M- and P-LGN in vivo in DD adults (n=26) and matched controls (n=28). We show that (i) M-LGN responses differ between DD and control participants, (ii) these differences are more pronounced in male than in female DD participants, and (iii) M-LGN alterations predict a core symptom of DD, rapid naming ability, in male DD participants only. Our results provide a first functional interpretation of M-LGN changes in DD and support DD theories that propose a direct relevance of sensory thalamus alterations for DD symptoms. In addition, the sex-specific behavioral relevance of M-LGN alterations within DD calls for taking sex differences into account when planning brain-based therapeutic interventions.

    Abstract rules drive adaptation in the subcortical sensory pathway

    The subcortical sensory pathways are the fundamental channels for mapping the outside world to our minds. Sensory pathways efficiently transmit information by adapting neural responses to the local statistics of the sensory input. The long-standing mechanistic explanation for this adaptive behaviour is that neural activity decreases with increasing regularities in the local statistics of the stimuli. An alternative account is that neural coding is directly driven by expectations of the sensory input. Here, we used abstract rules to manipulate expectations independently of local stimulus statistics. The ultra-high-field functional-MRI data show that abstract expectations can drive the response amplitude to tones in the human auditory pathway. These results provide the first unambiguous evidence of abstract processing in a subcortical sensory pathway. They indicate that the neural representation of the outside world is altered by our prior beliefs even at initial points of the processing hierarchy.
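The logic of dissociating abstract expectations from local stimulus statistics can be illustrated with a toy sketch. Everything below is an assumption for illustration (the rule period, the deviant rate, and the error measure are invented), not the study's actual design: the point is that rule-based and random deviant placement can match in overall deviant rate, yet only a listener holding the abstract rule can anticipate the rule-based deviants.

```python
import random

def deviant_positions(n_tones=40, period=5, rule_based=True, seed=1):
    """Place deviants either by an abstract rule (every `period`-th tone)
    or at random positions with the same overall deviant rate, so that
    local stimulus statistics match across the two conditions."""
    if rule_based:
        return set(range(period - 1, n_tones, period))
    rng = random.Random(seed)
    return set(rng.sample(range(n_tones), n_tones // period))

def expectation_error(n_tones, deviants, period=5):
    """Surprise for a listener who knows the abstract rule: count the
    positions where the actual deviants disagree with the rule-based
    expectation (symmetric difference of the two position sets)."""
    expected = set(range(period - 1, n_tones, period))
    return sum(1 for i in range(n_tones)
               if (i in deviants) != (i in expected))
```

Under this sketch, rule-conforming sequences produce zero expectation error despite containing the same proportion of deviants as the random sequences, so any neural response difference between conditions must reflect the abstract expectation rather than local statistics.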