118 research outputs found

    Neural pathways for visual speech perception

    This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so, and that there are visual modality-specific representations of speech qua speech in higher-level visual brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

    Sensorimotor Modulations by Cognitive Processes During Accurate Speech Discrimination: An EEG Investigation of Dorsal Stream Processing

    Internal models mediate the transmission of information between anterior and posterior regions of the dorsal stream in support of speech perception, though it remains unclear how this mechanism responds to cognitive processes in service of task demands. The purpose of the current study was to identify the influences of attention and working memory on sensorimotor activity across the dorsal stream during speech discrimination, with set size and signal clarity employed to modulate stimulus predictability and the time course of increased task demands, respectively. Independent Component Analysis of 64-channel EEG data identified bilateral sensorimotor mu and auditory alpha components from a cohort of 42 participants, indexing activity from anterior (mu) and posterior (auditory) aspects of the dorsal stream. Time-frequency (ERSP) analysis evaluated task-related changes in focal activation patterns, with phase coherence measures employed to track patterns of information flow across the dorsal stream. ERSP decomposition of mu clusters revealed event-related desynchronization (ERD) in the beta and alpha bands, which was interpreted as evidence of forward (beta) and inverse (alpha) internal modeling across the time course of perception events. Stronger pre-stimulus mu alpha ERD in small-set discrimination tasks was interpreted as more efficient attentional allocation due to the reduced sensory search space enabled by predictable stimuli. Mu-alpha and mu-beta ERD in peri- and post-stimulus periods were interpreted within the framework of Analysis by Synthesis as evidence of working memory activity for stimulus processing and maintenance, with weaker activity in degraded conditions suggesting that covert rehearsal mechanisms are sensitive to the quality of the stimulus being retained in working memory. Similar ERSP patterns across conditions, despite the differences in stimulus predictability and clarity, suggest that subjects may have adapted to the tasks. In light of this, future studies of sensorimotor processing should consider the ecological validity of the tasks employed, as well as the larger cognitive environment in which tasks are performed. The absence of interpretable patterns of mu-auditory coherence modulation across the time course of speech discrimination highlights the need for more sensitive analyses to probe dorsal stream connectivity.
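    The ERD measure at the heart of this abstract can be illustrated with a toy computation. Below is a minimal NumPy sketch (not the authors' ICA/ERSP pipeline) of an event-related spectral perturbation as baseline-normalized band power over a short-time FFT, where negative dB values correspond to ERD; all parameter values are illustrative.

```python
import numpy as np

def ersp(epochs, fs, baseline=(0.0, 0.2), win=0.25, step=0.05):
    """Event-related spectral perturbation in dB relative to baseline.

    epochs : array (n_trials, n_samples) of single-channel EEG epochs.
    Returns (times, freqs, ersp_db), with ersp_db of shape
    (n_freqs, n_windows). Negative values indicate event-related
    desynchronization (ERD), positive values synchronization (ERS).
    """
    n_trials, n_samples = epochs.shape
    nwin = int(win * fs)
    nstep = int(step * fs)
    starts = np.arange(0, n_samples - nwin + 1, nstep)
    freqs = np.fft.rfftfreq(nwin, 1 / fs)
    hann = np.hanning(nwin)
    power = np.zeros((len(freqs), len(starts)))
    for j, s in enumerate(starts):
        seg = epochs[:, s:s + nwin] * hann            # windowed segments
        spec = np.abs(np.fft.rfft(seg, axis=1)) ** 2  # per-trial power
        power[:, j] = spec.mean(axis=0)               # average over trials
    times = (starts + nwin / 2) / fs                  # window centres (s)
    base = (times >= baseline[0]) & (times < baseline[1])
    base_power = power[:, base].mean(axis=1, keepdims=True)
    return times, freqs, 10 * np.log10(power / base_power)
```

    For example, a 10 Hz oscillation whose amplitude drops after stimulus onset yields negative dB values in the alpha row of the output after that point.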

    Functional brain correlates of auditory verbal hallucinations in schizophrenia: a design of an fMRI study testing perceptual and cognitive models

    Master's thesis, Master in Cognitive Science and Language, Facultat de Filosofia, Universitat de Barcelona, 2020-2021. Tutors: Paola Fuentes Claramonte and Joana Rosselló Ximenes.

    Auditory verbal hallucinations (AVHs, or 'hearing voices') are a cardinal symptom of schizophrenia, and yet their biological basis has not been fully determined. As of now, theories that attempt to disentangle the origins of AVHs can be separated into two main types (or models): perceptual versus cognitive. The former considers AVHs to result from a malfunction in perceptual processing, namely abnormal activation in the auditory cortex, possibly with a top-down cognitive influence. The latter considers AVHs to arise from one of two cognitive processes: the misinterpretation of intrusive memories (positing that AVHs result from a breakdown in the processes monitoring the source of memories) or the malfunction of inner speech (positing that AVHs are due to dysfunctional speech monitoring). The current study aims to propose an adequate experimental design for a prospective fMRI study that will test both the perceptual and cognitive approaches, helping to fill gaps in the general framework for AVHs. Firstly, to test the perceptual model, an experimental design borrowed from Fuentes-Claramonte and colleagues (2021) will be adapted, with a modification controlling for motor activity. Secondly, to test one side of the cognitive model, the theory of intrusive memory, the experimental paradigm created by Fuentes-Claramonte and colleagues (2019) and validated by Martin-Subero and colleagues (2021) will be adapted to test schizophrenic patients with AVHs, which has not been done before. It will elicit negatively valenced autobiographical memories, which have been shown to activate parts of the default mode network, a circuit thought to be impaired in schizophrenia.
    Thirdly, to test the other side of the cognitive model, the theory of inner speech, a new experimental paradigm will be proposed, the Rhyming task: a phonological encoding task known to activate brain regions involved in subvocal rehearsal and short-term storage of information. However, because stimuli for this task are lacking for the Spanish population, a pilot study (an online survey) was conducted, presenting healthy participants with pairs of objects (created partly from a personalized corpus) and asking them to perform three tasks: decide whether the names of the two objects rhyme; provide the name of each object; and rate each object on a 1-to-5 Likert scale to determine emotional valence. The results of the pilot study guided the selection of appropriate stimuli for the prospective imaging study. The proposed fMRI study tackles the biological basis of AVHs from different perspectives, helping to improve the lives of patients affected by this cardinal symptom and enabling future research to design appropriate clinical interventions by pinpointing the exact biological basis of hearing voices.

    From sequences to cognitive structures : neurocomputational mechanisms

    Ph.D. thesis. Understanding how the brain forms representations of structured information distributed in time is a challenging neuroscientific endeavour, necessitating computationally and neurobiologically informed study. Human neuroimaging evidence demonstrates engagement of a fronto-temporal network, including ventrolateral prefrontal cortex (vlPFC), during language comprehension. Corresponding regions are engaged when processing dependencies between word-like items in Artificial Grammar (AG) paradigms. However, the neurocomputations supporting dependency processing and sequential structure-building are poorly understood. This work aimed to clarify these processes in humans, integrating behavioural, electrophysiological and computational evidence. I devised a novel auditory AG task to assess simultaneous learning of dependencies between adjacent and non-adjacent items, incorporating learning aids including prosody, feedback, delineated sequence boundaries, staged pre-exposure, and variable intervening items. Behavioural data obtained in 50 healthy adults revealed strongly bimodal performance despite these cues. Notably, however, reaction times revealed sensitivity to the grammar even in low performers. Behavioural and intracranial electrode data were subsequently obtained in 12 neurosurgical patients performing this task. Despite chance behavioural performance, time- and time-frequency-domain electrophysiological analysis revealed selective responsiveness to sequence grammaticality in regions including vlPFC. I developed a novel neurocomputational model (VS-BIND: "Vector-symbolic Sequencing of Binding INstantiating Dependencies"), triangulating evidence to clarify putative mechanisms in the fronto-temporal language network. I then undertook multivariate analyses of the AG task neural data, revealing responses compatible with the presence of ordinal codes in vlPFC, consistent with VS-BIND.
    I also developed a novel method of causal analysis on multivariate patterns, representational Granger causality, capable of detecting the flow of distinct representations within the brain. This analysis suggested top-down transmission of syntactic predictions during the AG task, from vlPFC to auditory cortex, largely in the opposite direction to stimulus encodings, consistent with predictive coding accounts. It also suggested roles for the temporoparietal junction and frontal operculum during grammaticality processing, congruent with prior literature. This work provides novel insights into the neurocomputational basis of cognitive structure-building, generating hypotheses for future study and potentially contributing to AI and translational efforts. Funded by the Wellcome Trust and the European Research Council.
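    As a rough illustration of the kind of dependency structure such an AG task manipulates, the sketch below generates A-X-B sequences in which a non-adjacent pairing A_i ... B_i must hold across a variable intervening item; ungrammatical sequences break the pairing. The vocabularies and sequence length here are hypothetical, not the thesis's actual stimuli.

```python
import random

# Hypothetical item classes; the real task used auditory word-like items.
A = ["bo", "gi", "tu"]   # sequence-initial items
B = ["le", "na", "ro"]   # sequence-final items; grammar requires B_i after A_i
X = ["fe", "ki", "mu"]   # variable intervening items (no dependency)

def make_sequence(grammatical=True, rng=random):
    """Return a three-item sequence A_i, X, B_j.

    Grammatical sequences satisfy the non-adjacent dependency j == i;
    ungrammatical ones pick any mismatched j != i.
    """
    i = rng.randrange(len(A))
    filler = rng.choice(X)
    if grammatical:
        j = i
    else:
        j = (i + 1 + rng.randrange(len(B) - 1)) % len(B)  # guaranteed j != i
    return [A[i], filler, B[j]]
```

    A grammaticality judgement then reduces to checking whether the indices of the first and last items match, regardless of the intervening item.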

    Error Signals from the Brain: 7th Mismatch Negativity Conference

    The 7th Mismatch Negativity Conference presents the state of the art in methods, theory, and application (basic and clinical research) of the MMN (and related error signals of the brain). Moreover, there will be two pre-conference workshops: one on the design of MMN studies and the analysis and interpretation of MMN data, and one on the visual MMN (with 20 presentations). There will be more than 40 presentations on hot topics of MMN, grouped into thirteen symposia, and about 130 poster presentations. Keynote lectures by Kimmo Alho, Angela D. Friederici, and Israel Nelken will round off the program by covering topics related to and beyond MMN.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.

    Greater Pre-Stimulus Effective Connectivity from the Left Inferior Frontal Area to other Areas is Associated with Better Phonological Decoding in Dyslexic Readers

    Functional neuroimaging studies suggest that neural networks that subserve reading are organized differently in dyslexic readers (DRs) and typical readers (TRs), yet the hierarchical structure of these networks has not been well studied. We used Granger causality to examine the effective connectivity of the preparatory network that occurs prior to viewing a non-word stimulus that requires phonological decoding in 7 DRs and 10 TRs who were young adults. The neuromagnetic activity that occurred 500 ms prior to each rhyme trial was analyzed from sensors overlying the left and right inferior frontal areas (IFA), temporoparietal areas, and ventral occipital–temporal areas within the low, medium, and high beta and gamma sub-bands. A mixed-model analysis determined whether connectivity to or from the left and right IFAs differed across connectivity direction (into vs. out of the IFAs), brain areas, reading group, and/or performance. Results indicated that greater connectivity in the low beta sub-band from the left IFA to other cortical areas was significantly related to better non-word rhyme discrimination in DRs but not TRs. This suggests that the left IFA is an important cortical area involved in compensating for poor phonological function in DRs. We suggest that the left IFA activates a wider-than-usual network prior to each trial in the service of supporting otherwise effortful phonological decoding in DRs. The fact that the left IFA provides top-down activation both to posterior left hemisphere areas used by TRs for phonological decoding and to homologous right hemisphere areas is discussed. In contrast, within the high gamma sub-band, better performance was associated with decreased connectivity between the left IFA and other brain areas in both reading groups. Overly strong gamma connectivity during the pre-stimulus period may interfere with subsequent transient activation and deactivation of sub-networks once the non-word appears.
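    Granger causality, as used in this study, asks whether the past of one signal improves prediction of another beyond that signal's own past. Below is a minimal sketch with ordinary least-squares autoregressive models on two univariate signals; the study itself used sensor-level neuromagnetic data with band-specific and mixed-model analyses, so every name and parameter here is illustrative only.

```python
import numpy as np

def granger_f(x, y, order=5):
    """F-statistic for 'y Granger-causes x' via OLS AR models.

    Compares the residual sum of squares of an AR(order) model of x
    (restricted: x's own past only) against a model that also
    includes lags of y (full). A larger F means y's past helps
    predict x beyond x's own past. A toy sketch -- real pipelines
    add lag selection, detrending, and significance testing.
    """
    n = len(x)
    # lagged design matrices: column k holds the (k+1)-step lag
    X_own = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    X_oth = np.column_stack([y[order - k - 1:n - k - 1] for k in range(order)])
    target = x[order:]

    def rss(design):
        design = np.column_stack([np.ones(len(target)), design])
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_r = rss(X_own)                        # restricted model
    rss_f = rss(np.hstack([X_own, X_oth]))    # full model
    df2 = len(target) - 2 * order - 1
    return ((rss_r - rss_f) / order) / (rss_f / df2)
```

    On signals where x is a lagged copy of y plus noise, the statistic for the y-to-x direction is large while the reverse direction stays near 1, capturing the directional asymmetry that effective-connectivity analyses like this study's exploit.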

    The Tie between Action and Language Is in Our Imagination

    In this thesis, the embodied cognition proposal that action words are directly and automatically mapped into the perceiver's sensorimotor system, and understood via motor simulation, has been put under the lenses of neuropsychology, psychophysics, transcranial magnetic stimulation (TMS), and functional magnetic resonance imaging (fMRI) investigation. The objective was to establish whether the tie between language understanding and motor simulation is necessary for the former to be effective, to the extent that a virtual identity can be recognized between action and language systems.