
    A voice without a mouth no more: The neurobiology of language and consciousness

    Most research on the neurobiology of language ignores consciousness, and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of the neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery' distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are reported, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' and has implications for a mechanistic account of mental health and wellbeing.

    The interrelationship between the face and vocal tract configuration during audiovisual speech

    It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur because both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal component analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A "Bubbles" technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
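    To make the cross-modal reconstruction idea concrete, here is a minimal sketch (not the authors' pipeline) of joint PCA over paired face-video and vocal-tract MR frames, where one modality is rebuilt from the other via the shared components. The array names, sizes, and random data are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: one row per paired video/MR frame.
# face:  (n_frames, n_face_pixels)  flattened face-video frames
# vocal: (n_frames, n_mr_pixels)    flattened vocal-tract MR frames
rng = np.random.default_rng(0)
face = rng.standard_normal((200, 500))
vocal = rng.standard_normal((200, 300))

joint = np.hstack([face, vocal])           # joint representation of both modalities
pca = PCA(n_components=20).fit(joint)      # shared components capturing joint variation

W = pca.components_                        # (20, n_face_pixels + n_mr_pixels)
W_face, W_vocal = W[:, :500], W[:, 500:]
mean_face, mean_vocal = pca.mean_[:500], pca.mean_[500:]

def reconstruct_vocal_from_face(face_frame):
    """Estimate joint-PCA scores from the face block only, then rebuild the MR block."""
    scores, *_ = np.linalg.lstsq(W_face.T, face_frame - mean_face, rcond=None)
    return scores @ W_vocal + mean_vocal

est = reconstruct_vocal_from_face(face[0])
fidelity = np.corrcoef(est, vocal[0])[0, 1]   # correlation as a simple fidelity index
print(f"reconstruction fidelity (r): {fidelity:.2f}")
```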

    Evidence of Human-Like Visual-Linguistic Integration in Multimodal Large Language Models During Predictive Language Processing

    The advanced language processing abilities of large language models (LLMs) have stimulated debate over their capacity to replicate human-like cognitive processes. One differentiating factor between language processing in LLMs and humans is that human language input is often grounded in several perceptual modalities, whereas most LLMs process solely text-based information. Multimodal grounding allows humans to integrate, for example, visual context with linguistic information and thereby place constraints on the space of upcoming words, reducing cognitive load and improving comprehension. Recent multimodal LLMs (mLLMs) combine a visual-linguistic embedding space with a transformer-type attention mechanism for next-word prediction. Here we ask whether predictive language processing based on multimodal input in mLLMs aligns with humans. Two hundred participants watched short audio-visual clips and estimated the predictability of an upcoming verb or noun. The same clips were processed by the mLLM CLIP, with predictability scores based on comparing image and text feature vectors. Eye-tracking was used to estimate which visual features participants attended to, and CLIP's visual attention weights were recorded. We find that alignment of predictability scores was driven by the multimodality of CLIP (no alignment for a unimodal state-of-the-art LLM) and by the attention mechanism (no alignment when attention weights were perturbed or when the same input was fed to a multimodal model without attention). We further find a significant spatial overlap between CLIP's visual attention weights and human eye-tracking data. Results suggest that comparable processes of integrating multimodal information, guided by attention to relevant visual features, support predictive language processing in mLLMs and humans.
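    As a minimal sketch of how image-text comparison in CLIP can yield word-predictability scores (the paper's exact scoring pipeline is not specified here), the following uses the open-source CLIP interface in the `transformers` library; the checkpoint name, image file, and candidate verbs are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative assumptions: checkpoint, frame from a clip, candidate completions.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

frame = Image.open("clip_final_frame.jpg")            # hypothetical last frame of a clip
candidates = ["eat", "throw", "read", "open"]         # hypothetical upcoming verbs
prompts = [f"The person is about to {w} it." for w in candidates]

inputs = processor(text=prompts, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    out = model(**inputs)

# logits_per_image holds scaled cosine similarities between the frame and each prompt;
# a softmax over them gives a relative "predictability" score per candidate word.
scores = out.logits_per_image.softmax(dim=-1).squeeze(0)
for word, p in zip(candidates, scores.tolist()):
    print(f"{word}: {p:.3f}")
```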

    The intimacy of psychedelics, language, and consciousness. An interview with Jeremy I. Skipper by Leor Roseman

    This interview explores the intimate relationship between language and consciousness, drawing insights from aphasia phenomenology, psychedelic experiences, and neuroscientific theories. Jeremy I. Skipper, a cognitive neuroscientist, argues that language is not merely a tool for reporting conscious experiences but plays a generative role in shaping and sustaining consciousness itself. He critiques localizationist models of language processing, emphasising the context-dependence and dynamic recruitment of brain regions. Parallels are drawn between the experiences of aphasic patients, who report a loss of self-narrative and increased connectedness, and the phenomenology of psychedelic states, which often involve a dissolution of linguistic categories and a sense of ineffability. Skipper outlines potential neural mechanisms linking language disruption to psychedelic experiences and discusses the UNITy Project, aimed in part at studying post-acute meaning-making processes and predicting changes in language and well-being after psychedelic sessions.

    Effect of remote ischaemic conditioning on clinical outcomes in patients with acute myocardial infarction (CONDI-2/ERIC-PPCI): a single-blind randomised controlled trial.

    BACKGROUND: Remote ischaemic conditioning with transient ischaemia and reperfusion applied to the arm has been shown to reduce myocardial infarct size in patients with ST-elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PPCI). We investigated whether remote ischaemic conditioning could reduce the incidence of cardiac death and hospitalisation for heart failure at 12 months. METHODS: We did an international investigator-initiated, prospective, single-blind, randomised controlled trial (CONDI-2/ERIC-PPCI) at 33 centres across the UK, Denmark, Spain, and Serbia. Patients (age >18 years) with suspected STEMI and who were eligible for PPCI were randomly allocated (1:1, stratified by centre with a permuted block method) to receive standard treatment (including a sham simulated remote ischaemic conditioning intervention at UK sites only) or remote ischaemic conditioning treatment (intermittent ischaemia and reperfusion applied to the arm through four cycles of 5-min inflation and 5-min deflation of an automated cuff device) before PPCI. Investigators responsible for data collection and outcome assessment were masked to treatment allocation. The primary combined endpoint was cardiac death or hospitalisation for heart failure at 12 months in the intention-to-treat population. This trial is registered with ClinicalTrials.gov (NCT02342522) and is completed. FINDINGS: Between Nov 6, 2013, and March 31, 2018, 5401 patients were randomly allocated to either the control group (n=2701) or the remote ischaemic conditioning group (n=2700). After exclusion of patients upon hospital arrival or loss to follow-up, 2569 patients in the control group and 2546 in the intervention group were included in the intention-to-treat analysis. At 12 months post-PPCI, the Kaplan-Meier-estimated frequencies of cardiac death or hospitalisation for heart failure (the primary endpoint) were 220 (8·6%) patients in the control group and 239 (9·4%) in the remote ischaemic conditioning group (hazard ratio 1·10 [95% CI 0·91-1·32], p=0·32 for intervention versus control). No important unexpected adverse events or side effects of remote ischaemic conditioning were observed. INTERPRETATION: Remote ischaemic conditioning does not improve clinical outcomes (cardiac death or hospitalisation for heart failure) at 12 months in patients with STEMI undergoing PPCI. FUNDING: British Heart Foundation, University College London Hospitals/University College London Biomedical Research Centre, Danish Innovation Foundation, Novo Nordisk Foundation, TrygFonden.
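    For readers unfamiliar with this style of endpoint analysis, here is a minimal sketch (not the trial's statistical code) of estimating Kaplan-Meier event-free survival and a Cox hazard ratio for a time-to-event primary endpoint with the `lifelines` package; the data frame and its columns are illustrative assumptions, not trial data.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical columns: 'months' = follow-up time, 'event' = cardiac death or heart-failure
# hospitalisation (1/0), 'ric' = 1 for remote ischaemic conditioning, 0 for control.
df = pd.DataFrame({
    "months": [12, 3, 12, 7, 12, 12, 5, 12],
    "event":  [0, 1, 0, 1, 0, 0, 1, 0],
    "ric":    [1, 1, 0, 0, 1, 0, 1, 0],
})

# Kaplan-Meier estimate of event-free survival in each arm.
km = KaplanMeierFitter()
for arm, grp in df.groupby("ric"):
    km.fit(grp["months"], grp["event"], label=f"ric={arm}")
    print(km.survival_function_.tail(1))

# Cox proportional hazards model: exp(coef) for 'ric' is the hazard ratio.
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()
```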

    The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception

    Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires

    The Magic, Memory, and Curiosity fMRI Dataset of People Viewing Magic Tricks

    Videos of magic tricks offer many opportunities to study the human mind. They violate the expectations of the viewer (i.e., causing strong prediction errors), misdirect attention, and elicit a variety of epistemic emotions such as surprise and curiosity. Herein we describe and share the Magic, Memory, and Curiosity (MMC) dataset, in which 50 participants watched 36 magic tricks that were filmed and edited specifically for functional magnetic resonance imaging (fMRI) experiments. The MMC dataset includes a contextual incentive manipulation, curiosity ratings for the magic tricks, as well as incidental memory performance tested a week later. fMRI data were acquired before, during, and after learning. We show that both behavioural and fMRI data are of high quality, as indicated by basic validation analyses, i.e., variance decomposition as well as intersubject correlation and seed-based functional connectivity, respectively. The richness and complexity of the MMC dataset will allow researchers to explore dynamic cognitive and motivational processes from various angles.
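    To illustrate one of the validation measures named above, here is a minimal sketch of leave-one-out intersubject correlation (ISC) over simulated regional time courses; the array shapes and data are illustrative assumptions, not the dataset itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data: (n_subjects, n_timepoints) BOLD time course from one region
# while all subjects watch the same magic-trick video.
shared = rng.standard_normal(300)                       # stimulus-driven signal
data = shared + 0.8 * rng.standard_normal((50, 300))    # shared signal + subject noise

def leave_one_out_isc(x):
    """Correlate each subject's time course with the mean of the remaining subjects."""
    n = x.shape[0]
    isc = np.empty(n)
    for i in range(n):
        others = np.delete(x, i, axis=0).mean(axis=0)
        isc[i] = np.corrcoef(x[i], others)[0, 1]
    return isc

isc = leave_one_out_isc(data)
print(f"mean ISC: {isc.mean():.2f}")
```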

    Memories of Hand Movements are Tied to Speech Through Learning

    Hand movements frequently occur with speech in the context of co-speech gestures. The extent to which the memories that guide co-speech hand movements are tied to the speech they occur with is unclear. To test this, we paired the acquisition of a new hand movement with speech. Thirty participants adapted a ballistic hand movement to a visuomotor rotation either in isolation or while producing a word in time with their movements. Within participants, the after-effect of adaptation (i.e., the motor memory) was examined with or without coincident speech. After-effects were greater for hand movements produced in the context in which adaptation occurred (i.e., with or without speech). In a second experiment, thirty new participants adapted a hand movement while saying the words "tap" or "hit". After-effects were greater when hand movements occurred with the specific word produced during adaptation. Results demonstrate that memories of co-speech hand movements are partially tied to the speech they are learned with. The findings have implications for theories of sensorimotor control and our understanding of the relationship between gestures, speech, and meaning.
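    A minimal sketch, with fabricated reach angles rather than the study's data, of how a visuomotor-rotation after-effect is commonly quantified: the shift in reach direction during post-adaptation trials relative to baseline, compared across contexts.

```python
import numpy as np

# Hypothetical reach directions (degrees) relative to the target.
baseline = np.array([1.2, -0.5, 0.8, 0.1])             # before adaptation
washout_matched = np.array([9.5, 8.7, 10.2, 9.1])       # tested in the adaptation context
washout_mismatched = np.array([5.1, 4.8, 6.0, 5.4])     # tested in the other context

after_effect_matched = washout_matched.mean() - baseline.mean()
after_effect_mismatched = washout_mismatched.mean() - baseline.mean()
print(f"after-effect, matched context:    {after_effect_matched:.1f} deg")
print(f"after-effect, mismatched context: {after_effect_mismatched:.1f} deg")
```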