
    Lower Beta: A Central Coordinator of Temporal Prediction in Multimodal Speech

    How the brain decomposes and integrates information in multimodal speech perception is linked to oscillatory dynamics. However, how speech processing takes advantage of the redundancy between sensory modalities, and how this translates into specific oscillatory patterns, remains unclear. We address the role of lower beta activity (~20 Hz), generally associated with motor functions, as an amodal central coordinator that receives bottom-up delta-theta copies from specific sensory areas and generates top-down temporal predictions for auditory entrainment. Dissociating temporal prediction from entrainment may explain how and why visual input benefits speech processing rather than adding cognitive load in multimodal speech perception. On the one hand, body movements convey prosodic and syllabic features at delta and theta rates (i.e., 1–3 Hz and 4–7 Hz). On the other hand, the natural precedence of visual input before auditory onsets may prepare the brain to anticipate and facilitate the integration of auditory delta-theta copies of the prosodic-syllabic structure. Here, we identify three fundamental criteria, based on recent evidence and hypotheses, that support the notion that the lower motor beta frequency may play a central and generic role in temporal prediction during speech perception. First, beta activity must respond to rhythmic stimulation across modalities. Second, beta power must respond to biological motion and speech-related movements conveying temporal information in multimodal speech processing. Third, temporal prediction may recruit a communication loop between motor and primary auditory cortices (PACs) via delta-to-beta cross-frequency coupling. We discuss evidence related to each criterion and extend these concepts to a beta-motivated framework of multimodal speech processing.
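    For the third criterion, delta-to-beta cross-frequency coupling can be quantified in several standard ways; below is a minimal sketch of one of them (the mean-vector-length approach), not the authors' pipeline. The signal name, sampling rate, band edges, and synthetic test data are all illustrative assumptions.

```python
# A minimal sketch, assuming a single-channel EEG array `eeg` sampled at `fs` Hz;
# band edges and the synthetic test signal are illustrative, not from the article.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def delta_beta_coupling(eeg, fs):
    """Mean-vector-length coupling of delta phase (1-3 Hz) to beta amplitude (~20 Hz)."""
    delta_phase = np.angle(hilbert(bandpass(eeg, 1.0, 3.0, fs)))
    beta_amp = np.abs(hilbert(bandpass(eeg, 15.0, 25.0, fs)))
    # If beta amplitude clusters at a preferred delta phase, the mean vector is long.
    return np.abs(np.mean(beta_amp * np.exp(1j * delta_phase)))

# Synthetic check: beta bursts riding on the delta peaks should yield high coupling.
fs = 250
t = np.arange(0, 60, 1 / fs)
delta = np.sin(2 * np.pi * 2 * t)
eeg = delta + (1 + delta) * 0.3 * np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)
print(delta_beta_coupling(eeg, fs))
```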

    Activating words without language: beta and theta oscillations reflect lexical access and control processes during verbal and non-verbal object recognition tasks

    The intention to name an object modulates neural responses during object recognition tasks. However, the nature of this modulation is still unclear. We established whether a core operation in language, i.e. lexical access, can be observed even when the task does not require language (size-judgment task), and whether response selection in verbal versus non-verbal semantic tasks relies on similar neuronal processes. We measured and compared neuronal oscillatory activities and behavioral responses to the same set of pictures of meaningful objects, while manipulating the type of task participants had to perform (picture-naming versus size-judgment) and the type of stimuli used to measure lexical access (cognate versus non-cognate). Although activation of words was facilitated when the task required explicit word retrieval (picture-naming task), lexical access occurred even without the intention to name the object (non-verbal size-judgment task). Activation of words and response selection were accompanied by beta (25–35 Hz) desynchronization and theta (3–7 Hz) synchronization, respectively. These effects were observed in both picture-naming and size-judgment tasks, suggesting that words became activated via similar mechanisms, irrespective of whether the task involves language explicitly. This finding has important implications for understanding the link between core linguistic operations and performance in verbal and non-verbal semantic tasks.
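    For readers unfamiliar with how such effects are quantified, here is a hedged sketch of event-related (de)synchronization, i.e. the percentage change of band-limited power relative to a pre-stimulus baseline. The epoch layout, sampling rate, and window choices are assumptions, not the study's parameters.

```python
# A minimal sketch, assuming `epochs` of shape (n_trials, n_samples) time-locked
# to picture onset 0.5 s into the epoch; windows and names are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_ers(epochs, fs, band, baseline=(0.0, 0.5)):
    """Percent change of band power vs. pre-stimulus baseline (ERD < 0, ERS > 0)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = np.abs(hilbert(filtfilt(b, a, epochs, axis=1), axis=1)) ** 2
    mean_power = power.mean(axis=0)                  # average over trials
    b0, b1 = (int(s * fs) for s in baseline)
    ref = mean_power[b0:b1].mean()                   # baseline power
    return 100.0 * (mean_power - ref) / ref

# e.g. erd_ers(epochs, fs=500, band=(25, 35)) for the beta desynchronization
# and erd_ers(epochs, fs=500, band=(3, 7)) for the theta synchronization above.
```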

    Auditory detection is modulated by theta phase of silent lip movements

    Audiovisual speech perception relies, among other things, on our expertise in mapping a speaker's lip movements onto speech sounds. This multimodal matching is facilitated by salient syllable features that align lip movements and acoustic envelope signals in the 4–8 Hz theta band. Although non-exclusive, the predominance of theta rhythms in speech processing has been firmly established by studies showing that neural oscillations track the acoustic envelope in the primary auditory cortex. Likewise, theta oscillations in the visual cortex entrain to lip movements, and the auditory cortex is recruited during silent speech perception. These findings suggest that neuronal theta oscillations may play a functional role in organising information flow across visual and auditory sensory areas. We presented silent speech movies while participants performed a pure-tone detection task to test whether entrainment to lip movements directs the auditory system and drives behavioural outcomes. We showed that auditory detection varied depending on the ongoing theta phase conveyed by lip movements in the movies. In a complementary experiment presenting the same movies while recording participants' electroencephalogram (EEG), we found that silent lip movements entrained neural oscillations in the visual and auditory cortices, with the visual phase leading the auditory phase. These results support the idea that the visual cortex, entrained by lip movements, filtered the sensitivity of the auditory cortex via theta phase synchronization.
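    A hedged sketch of the phase-binning logic such a result implies: single-trial detections are sorted by the theta phase of the lip signal at each tone onset. The input names and the binning scheme are assumptions, not the authors' exact analysis.

```python
# A hedged sketch, assuming `lip_signal` (lip aperture over time), `tone_samples`
# (tone-onset sample indices) and `hits` (NumPy array, 1 = detected) as inputs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_binned_accuracy(lip_signal, fs, tone_samples, hits, n_bins=8):
    """Detection rate per theta (4-8 Hz) phase bin of the lip signal at tone onset."""
    b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lip_signal)))
    trial_phase = phase[tone_samples]                      # ongoing phase at each tone
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(trial_phase, edges) - 1, 0, n_bins - 1)
    return np.array([hits[bins == k].mean() for k in range(n_bins)])

# A sinusoidal modulation of the returned rates across bins would indicate that
# lip-conveyed theta phase gates auditory detection, as the abstract reports.
```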

    Left Motor Delta Oscillations Reflect Asynchrony Detection in Multisensory Speech Perception

    During multisensory speech perception, slow delta oscillations (~1–3 Hz) in the listener's brain synchronize with the speech signal, likely engaging in speech signal decomposition. Notable fluctuations in the speech amplitude envelope, reflecting speaker prosody, temporally align with articulatory and body gestures, and both provide complementary cues that temporally structure speech. Further, delta oscillations in the left motor cortex appear to align with speech and musical beats, suggesting a possible role in the temporal structuring of (quasi-)rhythmic stimulation. We extended the role of delta oscillations to audio-visual asynchrony detection as a test case of the temporal analysis of multisensory prosody fluctuations in speech. We recorded EEG responses in an audio-visual asynchrony detection task while participants watched videos of a speaker. We filtered the speech signal to remove verbal content and examined how visual and auditory prosodic features temporally (mis-)align. Results confirmed (i) that participants accurately detected audio-visual asynchrony, (ii) that delta power in the left motor cortex increased in response to audio-visual asynchrony, with the difference in delta power between asynchronous and synchronous conditions predicting behavioural performance, and (iii) that delta-beta coupling in the left motor cortex decreased when listeners could not accurately map visual and auditory prosodies. Finally, both behavioural and neurophysiological effects were altered when the speaker's face was degraded by a visual mask. Together, these findings suggest that motor delta oscillations support asynchrony detection of multisensory prosodic fluctuations in speech. SIGNIFICANCE STATEMENT: Speech perception is facilitated by regular prosodic fluctuations that temporally structure the auditory signal. Auditory speech processing involves the left motor cortex and associated delta oscillations. However, visual prosody (i.e., a speaker's body movements) complements auditory prosody, and it is unclear how the brain temporally analyses different prosodic features in multisensory speech perception. We combined an audio-visual asynchrony detection task with electroencephalographic recordings to investigate how delta oscillations support the temporal analysis of multisensory speech. Results confirmed that asynchrony detection between visual and auditory prosodies leads to increased delta power in the left motor cortex, which correlates with performance. We conclude that delta oscillations are recruited to resolve temporal asynchrony in multisensory speech perception.
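    A minimal sketch of the kind of power contrast and brain-behaviour correlation reported here, under assumed data structures (per-participant epoch arrays per condition and a behavioural accuracy vector); this is not the authors' exact pipeline.

```python
# A minimal sketch, assuming per-participant epoch arrays `asyn` and `sync` of
# shape (n_trials, n_samples) and an `accuracy` vector; names are hypothetical.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def delta_power(epochs, fs, band=(1.0, 3.0)):
    """Mean delta-band power across trials; epochs: (n_trials, n_samples)."""
    f, pxx = welch(epochs, fs=fs, nperseg=int(2 * fs), axis=1)   # 0.5 Hz resolution
    sel = (f >= band[0]) & (f <= band[1])
    return pxx[:, sel].mean()

# One difference score per participant, then the brain-behaviour correlation:
# diff = [delta_power(asyn[p], fs) - delta_power(sync[p], fs) for p in range(n)]
# r, pval = pearsonr(diff, accuracy)
```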

    The tumoral A genotype of the MGMT rs34180180 single-nucleotide polymorphism in aggressive gliomas is associated with shorter patients' survival

    Malignant gliomas are the most common primary brain tumors. Grade III and IV gliomas harboring wild-type IDH1/2 are the most aggressive. In addition to surgery and radiotherapy, concomitant and adjuvant chemotherapy with temozolomide (TMZ) significantly improves overall survival (OS). The methylation status of the O-6-methylguanine-DNA methyltransferase (MGMT) promoter is predictive of TMZ response and a prognostic marker of cancer outcome. However, it is not precisely known which promoter regions' methylation correlates best with survival in aggressive glioma, nor whether the predictive value of promoter methylation status could be refined or improved by other MGMT-associated molecular markers. In a cohort of 87 malignant gliomas treated with radiotherapy and TMZ-based chemotherapy, we retrospectively determined the MGMT promoter methylation status, genotyped single-nucleotide polymorphisms (SNPs) in the promoter region, and quantified the MGMT mRNA expression level. These variables were correlated with each other and with patients' OS. We found that methylation of the CpG sites within MGMT exon 1 correlated best with OS and MGMT expression levels, and confirmed MGMT methylation as a stronger independent prognostic factor than MGMT transcription levels. Our main finding is that the presence of only the A allele at the rs34180180 SNP in the tumor was significantly associated with shorter OS, independently of the MGMT methylation status. In conclusion, in the clinic, rs34180180 SNP genotyping could improve the prognostic value of the MGMT promoter methylation assay in patients with aggressive glioma treated with TMZ. Funding: Fondation ARC pour la Recherche sur le Cancer (EML20120904843).
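    A hedged sketch of the survival modelling such a finding implies: a Cox proportional-hazards model testing whether the tumoral A-only genotype predicts OS independently of MGMT promoter methylation. All values and column names below are made-up toy data, not the study's cohort.

```python
# A minimal sketch with toy data; column names are hypothetical assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "os_months":         [8, 14, 22, 30, 9, 26, 40, 12, 18, 35],
    "death":             [1, 0, 1, 0, 1, 1, 0, 1, 1, 0],   # 1 = deceased
    "rs34180180_A_only": [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],   # 1 = only the A allele in tumor
    "mgmt_methylated":   [0, 0, 1, 1, 0, 1, 1, 0, 0, 1],   # 1 = methylated promoter
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # an HR > 1 for rs34180180_A_only would match the reported effect
```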

    Outcomes among oropharyngeal and oral cavity cancer patients treated with postoperative volumetric modulated arc therapy

    Background: Presently, there are few published reports on postoperative radiation therapy for oropharyngeal and oral cavity cancers treated with the IMRT/VMAT technique. This study aimed to assess the oncological outcomes of this population treated with postoperative VMAT in our institution, with a focus on loco-regional patterns of failure. Material and methods: Between 2011 and 2019, 167 patients were included (40% oropharyngeal and 60% oral cavity cancers). The median age was 60 years, and 64.2% were stage IV cancers. All patients had both T and N surgery; 34% had an R1 margin, 42% had perineural invasion (PNI), 72% had a positive neck dissection, and 42% had extranodal extension (ENE). All patients were treated with VMAT with a simultaneous integrated boost at three dose levels: 66 Gy in case of R1 margin and/or ENE, 59.4–60 Gy to the tumor bed, and 54 Gy to the prophylactic areas. Cisplatin was administered concomitantly when feasible in case of R1 margin and/or ENE. Results: The 1- and 2-year loco-regional control rates were 88.6% and 85.6%, respectively. Higher tumor stage (T3/T4), the presence of PNI, and time from surgery >45 days were significant predictive factors of worse loco-regional control in multivariate analysis (p=0.02, p=0.04, and p=0.02). There were 17 local recurrences: 11 (64%) were considered in-field, 4 (24%) marginal, and 2 (12%) out-of-field. There were 9 isolated regional recurrences: 8 (89%) were considered in-field and 1 (11%) out-of-field. The 1- and 2-year disease-free survival (DFS) rates were 78.9% and 71.8%, respectively. The 1- and 2-year overall survival (OS) rates were 88.6% and 80%, respectively. Higher tumor stage (T3/T4) and the presence of ENE were the two prognostic factors significantly associated with worse DFS and OS in multivariate analysis. Conclusion: Our outcomes for postoperative VMAT for oral cavity and oropharyngeal cancers are encouraging, with high rates of loco-regional control. However, the management of ENE still appears challenging.
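    For context, the quoted 1- and 2-year rates are Kaplan-Meier estimates; below is a minimal sketch with made-up follow-up data, not the study's cohort.

```python
# A minimal sketch with illustrative, made-up follow-up data.
import numpy as np
from lifelines import KaplanMeierFitter

t = np.array([6.0, 14.0, 25.0, 30.0, 9.0, 40.0])   # months to failure or last follow-up
e = np.array([1, 0, 0, 1, 0, 0])                   # 1 = loco-regional failure observed

kmf = KaplanMeierFitter()
kmf.fit(t, event_observed=e, label="loco-regional control")
print(kmf.survival_function_at_times([12, 24]))    # 1- and 2-year control estimates
```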

    Designing the city around railway stations: final report of the Bahn.Ville 2 project on rail-oriented urban development

    Experimenting with new ways of approaching urban planning and development around railway stations: that is the objective of the Franco-German project Bahn.Ville 2, an action-research project that aims to promote "rail-oriented urban development". The ambitions of this project, carried out over the 2007–2009 period, were to capitalise on the investments made in regional suburban railway lines through accompanying urban-planning measures, to optimise the conditions of access to the stations on these lines, and to improve the quality of service provided to users in the interchange areas around these stations. The aim was to test the conditions for implementing rail-oriented urban development.

    Beat gestures and speech processing: when prosody extends to the speaker's hands

    Speakers naturally accompany their speech with hand gestures and extend the auditory prosody to the visual modality through rapid beat gestures that help them structure their narrative and emphasize relevant information. The present thesis aimed to investigate beat gestures and their neural correlates on the listener's side. We developed a naturalistic approach combining political discourse presentations with neuroimaging techniques (ERPs, EEG and fMRI) and behavioral measures. The main findings of the thesis first revealed that beat-speech processing engaged language-related areas, suggesting that gestures and auditory speech are part of the same language system. Second, the presence of beats modulated the auditory processing of affiliated words around their onsets and later at phonological stages. We concluded that listeners perceive beats as visual prosody and rely on their predictive value to anticipate relevant acoustic cues of the corresponding words, engaging local attentional processes.
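    A minimal sketch of the ERP logic the thesis describes, i.e. averaging EEG epochs time-locked to beat-affiliated word onsets and comparing conditions; the data layout and onset variables are assumptions.

```python
# A minimal sketch, assuming `eeg` of shape (n_channels, n_samples), a sampling
# rate `fs`, and word onset times in seconds; all names are hypothetical.
import numpy as np

def erp(eeg, fs, onsets_s, tmin=-0.2, tmax=0.8):
    """Baseline-corrected average over epochs time-locked to the given onsets.
    Onsets are assumed to lie well inside the recording."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, int(o * fs) + i0: int(o * fs) + i1] for o in onsets_s])
    epochs -= epochs[:, :, :-i0].mean(axis=2, keepdims=True)   # pre-onset baseline
    return epochs.mean(axis=0)                                 # (n_channels, n_times)

# erp_beat = erp(eeg, fs, beat_word_onsets); erp_ctrl = erp(eeg, fs, control_onsets)
# Differences around word onset would reflect the modulation the thesis reports.
```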

    Auditory detection is modulated by theta phase of silent lip movements

    Enjoying dialogues in movies comes from the effortless matching between a speaker's lip movements and speech sounds. Mouth opening indeed shares common features with the auditory speech envelope, and the two temporally align on dominant syllable rhythms occurring at theta rates. As previous evidence demonstrated that perception of lip movements entrains theta oscillations in the visual cortex and recruits the auditory network even in the absence of sound, we addressed a naturally following question: does perception of lip movements functionally affect auditory processing through theta oscillation entrainment? Here, we hypothesize that lip movements shape speech perception through a general mechanism of theta synchronization between the visual and auditory systems, which can extend beyond multimodal speech. If so, behavioural outcomes relying on purely auditory processing should be driven by visual entrainment even in cases without multisensory integration. Further, visual and auditory areas should synchronize their oscillations on theta phase to reflect visual-to-auditory communication. In the present study, we adapted a simple auditory tone detection paradigm in which soundtracks were accompanied by silent movies focusing on speakers' faces to demonstrate that (1) visual entrainment builds up with the accumulation of lip information and needs time to tune the auditory system, and (2) consequently, auditory performance improves with time as well. In a separate procedure presenting silent movies combined with electroencephalogram (EEG) recording, we demonstrated that (3) both visual and auditory areas were recruited during silent lip perception and synchronized their theta oscillations.
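    One standard way to index the visual-auditory theta synchronization described above is the phase-locking value (PLV) between a visual and an auditory channel; the sketch below is a hedged illustration with assumed inputs, not the study's method.

```python
# A hedged sketch, assuming `visual` and `auditory` are single-channel EEG
# time series sampled at `fs` Hz; band edges are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(visual, auditory, fs, band=(4.0, 8.0)):
    """PLV in [0, 1]: 1 = perfectly constant theta phase lag between areas."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ph_v = np.angle(hilbert(filtfilt(b, a, visual)))
    ph_a = np.angle(hilbert(filtfilt(b, a, auditory)))
    return np.abs(np.mean(np.exp(1j * (ph_v - ph_a))))

# The sign of np.angle(np.mean(np.exp(1j * (ph_v - ph_a)))) indicates which
# area leads; the related study above reports visual phase leading auditory.
```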