14,214 research outputs found

    Improved status following behavioural intervention in a case of severe dysarthria with stroke aetiology

    There is little published intervention outcome literature concerning dysarthria acquired from stroke. Single case studies have the potential to provide more detailed specification and interpretation than is generally possible with larger participant numbers and are thus informative for clinicians who may deal with similar cases. Such research also contributes to the future planning of larger-scale investigations. Behavioural intervention is described which was carried out with a man with severe dysarthria following stroke, beginning at seven and ending at nine months after stroke. Pre-intervention stability between five and seven months contrasted with significant improvements post-intervention on listener-rated measures of word and reading intelligibility and communication effectiveness in conversation. A range of speech analyses was undertaken (comprising rate, pause and intonation characteristics in connected speech and phonetic transcription of single word production), with the aim of identifying components of speech which might explain the listeners’ perceptions of improvement. Pre- and post-intervention changes could be detected mainly in parameters related to utterance segmentation and intonation. The basis of improvement in dysarthria following intervention is complex, both in terms of the active therapeutic dimensions and also the specific speech alterations which account for changes to intelligibility and effectiveness. Single case results are not necessarily generalisable to other cases, and outcomes may be affected by participant factors and therapeutic variables which are not readily controllable.

    Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.
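
    As a loose illustration of what a directed-connectivity analysis of this kind can look like, the sketch below tests whether one simulated source time course predicts another using Granger causality. This is a generic stand-in rather than the study's own connectivity measure, and all signals, lags and parameters are placeholder assumptions.

```python
# A minimal sketch of directed connectivity between two source time courses using
# Granger causality (a generic stand-in; the study's own measure may differ).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 2000
temporal = rng.standard_normal(n)                         # placeholder: temporal-cortex source
frontal = np.roll(temporal, 3) + rng.standard_normal(n)   # placeholder: lagged frontal source

# Test whether the temporal source (second column) Granger-causes the frontal source (first column).
data = np.column_stack([frontal, temporal])
results = grangercausalitytests(data, maxlag=5)
p_value = results[5][0]["ssr_ftest"][1]
print(f"p-value at lag 5: {p_value:.4g}")
```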

    Delta (but not theta)-band cortical entrainment involves speech-specific processing

    Cortical oscillations phase-align to the quasi-rhythmic structure of the speech envelope. This speech-brain entrainment has been reported in two frequency bands, that is, both in the theta band (4-8 Hz) and in the delta band (<4 Hz). However, it is not clear whether these two phenomena reflect passive synchronization of the auditory cortex to the acoustics of the speech input, or whether they reflect higher processes involved in actively parsing speech information. Here, we report two magnetoencephalography experiments in which we contrasted cortical entrainment to natural speech with qualitatively different control conditions (Experiment 1: amplitude-modulated white noise; Experiment 2: spectrally rotated speech). We computed the coherence between the oscillatory brain activity and the envelope of the auditory stimuli. At the sensor level, we observed increased coherence in the delta and theta bands for all conditions in bilateral brain regions. However, only in the delta band (but not the theta band) was speech entrainment stronger than for either of the control auditory inputs. Source reconstruction in the delta band showed that speech, compared to the control conditions, elicited larger coherence in right superior temporal and left inferior frontal regions. In the theta band, no differential effects were observed for speech compared to the control conditions. These results suggest that whereas theta entrainment mainly reflects perceptual processing of the auditory signal, delta entrainment involves additional higher-order computations in the service of language processing.
    This work was partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO), the Agencia Estatal de Investigación (AEI) and the Fondo Europeo de Desarrollo Regional (FEDER) (grant PSI2015-65694-P, ‘Severo Ochoa’ programme SEV-2015-490 for Centres of Excellence in R&D), the Basque Government (grant PI_2016_1_0014), the ANR-10-LABX-0087 IEC and the ANR-10-IDEX-0001-02 PSL*. Further support was provided by the AThEME project funded by the European Commission 7th Framework Programme and the ERC-2011-ADG-295362 grant from the European Research Council. We would like to thank Margaret Gillon-Dowens and Sara Guediche for comments on previous versions of this article and Mathieu Bourguignon for useful advice on the present project. We would like to thank the whole BCBL research centre for the constant support for our research.
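
    A minimal sketch of the kind of envelope-brain coherence computation described above, assuming a single MEG sensor time series and a speech envelope already resampled to a common rate. The signals here are random placeholders, and the sampling rate, window length and band edges are illustrative assumptions rather than the authors' settings.

```python
# A minimal sketch (not the authors' pipeline) of speech-envelope/MEG coherence.
import numpy as np
from scipy.signal import coherence

fs = 200.0                      # assumed common sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of data

# Placeholder signals: replace with a real speech envelope and MEG sensor trace.
rng = np.random.default_rng(0)
envelope = rng.standard_normal(t.size)
meg = 0.5 * envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence spectrum between the envelope and the MEG signal.
f, coh = coherence(envelope, meg, fs=fs, nperseg=int(4 * fs))

# Average coherence in the delta (<4 Hz) and theta (4-8 Hz) bands, as in the abstract.
delta = coh[(f > 0) & (f < 4)].mean()
theta = coh[(f >= 4) & (f <= 8)].mean()
print(f"delta-band coherence: {delta:.3f}, theta-band coherence: {theta:.3f}")
```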

    Why the Left Hemisphere Is Dominant for Speech Production: Connecting the Dots

    Evidence from seemingly disparate areas of speech/language research is reviewed to form a unified theoretical account of why the left hemisphere is specialized for speech production. Research findings from studies investigating hemispheric lateralization of infant babbling, the primacy of the syllable in phonological structure, rhyming performance in split-brain patients, rhyming ability and phonetic categorization in children diagnosed with developmental apraxia of speech, rules governing exchange errors in spoonerisms, organizational principles of neocortical control of learned motor behaviors, and multi-electrode recordings of human neuronal responses to speech sounds are described and common threads highlighted. It is suggested that the emergence, in developmental neurogenesis, of a hard-wired, syllabically organized neural substrate representing the phonemic sound elements of one’s language, particularly the vocalic nucleus, is the crucial factor underlying the left hemisphere’s dominance for speech production.

    Cognitive-semiotic mechanisms of phraseme building

    The article deals with problems of idiom production and perception related to the emergence of pre-sign forms of information accumulation and storage in the cognitively based derivation of phraseme building. It is suggested that the pre-sign stage of the semiosis process and of phraseme understanding is a cognitive model that precedes not only the formation of the phraseme’s semantic structure, but also its perception.

    Synthesizing Speech from Intracranial Depth Electrodes using an Encoder-Decoder Framework

    Speech neuroprostheses have the potential to enable communication for people with dysarthria or anarthria. Recent advances have demonstrated high-quality text decoding and speech synthesis from electrocorticographic grids placed on the cortical surface. Here, we investigate a less invasive measurement modality in three participants, namely stereotactic EEG (sEEG), which provides sparse sampling from multiple brain regions, including subcortical regions. To evaluate whether sEEG can also be used to synthesize high-quality audio from neural recordings, we employ a recurrent encoder-decoder model based on modern deep learning methods. We find that speech can indeed be reconstructed with correlations up to 0.8 from these minimally invasive recordings, despite limited amounts of training data.
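
    The abstract does not specify the architecture in detail, so the following is only a minimal sketch of a recurrent encoder-decoder that maps windows of sEEG features to mel-spectrogram frames; the layer sizes, channel count and feature/target representation are all assumptions, not the authors' model.

```python
# A minimal sketch, not the published model: a GRU encoder-decoder mapping
# sEEG feature sequences to mel-spectrogram frames. Dimensions are illustrative.
import torch
import torch.nn as nn

class SeegToSpeech(nn.Module):
    def __init__(self, n_channels=64, hidden=256, n_mels=80):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, num_layers=2,
                              batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(2 * hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, n_mels)

    def forward(self, x):                 # x: (batch, time, n_channels)
        enc, _ = self.encoder(x)          # (batch, time, 2*hidden)
        dec, _ = self.decoder(enc)        # (batch, time, hidden)
        return self.out(dec)              # (batch, time, n_mels)

model = SeegToSpeech()
dummy = torch.randn(8, 100, 64)           # 8 trials, 100 time steps, 64 sEEG channels
spec = model(dummy)                        # predicted mel-spectrogram frames
print(spec.shape)                          # torch.Size([8, 100, 80])
```

    Reconstruction quality in such a setup is typically assessed by correlating predicted and reference spectrogram frames, which is presumably the sense in which correlations up to 0.8 are reported.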

    Adult Second Language Speakers Who Pass off as Native Speakers: Seeking Plausible Explanations from a Network of Interdisciplinary Research

    Normal infants and young children who are exposed to a second language over a substantial period of years in its natural interactive community grow up to speak the second language with the native accent of that language. This is a universal observation, commonly giving rise to the belief that children ‘are better than adults at language learning’. In some cases, the second language may even replace the first language. In contrast, being exposed to another language after that ‘early’ age generally leads to speaking that language with a foreign accent. The common explanation for the foreign accent is sensory-motor maturation of the brain’s neural pathways. The phenomenon of foreign accent has attracted, and continues to attract, research. On the other hand, a relatively small group of adults present a native-accent pattern: they sound native although they learned the second language at an older age, after the 'critical period' (CP) and/or under less natural contexts. This research focuses on this ‘phenomenal’ group of speakers. The rationale for this focus stems from the fact that these cases are documented in research (e.g., Munoz and Singleton, 2007, and Scovel, 1978) as partial evidence against CP age limits on the plasticity of the human brain for sound perception and sound production. Key words: foreign accent, accent-free speech, adult second language speakers, brain structure, brain function.

    Context- and Prosody-Driven ERP Markers for Dialog Focus Perception in Children

    The development of language proficiency extends late into childhood and includes not only producing or comprehending sounds, words and sentences, but also larger utterances spanning beyond sentence borders, such as dialogs. Dialogs consist of information units whose value constantly varies within a verbal exchange. While information is focused when introduced for the first time or corrected in order to alter the knowledge state of communication partners, the same information turns into shared knowledge during the further course of a verbal exchange. In many languages, speakers use prosodic means to highlight the informational value of information foci. Our study investigated the developmental pattern of event-related potentials (ERPs) in three age groups (12, 8 and 5 years) when perceiving two information focus types (news and corrections) embedded in short question-answer dialogs. The information foci contained in the answer sentences were either adequately marked by prosodic means or not. In doing so, we asked to what extent children depend on prosodic means to recognize information foci, or whether contextual means as provided by dialog questions are sufficient to guide focus processing. Only the 12-year-olds yielded prosody-independent ERPs when encountering new and corrective information foci, resembling previous findings in adults. Focus processing in the 8-year-olds relied upon prosodic highlighting, and differing ERP responses as a function of focus type were observed. In the 5-year-olds, only prosody-driven ERP responses were apparent, but no distinctive ERP indicating information focus recognition. Our findings reveal substantial alterations in information focus perception throughout childhood that are likely related to long-lasting maturational changes during brain development.

    The effect of visual speech on FM-sweep evoked MEG responses

    Multimodality, the combination of information from several senses into a unified percept, is a general property of the central nervous system. One example is audiovisual integration, the combination of what is seen with what is heard. One of the best-known and most striking examples of this is the McGurk illusion, in which the sound of one syllable combined with a video of a person pronouncing a different syllable produces an auditory percept that differs from both stimuli. This experiment examined the effect of visual speech on brain responses evoked by formant-like sine-wave sweeps, using magnetoencephalography (MEG) as the research method. MEG measures the magnetic field outside the head generated by electrical currents in the brain; from this field, the underlying cortical activations are inferred. The visual speech stimuli were a video of a person pronouncing the syllable /ba/, a video of the same person pronouncing /ga/, or a still picture of that person. The auditory stimuli were six sine sweeps with the following initial and final frequencies: 200-700 Hz (F1), 400-1800 Hz (F2a), 1000-1800 Hz (F2b), 1600-1800 Hz (F2c), 2200-1800 Hz (F2d) and 2800-1800 Hz (F2e). The hypothesis was that when the auditory and visual stimuli matched, brain activation would be stronger or weaker than when they did not match, and that response latencies might also differ. In the experiment, the video presented series of /ba/, /ga/ or still conditions, during which the sine sweeps were heard in random order; the order of the visual conditions was also random. Whenever the visual condition changed, the subject responded by lifting a finger. The results were contradictory: when the data were analysed as a whole, no interaction between the visual and auditory stimuli was observed. When the effect of the visual stimulus was examined separately for different conditions, some modulation of response amplitudes may have been present in isolated cases, but this remains uncertain. Possible visual effects were tested with several statistical tests, and differences in activation were found in the left hemisphere. A potential interaction effect between the auditory and visual stimuli was detected, but its exact nature remains unclear. The experiment did, however, reveal other stimulus-related effects: the auditory stimulus affected both the amplitude and the latency of the evoked responses in both the left and the right hemispheres.
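
    The six sweep stimuli are fully specified by their start and end frequencies, so they can be approximated with linear chirps; the sketch below assumes a sweep duration and audio sampling rate that are not stated in the abstract.

```python
# A minimal sketch (assumed duration and sampling rate) generating the six
# linear sine sweeps listed above with scipy.signal.chirp.
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp

fs = 44100          # assumed audio sampling rate (Hz)
dur = 0.3           # assumed sweep duration (s); not stated in the abstract
t = np.linspace(0, dur, int(fs * dur), endpoint=False)

sweeps = {          # start and end frequencies (Hz) as listed in the abstract
    "F1": (200, 700),
    "F2a": (400, 1800),
    "F2b": (1000, 1800),
    "F2c": (1600, 1800),
    "F2d": (2200, 1800),
    "F2e": (2800, 1800),
}

for name, (f_start, f_end) in sweeps.items():
    y = chirp(t, f0=f_start, f1=f_end, t1=dur, method="linear")
    wavfile.write(f"sweep_{name}.wav", fs, (0.5 * y * 32767).astype(np.int16))
```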