
    Electrophysiological methods


    A new paradigm for BCI research

    A new control paradigm for Brain-Computer Interfaces (BCIs) is proposed. BCIs provide a direct communication channel from the brain to a computer, giving individuals with motor disabilities an additional means of communicating with and controlling their external environment. Traditional BCI control paradigms extract a control signal through motor imagery, frequency rhythm modification, or the Event-Related Potential (ERP). Here, a new control paradigm based on speech imagery is proposed. In addition, a system for identifying correlations between components of the EEG and target events is introduced.

    Ongoing EEG artifact correction using blind source separation

    Objective: Analysis of the electroencephalogram (EEG) for epileptic spike and seizure detection or for brain-computer interfaces can be severely hampered by the presence of artifacts. The aim of this study is to describe and evaluate a fast automatic algorithm for ongoing correction of artifacts in continuous EEG recordings, applicable both offline and online. Methods: The algorithm is based on fast blind source separation. It uses a sliding-window technique with overlapping epochs and features in the spatial, temporal, and frequency domains to detect and correct ocular, cardiac, muscle, and powerline artifacts. Results: The approach was validated in an independent evaluation study on publicly available continuous EEG data with 2035 marked artifacts. Validation confirmed that 88% of the artifacts could be removed successfully (ocular: 81%, cardiac: 84%, muscle: 98%, powerline: 100%). The algorithm outperformed state-of-the-art alternatives in both artifact reduction rate and computation time. Conclusions: Fast ongoing artifact correction successfully removed a large proportion of artifacts while preserving most of the EEG signal. Significance: The presented algorithm may be useful for ongoing correction of artifacts, e.g., in online systems for epileptic spike and seizure detection or brain-computer interfaces. (16 pages, 4 figures, 3 tables)
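    A minimal sketch of the sliding-window blind source separation idea described in this abstract, using FastICA from scikit-learn. The window length, 50% overlap, Hann-tapered stitching, and the kurtosis threshold for flagging artifact components are illustrative assumptions, not the paper's actual parameters or detection features.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def correct_artifacts(eeg, fs, win_s=2.0, kurt_thresh=5.0):
    """Ongoing artifact correction sketch. eeg: (n_channels, n_samples)."""
    win = int(win_s * fs)
    hop = win // 2                       # 50% overlapping epochs
    out = np.zeros_like(eeg, dtype=float)
    weight = np.zeros(eeg.shape[1])
    taper = np.hanning(win)              # smooth overlap-add stitching
    for start in range(0, eeg.shape[1] - win + 1, hop):
        seg = eeg[:, start:start + win]
        ica = FastICA(max_iter=500, random_state=0)
        sources = ica.fit_transform(seg.T)           # (win, n_components)
        # Flag heavy-tailed components as artifacts (e.g., eye blinks).
        bad = kurtosis(sources, axis=0) > kurt_thresh
        sources[:, bad] = 0.0
        clean = (sources @ ica.mixing_.T + ica.mean_).T
        out[:, start:start + win] += clean * taper
        weight[start:start + win] += taper
    uncovered = weight == 0              # pass edges through unchanged
    out[:, uncovered] = eeg[:, uncovered]
    weight[uncovered] = 1.0
    return out / weight
```

    The paper detects ocular, cardiac, muscle, and powerline components with spatial, temporal, and frequency features; the single kurtosis test above stands in for that richer detection stage.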

    The Use of Electroencephalography in Language Production Research: A Review

    Speech production research long avoided electrophysiological experiments, owing to the suspicion that artifacts caused by the muscle activity of overt speech lead to a poor signal-to-noise ratio in the measurements. Researchers have therefore assessed speech production using indirect tasks, such as tacit (implicit) naming, delayed naming, or meta-linguistic tasks such as phoneme monitoring. Covert speech may, however, involve different processes than overt speech production. Recently, overt speech has been investigated using electroencephalography (EEG), and the steadily rising number of published papers indicates increasing interest in overt speech research within the cognitive neuroscience of language. Our main goal here is to review all currently available results on overt speech production involving EEG measurements, such as picture naming, Stroop naming, and reading aloud. We conclude that overt speech production can be studied successfully with electrophysiological measures, for instance event-related brain potentials (ERPs). We discuss the ERP components likely to be relevant in speech production, address how to interpret the results of ERP research using overt speech, and consider whether the ERP components in language production are comparable to results from other fields.
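    For readers unfamiliar with the ERP technique this review centers on, here is a minimal sketch of the underlying computation: epoch the continuous EEG around event onsets and average across trials, so that activity not time-locked to the event averages out. The function name, windows, and baseline interval are assumptions for illustration.

```python
import numpy as np

def erp(eeg, events, fs, tmin=-0.2, tmax=0.8):
    """eeg: (n_channels, n_samples); events: onset sample indices.
    Assumes tmin < 0 so a pre-stimulus baseline exists."""
    n0, n1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, e + n0:e + n1] for e in events
                       if e + n0 >= 0 and e + n1 <= eeg.shape[1]])
    # Baseline-correct each epoch with its pre-stimulus mean.
    baseline = epochs[:, :, :-n0].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)   # (n_channels, n_times)
```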

    The Relative Contribution of High-Gamma Linguistic Processing Stages of Word Production, and Motor Imagery of Articulation in Class Separability of Covert Speech Tasks in EEG Data

    Word production begins with automatic high-gamma linguistic processing functions, followed by speech motor planning and articulation. Phonetic properties are processed in both the linguistic and motor stages of word production. Four phonetically dissimilar phonemic structures, "BA", "FO", "LE", and "RY", were chosen as covert speech tasks. Ten neurologically healthy volunteers aged 21–33 participated in this experiment. Participants were asked to covertly speak a phonemic structure when they heard an auditory cue. EEG was recorded from 64 electrodes at 2048 samples/s. Initially, one-second trials containing both linguistic and motor imagery activities were used, and the four-class true positive rate was calculated. Next, 312 ms trials were used to exclude covert articulation from the analysis. Eliminating the covert articulation stage dropped the four-class grand average classification accuracy from 96.4% to 94.5%. The most valuable features emerged after auditory cue recognition (~100 ms post onset) and within the 70–128 Hz frequency range. The most significant identified brain regions were the prefrontal cortex (linked to stimulus-driven executive control), Wernicke's area (linked to phonological code retrieval), the right IFG, and Broca's area (linked to syllabification). Alpha and beta band oscillations associated with motor imagery do not contain enough information to fully reflect the complexity of speech movements. Over 90% of the most class-dependent features were in the 30–128 Hz range, even during the covert articulation stage. Consequently, compared to linguistic functions, the contribution of motor imagery of articulation to the class separability of covert speech tasks in EEG data is negligible.
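    An illustrative sketch of the kind of pipeline this abstract implies: log band power in the high-gamma range (70–128 Hz) per channel, fed to a linear classifier. The Butterworth filter, LDA classifier, trial counts, and stand-in random data are all assumptions; the study's actual feature extraction and classifier may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def high_gamma_power(trials, fs, band=(70.0, 128.0)):
    """trials: (n_trials, n_channels, n_samples) -> log band power."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=-1)
    return np.log(np.mean(filtered ** 2, axis=-1))  # (n_trials, n_channels)

# Example on random stand-in data: 80 one-second trials, 64 channels,
# four covert-speech classes ("BA", "FO", "LE", "RY").
fs = 2048
X_raw = np.random.randn(80, 64, fs)
y = np.repeat(np.arange(4), 20)
X = high_gamma_power(X_raw, fs)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"4-class CV accuracy: {scores.mean():.2f}")  # ~chance on noise
```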

    Unspoken Speech - Speech Recognition based on Electroencephalography


    Evoked potentials to syllable perception and production

    The purpose of this study was to devise and test a methodology for investigating cortical activity during speech perception and production that eliminates some of the confounding variables present in previous research, such as the variability of stimuli across conditions and the role of muscle artifacts, and to clarify the role of auditory feedback in speech. Evoked potentials to the speech stimulus /ba/ were obtained while the subject was hearing and speaking /ba/ with and without immediate air-conducted feedback, as well as while hearing /ba/ 0.6 s after each of these three conditions. Twelve adults were determined to have dominant left hemispheres through a series of seven hemispheric dominance tests. None had a history of hearing deficiency or showed a hearing loss during the practice session. Monopolar recordings were made from the left and right frontal areas corresponding to Broca's area on the left, and from the left and right temporoparietal areas posterior to the termination of the Sylvian fissure, with a linked-earlobe reference.
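    As a small aside on the recording montage mentioned above, the linked-earlobe reference re-expresses each scalp channel relative to the mean of the two earlobe electrodes. A minimal sketch, assuming the EEG is stored channels-by-samples and the earlobe channels' row indices are known:

```python
import numpy as np

def linked_earlobe_reference(eeg, a1_idx, a2_idx):
    """eeg: (n_channels, n_samples); a1_idx/a2_idx: earlobe channel rows."""
    ref = (eeg[a1_idx] + eeg[a2_idx]) / 2.0   # linked-earlobe reference
    return eeg - ref                          # broadcasts over channels
```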

    Response-Locked Brain Dynamics of Word Production

    The cortical regions involved in the different stages of speech production are relatively well established, but their spatio-temporal dynamics remain poorly understood. In particular, the available studies have characterized neural events with respect to the onset of the stimulus triggering a verbal response. The core aspect of language production, however, is not perception but action. In this context, the most relevant question may not be how long after a stimulus brain events happen, but rather how long before the production act they occur. We investigated speech production-related brain activity time-locked to vocal onset, in addition to the common stimulus-locked approach. We report the detailed temporal interplay between medial and left frontal activities occurring shortly before vocal onset. We interpret these as reflections of word selection and word production processes, respectively. This medial-lateral organization is in line with that described in non-linguistic action control, suggesting that similar processes are at play in word production and non-linguistic action production. This novel view of the brain dynamics underlying word production provides a useful background for future investigations of the spatio-temporal brain dynamics that lead to the production of verbal responses. Citation: Riès S, Janssen N, Burle B, Alario F-X (2013) Response-Locked Brain Dynamics of Word Production. PLoS ONE 8(3): e58197.
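    A minimal sketch of the re-alignment idea at the heart of this study: the same trials can be epoched either to stimulus onset or to vocal (response) onset, where response onsets are simply stimulus onsets shifted by each trial's reaction time. Sampling rate, onset times, and windows below are illustrative assumptions.

```python
import numpy as np

def epochs_locked(eeg, onsets, fs, tmin, tmax):
    """Cut one epoch per onset; eeg: (n_channels, n_samples)."""
    n0, n1 = int(tmin * fs), int(tmax * fs)
    return np.stack([eeg[:, o + n0:o + n1] for o in onsets])

fs = 512
stim_onsets = np.array([1000, 4000, 7000])            # sample indices
rts_s = np.array([0.62, 0.71, 0.58])                  # vocal reaction times
vocal_onsets = stim_onsets + (rts_s * fs).astype(int)

eeg = np.random.randn(64, 10000)                      # stand-in data
stim_locked = epochs_locked(eeg, stim_onsets, fs, -0.2, 0.8)
resp_locked = epochs_locked(eeg, vocal_onsets, fs, -0.5, 0.3)
```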

    Speech Processes for Brain-Computer Interfaces

    Speech interfaces have become widely used and are integrated in many applications and devices. However, speech interfaces require the user to produce intelligible speech, which might be hindered by loud environments, concern about bothering bystanders, or a general inability to produce speech due to disabilities. Decoding a user's imagined speech instead of actual speech would solve this problem. Such a Brain-Computer Interface (BCI) based on imagined speech would enable fast and natural communication without the need to speak out loud, and could provide a voice to otherwise mute people.

    This dissertation investigates BCIs based on speech processes using functional Near Infrared Spectroscopy (fNIRS) and Electrocorticography (ECoG), two brain activity imaging modalities at opposing ends of the invasiveness scale. Brain activity data have a low signal-to-noise ratio and complex spatio-temporal and spectral coherence. To analyze these data, techniques from machine learning, neuroscience, and Automatic Speech Recognition are combined in this dissertation to facilitate robust classification of detailed speech processes while simultaneously illustrating the underlying neural processes.

    fNIRS is an imaging modality based on cerebral blood flow. It requires only affordable hardware and can be set up within minutes in a day-to-day environment, making it ideally suited for convenient user interfaces. However, the hemodynamic processes measured by fNIRS are slow in nature, so the technology offers poor temporal resolution. We investigate speech in fNIRS and demonstrate classification of speech processes for BCIs based on fNIRS.

    ECoG provides ideal signal properties by invasively measuring electrical potentials artifact-free directly on the brain surface. High spatial resolution and temporal resolution down to millisecond sampling provide localized information with timing accurate enough to capture the fast processes underlying speech production. This dissertation presents the Brain-to-Text system, which harnesses automatic speech recognition technology to decode a textual representation of continuous speech from ECoG. This could allow users to compose messages or issue commands through a BCI. While decoding a textual representation is unparalleled for device control and typing, direct communication is even more natural if the full expressive power of speech, including emphasis and prosody, can be conveyed. For this purpose, a second system is presented, which directly synthesizes neural signals into audible speech and could enable conversation with friends and family through a BCI.

    Up to now, both the Brain-to-Text and the synthesis system operate on audibly produced speech. To bridge the gap to the final frontier of neural prostheses based on imagined speech processes, we investigate the differences between audibly produced and imagined speech and present first results towards BCIs based on imagined speech processes. This dissertation demonstrates the use of speech processes as a BCI paradigm for the first time. Speech processes offer a fast and natural interaction paradigm that will help patients and healthy users alike to communicate efficiently with computers and with friends and family through BCIs.
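    A hedged sketch of an ASR-style front-end over ECoG, in the spirit of the Brain-to-Text idea described above: slice the neural signal into short overlapping frames and compute log high-gamma power per electrode, yielding a feature sequence a speech decoder can consume like a spectrogram. The frame size, hop, band edges, and sampling rate are assumptions, not the dissertation's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ecog_frames(ecog, fs, frame_s=0.05, hop_s=0.01, band=(70.0, 170.0)):
    """ecog: (n_electrodes, n_samples) -> (n_frames, n_electrodes)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    hg = filtfilt(b, a, ecog, axis=-1) ** 2       # high-gamma power trace
    frame, hop = int(frame_s * fs), int(hop_s * fs)
    starts = range(0, ecog.shape[1] - frame + 1, hop)
    return np.log(np.stack([hg[:, s:s + frame].mean(axis=-1)
                            for s in starts]))

fs = 1000
feats = ecog_frames(np.random.randn(64, 5 * fs), fs)  # (n_frames, 64)
```

    Each row is one 50 ms frame; a decoder borrowed from automatic speech recognition (e.g., an HMM or neural network) then maps frame sequences to phone or word sequences.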