Prefrontal High Gamma in ECoG Tags Periodicity of Musical Rhythms in Perception and Imagination
Rhythmic auditory stimuli are known to elicit matching activity patterns in neural populations. Furthermore, recent research has established the particular importance of high-gamma brain activity in auditory processing by showing its involvement in auditory phrase segmentation and envelope tracking. Here, we use electrocorticographic (ECoG) recordings from eight human listeners to test whether periodicities in high-gamma activity track the periodicities in the envelope of musical rhythms during rhythm perception and imagination. Rhythm imagination was elicited by instructing participants to imagine that the rhythm continued during pauses between several repetitions. To identify electrodes whose high-gamma periodicities track the periodicities in the musical rhythms, we computed the correlation between the autocorrelations (ACCs) of the musical rhythms and of the neural signals. A condition in which participants listened to white noise was used to establish a baseline. High-gamma autocorrelations in auditory areas in the superior temporal gyrus and in frontal areas of both hemispheres significantly matched the autocorrelations of the musical rhythms. Overall, numerous significant electrodes were observed on the right hemisphere. Of particular interest is a large cluster of electrodes in the right prefrontal cortex that was active during both rhythm perception and imagination. This indicates conscious processing of the rhythms' structure, as opposed to mere auditory phenomena. The autocorrelation approach clearly shows that high-gamma activity measured from cortical electrodes tracks both attended and imagined rhythms.
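The matching procedure described in this abstract can be sketched in a few lines: compute each signal's normalized autocorrelation, then correlate the two ACCs. The sketch below is a minimal illustration on synthetic data, not the study's actual pipeline (which operates on ECoG high-gamma power and tests significance against a white-noise baseline); all function names are hypothetical.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a 1-D signal for lags 0..max_lag."""
    x = x - np.mean(x)
    full = np.correlate(x, x, mode="full")
    acc = full[len(x) - 1 : len(x) + max_lag]  # keep non-negative lags
    return acc / acc[0]  # normalize so that lag 0 equals 1

def acc_match(envelope, neural, max_lag):
    """Pearson correlation between the ACCs of the stimulus envelope and
    the neural trace. Lag 0 is excluded since it is 1 by construction."""
    a = autocorrelation(envelope, max_lag)[1:]
    b = autocorrelation(neural, max_lag)[1:]
    return np.corrcoef(a, b)[0, 1]

# Toy example: a 2 Hz periodic envelope and a noisy trace sharing that
# periodicity should yield a high correlation between their ACCs.
fs = 100
t = np.arange(0, 10, 1 / fs)
env = (np.sin(2 * np.pi * 2 * t) > 0).astype(float)
neural = env + 0.5 * np.random.default_rng(0).standard_normal(len(t))
r = acc_match(env, neural, max_lag=2 * fs)
```

Because the noise contributes mainly to the (excluded) zero-lag term, the periodic structure dominates both ACCs and `r` comes out close to 1.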
Transient Signals and Inattentional Blindness in a Multi-Object Tracking Task
Inattentional blindness is a failure to notice an unexpected event when attention is directed elsewhere. The current study examined participants' awareness of an unexpected object that maintained its luminance contrast, switched its luminance once, or flashed repetitively. One hundred twenty participants performed a dynamic tracking task on a computer monitor, in which they were instructed to count the number of movement deflections of an attended set of objects while ignoring other objects. On the critical trial, an unexpected cross that did not change its luminance (control condition), switched its luminance once (switch condition), or flashed repetitively (flash condition) traveled across the stimulus display. Participants noticed the unexpected cross more frequently when its luminance matched their attentional set than when it did not. Unexpectedly, however, the proportions of participants who noticed the cross in the switch and flash conditions were statistically comparable. The results suggest that an unexpected object with even a single luminance change can break inattentional blindness in a multi-object tracking task.
Discrimination of Overt, Mouthed, and Imagined Speech Activity using Stereotactic EEG
Recent studies have demonstrated that it is possible to decode and synthesize acoustic speech directly from intracranial measurements of brain activity. A major current challenge is to extend this decoding to imagined speech, toward the development of a practical speech neuroprosthesis for the disabled. The present study used intracranial brain recordings from participants who performed a speaking task consisting of overt, mouthed, and imagined speech trials. Rather than directly comparing the performance of speech decoding models trained on the respective speaking modes, this study developed and trained models that use neural data to discriminate between pairs of speaking modes, in order to better elucidate the unique neural features that contribute to the performance discrepancies between overt and imagined speech. The results further support that, while there exists a common neural substrate across speech modes, there are also unique neural processes that differentiate them.
Direct Classification of All American English Phonemes Using Signals From Functional Speech Motor Cortex
Although brain-computer interfaces (BCIs) can be used in several different ways to restore communication, communicative BCIs have not approached the rate or efficiency of natural human speech. Electrocorticography (ECoG) has precise spatiotemporal resolution that enables recording of brain activity distributed over a wide area of cortex, such as during speech production. In this study, we investigated words that span the entire set of phonemes in the General American accent using ECoG with four subjects. We classified phonemes with up to 36% accuracy when classifying all phonemes and up to 63% accuracy for a single phoneme. Further, misclassified phonemes followed the articulatory organization described in the phonology literature, aiding classification of whole words. Precise temporal alignment to phoneme onset was crucial for classification success. We identified specific spatiotemporal features that aid classification, which could guide future applications. Word identification was equivalent to information transfer rates as high as 3.0 bits/s (33.6 words/min), supporting the pursuit of speech articulation for BCI control.
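Information transfer rates like the 3.0 bits/s quoted above are commonly computed with the Wolpaw formula, B = log2(N) + P·log2(P) + (1−P)·log2((1−P)/(N−1)), where N is the number of classes and P the classification accuracy. Assuming that convention, a small sketch follows; the vocabulary size, accuracy, and timing below are illustrative, not the study's actual values.

```python
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR in bits per selection for an N-class decoder."""
    p, n = accuracy, n_classes
    if p >= 1.0:
        return math.log2(n)        # perfect decoding: full log2(N) bits
    if p <= 1.0 / n:
        return 0.0                 # at or below chance: no information
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Illustrative numbers only: a 50-word vocabulary decoded at 70%
# accuracy, one selection every 1.3 seconds.
bits = wolpaw_bits_per_selection(50, 0.70)
itr_bits_per_s = bits / 1.3
```

Dividing bits per selection by the selection time converts the formula's output to the bits/s figure typically reported in BCI papers.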
A brain-computer interface with vibrotactile biofeedback for haptic information
Background It has been suggested that brain-computer interfaces (BCIs) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of a BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy.
Methods A mu-rhythm-based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance.
Results and Conclusion Subjects were able to control the BCI using only vibrotactile feedback with an average accuracy of 56%, and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subjects had no voluntary control of their mu rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality for operating a BCI using motor imagery. In addition, the study shows that placing the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in BCI accuracy. This bias is consistent with a drop in performance caused by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.
Comparison of eye tracking, electrooculography and an auditory brain-computer interface for binary communication: a case study with a participant in the locked-in state
Background In this study, we evaluated electrooculography (EOG), an eye tracker, and an auditory brain-computer interface (BCI) as access methods to augmentative and alternative communication (AAC). The participant of the study has been in the locked-in state (LIS) for six years due to amyotrophic lateral sclerosis. He was able to communicate with slow residual eye movements, but had no means of partner-independent communication. We discuss the usability of all tested access methods and the prospects of using BCIs as an assistive technology.
Methods Within four days, we tested whether EOG, eye tracking and a BCI would allow the participant in LIS to make simple selections. We optimized the parameters in an iterative procedure for all systems.
Results The participant was able to gain control over all three systems. Nonetheless, due to the level of proficiency previously achieved with his low-tech AAC method, he did not consider using any of the tested systems as an additional communication channel. However, he would consider using the BCI once control over his eye muscles was no longer possible. He rated the ease of use of the BCI as the highest among the tested systems, because no precise eye movements were required, but also as the most tiring, due to the high level of attention needed to operate the BCI.
Conclusions In this case study, partner-based communication was possible due to the good care provided and the proficiency achieved by the interlocutors. To ease the transition from a low-tech AAC method to a BCI once control over all muscles is lost, the BCI must be simple to operate. For persons who rely on AAC and are affected by a progressive neuromuscular disease, we argue that a complementary approach, combining BCIs and standard assistive technology, can prove valuable for achieving partner-independent communication and easing the transition to a purely BCI-based approach. Finally, we provide further evidence for the importance of a user-centered approach in the design of new assistive devices.
Toward a model-based predictive controller design in brain-computer interfaces
A first step in designing a robust and optimal model-based predictive controller (MPC) for brain-computer interface (BCI) applications is presented in this article. An MPC has the potential to achieve improved BCI performance compared to that of current ad hoc, non-model-based filter applications. The parameters for designing the controller were extracted as model-based features from motor-imagery-related human scalp electroencephalography. Although the parameters can be generated from any model, linear or non-linear, we here adopted a simple autoregressive (AR) model that has well-established applications in BCI task discrimination. We show that the parameters generated for the controller design can also be used for motor imagery task discrimination, with performance (8-23% task discrimination error) comparable to that of commonly used features such as frequency-specific band powers and directly used AR model parameters. An optimal MPC has significant implications for high-performance BCI applications.
This work was supported by Grants K25NS061001 (MK) and K02MH01493 (SJS) from the National Institute of Neurological Disorders and Stroke (NINDS) and the National Institute of Mental Health (NIMH), the Portuguese Foundation for Science and Technology (FCT) Grant SFRH/BD/21529/2005 (NSD), the Pennsylvania Department of Community and Economic Development Keystone Innovation Zone Program Fund (SJS), and the Pennsylvania Department of Health Tobacco Settlement Fund (SJS).
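As a rough illustration of using AR coefficients as features, the sketch below fits an AR(p) model by ordinary least squares and recovers known coefficients from synthetic data. The study's actual estimator and model order may differ (Yule-Walker or Burg methods are also common for EEG); all names here are hypothetical.

```python
import numpy as np

def ar_coefficients(x, order):
    """Least-squares fit of an AR(p) model
    x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t];
    returns the coefficient vector (a1..ap), usable as a feature vector."""
    x = np.asarray(x, dtype=float)
    # Design matrix: columns are x[t-1], x[t-2], ..., x[t-order]
    X = np.column_stack([x[order - k : len(x) - k]
                         for k in range(1, order + 1)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic AR(2) process with known, stable coefficients (1.5, -0.7).
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.standard_normal()
a = ar_coefficients(x, order=2)
```

With 2000 samples, the least-squares estimates land close to the true coefficients; in a BCI pipeline such vectors, computed per channel and epoch, would be fed to a task discriminator.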
A Tutorial on EEG Signal Processing Techniques for Mental State Recognition in Brain-Computer Interfaces
This chapter presents an introductory overview and a tutorial of signal processing techniques that can be used to recognize mental states from electroencephalographic (EEG) signals in brain-computer interfaces. More particularly, this chapter presents how to extract relevant and robust spectral, spatial, and temporal information from noisy EEG signals (e.g., band power features, spatial filters such as Common Spatial Patterns or xDAWN), as well as a few classification algorithms (e.g., Linear Discriminant Analysis) used to classify this information into mental-state classes. It also briefly touches on alternative, but currently less used, approaches. The overall objective of this chapter is to provide the reader with practical knowledge of how to analyse EEG signals, as well as to stress the key points to understand when performing such an analysis.
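A minimal example of the band-power features this tutorial describes: log power in the 8-12 Hz mu band computed from an FFT periodogram. This is an illustrative sketch under simple assumptions, not the chapter's own code; a real pipeline would add spatial filtering (e.g., CSP) and a classifier such as LDA over multi-channel, multi-band feature vectors.

```python
import numpy as np

def band_power(epoch, fs, band):
    """Log band power of one single-channel EEG epoch via the FFT
    periodogram, a common 'band power' feature for motor imagery BCIs."""
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch - epoch.mean())) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(psd[mask].mean() + 1e-12)  # log compresses the scale

# Toy demonstration: epochs with strong vs weak 10 Hz (mu) oscillations
# separate cleanly on this single feature.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
strong = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
weak = 0.1 * rng.standard_normal(len(t))
f_strong = band_power(strong, fs, (8, 12))
f_weak = band_power(weak, fs, (8, 12))
```

In motor imagery BCIs, the discriminative signal is exactly this kind of band-power change (event-related desynchronization of the mu rhythm), which is why a linear classifier over such features often suffices.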