157 research outputs found
User Experience May be Producing Greater Heart Rate Variability than Motor Imagery Related Control Tasks during the User-System Adaptation in Brain-Computer Interfaces
Brain-computer interface (BCI) technology is developing fast, but it remains inaccurate, unreliable, and slow due to the difficulty of obtaining precise information from the brain. Consequently, the use of other biosignals to decode the user's control tasks has grown in importance. A traditional way to operate a BCI system is via motor imagery (MI) tasks. As imaginary movements activate similar cortical structures and vegetative mechanisms as voluntary movements do, heart rate variability (HRV) has been proposed as a parameter to improve the detection of MI-related control tasks. However, heart rate (HR) is very susceptible to bodily needs and environmental demands, and as BCI systems require high levels of attention, perceptual processing, and mental workload, it is important to assess the practical effectiveness of HRV. The present study aimed to determine whether brain and heart electrical signals (HRV) are modulated by the MI activity used to control a BCI system, or whether HRV is modulated by the user's perceptions of and responses to operating a BCI system (i.e., the user experience). For this purpose, a database of 11 participants who were exposed to eight different situations was used, in which the sensory-cognitive load (intake and rejection tasks) was controlled. Two electrophysiological signals were utilized: electroencephalography and electrocardiography. From these biosignals, event-related (de-)synchronization maps and event-related HR changes were respectively estimated. The maps and the HR changes were cross-correlated to verify whether both biosignals were modulated by MI activity. The results suggest that HR varies according to the experience undergone by the user in a BCI working environment, and not because of the MI activity used to operate the system.
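The event-related HR changes mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the study's implementation: the function names and the beat-to-beat HR samples are hypothetical, and each post-cue sample is expressed as a percentage change from the pre-cue baseline mean, analogous to how ERD/ERS percentages are computed.

```python
def event_related_hr_change(hr, baseline_end):
    """Express each post-cue HR sample as a percentage change from the
    mean of the pre-cue baseline samples (hypothetical sketch of an
    event-related HR change curve)."""
    base = sum(hr[:baseline_end]) / baseline_end
    return [100.0 * (v - base) / base for v in hr[baseline_end:]]

# Hypothetical beat-to-beat HR samples: 3 baseline samples, 4 post-cue
hr = [70.0, 70.0, 70.0, 77.0, 63.0, 70.0, 84.0]
changes = event_related_hr_change(hr, 3)  # percent change per sample
```

Curves like `changes` would then be cross-correlated against the ERD/ERS time courses, as the abstract describes, to test whether both signals co-vary with MI activity.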
Brain-Computer Interfaces for HCI and Games
In this workshop we study the research themes and the state-of-the-art of brain-computer interaction. Brain-computer interface research has seen much progress in the medical domain, for example for prosthesis control or as biofeedback therapy for the treatment of neurological disorders. Here, however, we look at brain-computer interaction especially as it applies to research in Human-Computer Interaction (HCI). Through this workshop and continuing discussions, we aim to define research approaches and applications that apply to disabled and able-bodied users across a variety of real-world usage scenarios. Entertainment and game design is one of the application areas that will be considered.
Electrocorticogram as the Basis for a Direct Brain Interface: Opportunities for Improved Detection Accuracy
A direct brain interface (DBI) based on the detection of event-related potentials (ERPs) in human electrocorticogram (ECoG) is under development. Accurate detection has been demonstrated with this approach (near 100% on a few channels) using a single-channel cross-correlation template matching (CCTM) method. Several opportunities for improved detection accuracy have been identified. Detection using a multiple-channel CCTM method, as well as a variety of detection methods that take advantage of the simultaneous occurrence of ERPs and event-related desynchronization/synchronization (ERD/ERS), have been demonstrated to offer potential for improved detection accuracy.
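The single-channel CCTM method named in this abstract can be sketched in a few lines. This is a simplified illustration under stated assumptions, not the authors' code: the template, signal, and threshold are hypothetical, and detection is taken to mean that the Pearson correlation between the ERP template and a sliding window of the signal exceeds a threshold.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def cctm_detect(signal, template, threshold=0.9):
    """Single-channel CCTM sketch: correlate the ERP template with every
    window of the signal and report offsets where the correlation
    exceeds the detection threshold."""
    w = len(template)
    return [i for i in range(len(signal) - w + 1)
            if pearson(signal[i:i + w], template) >= threshold]

# Hypothetical ERP template embedded in a short signal
template = [0.0, 1.0, 2.0, 1.0, 0.0]
signal = [0.1, 0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.2, 0.1]
hits = cctm_detect(signal, template)  # offsets where the template matches
```

A multiple-channel variant would combine such correlation scores across ECoG channels before thresholding, which is one of the accuracy-improvement routes the abstract points to.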
Anticipatory models of human movements and dynamics: the roadmap of the AnDy project
Future robots will need more and more anticipation capabilities to properly react to human actions and provide efficient collaboration. To achieve this goal, we need new technologies that not only estimate the motion of humans, but that fully describe the whole-body dynamics of the interaction and can also predict its outcome. These hardware and software technologies are the goal of the European project AnDy. In this paper, we describe the roadmap of AnDy, which leverages existing technologies to endow robots with the ability to control physical collaboration through intentional interaction. To achieve this goal, AnDy relies on three technological and scientific breakthroughs. First, AnDy will innovate the way of measuring human whole-body motions by developing the wearable AnDySuit, which tracks motions and records forces. Second, AnDy will develop the AnDyModel, which combines ergonomic models with cognitive predictive models of human dynamic behavior in collaborative tasks, learned from data acquired with the AnDySuit. Third, AnDy will propose AnDyControl, an innovative technology for assisting humans through predictive physical control, based on AnDyModel. By measuring and modeling human whole-body dynamics, AnDy will provide robots with a new level of awareness about human intentions and ergonomics. By incorporating this awareness on-line in the robot's controllers, AnDy paves the way for novel applications of physical human-robot collaboration in manufacturing, health care, and assisted living.
Predicting mental imagery based BCI performance from personality, cognitive profile and neurophysiological patterns
Mental-imagery based Brain-Computer Interfaces (MI-BCIs) allow their users to send commands to a computer using their brain activity alone (typically measured by electroencephalography, EEG), which is processed while they perform specific mental tasks. While very promising, MI-BCIs remain barely used outside laboratories because of the difficulty users encounter in controlling them. Indeed, although some users obtain good control performance after training, a substantial proportion remains unable to reliably control an MI-BCI. This large variability in user performance has led the community to look for predictors of MI-BCI control ability. However, these predictors have only been explored for motor-imagery based BCIs, and mostly for a single training session per subject. In this study, 18 participants were instructed to learn to control an EEG-based MI-BCI by performing 3 MI tasks, 2 of which were non-motor tasks, across 6 training sessions on 6 different days. Relationships between the participants' BCI control performance and their personality, cognitive profile, and neurophysiological markers were explored. While no relevant relationships with neurophysiological markers were found, strong correlations between MI-BCI performance and mental-rotation scores (reflecting spatial abilities) were revealed. Also, a predictive model of MI-BCI performance based on psychometric questionnaire scores was proposed. A leave-one-subject-out cross-validation process revealed the stability and reliability of this model: it predicted participants' performance with a mean error of less than 3 points. This study determined how users' profiles impact their MI-BCI control ability and thus clears the way for designing novel MI-BCI training protocols adapted to the profile of each user.
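The leave-one-subject-out cross-validation described in this abstract can be sketched as follows. This is a toy illustration, not the study's model: the data are invented (a single hypothetical predictor standing in for a questionnaire score, e.g. mental rotation, against a BCI accuracy), and a simple one-variable least-squares fit replaces whatever model the authors actually used.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a*x + b for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loso_mean_error(xs, ys):
    """Leave-one-subject-out: fit on all subjects but one, predict the
    held-out subject, and average the absolute prediction errors."""
    errs = []
    for i in range(len(xs)):
        a, b = fit_linear(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append(abs(a * xs[i] + b - ys[i]))
    return sum(errs) / len(errs)

# Hypothetical data: questionnaire scores vs. BCI accuracy (%)
score = [20, 25, 30, 35, 40, 45]
accuracy = [55, 58, 63, 66, 70, 74]
mae = loso_mean_error(score, accuracy)  # mean absolute error in points
```

The appeal of the leave-one-subject-out scheme is that every prediction is made for a subject the model has never seen, which is why the abstract can interpret the sub-3-point mean error as evidence of stability and reliability.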
Brain enhancement through cognitive training: A new insight from brain connectome
Owing to recent advances in neurotechnology and progress in the understanding of brain cognitive functions, improving cognitive performance or accelerating the learning process with brain enhancement systems is no longer out of reach; on the contrary, it is a tangible target of contemporary research. Although a variety of approaches have been proposed, we will mainly focus on cognitive training interventions, in which learners repeatedly perform cognitive tasks to improve their cognitive abilities. In this review article, we propose that the learning process during cognitive training can be facilitated by an assistive system monitoring cognitive workload using electroencephalography (EEG) biomarkers, and that the brain connectome approach can provide additional valuable biomarkers for facilitating learners' progress. To this end, we introduce studies on cognitive training interventions, EEG biomarkers for cognitive workload, and the human brain connectome. As cognitive overload and mental fatigue would reduce or even eliminate the gains of cognitive training interventions, real-time monitoring of cognitive workload can facilitate the learning process by flexibly adjusting the difficulty level of the training task. Moreover, cognitive training interventions should have effects on brain sub-networks, not on a single brain region, and graph-theoretical network metrics quantifying the topological architecture of the brain network can differentiate individual cognitive states as well as different individuals' cognitive abilities, suggesting that the connectome is a valuable approach for tracking learning progress. Although only a few studies have so far exploited the connectome approach to study alterations of the brain network induced by cognitive training interventions, we believe it will be a useful technique for capturing improvements in cognitive function.
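Two of the graph-theoretical network metrics this review alludes to can be sketched on a toy connectome. This is an illustrative example only: the four-node adjacency structure is invented, and real connectome analyses operate on weighted networks derived from EEG or imaging connectivity, usually via dedicated libraries.

```python
def degrees(adj):
    """Node degree, the simplest graph-theoretical metric: number of
    connections per node in an undirected adjacency-set graph."""
    return {n: len(nbrs) for n, nbrs in adj.items()}

def clustering(adj, n):
    """Local clustering coefficient: the fraction of a node's neighbour
    pairs that are themselves connected."""
    nbrs = list(adj[n])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Hypothetical 4-node connectome (undirected adjacency sets)
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
deg = degrees(adj)
cc = clustering(adj, "C")
```

Tracking how such metrics shift across training sessions is the sense in which the review suggests the connectome can serve as a marker of learning progress.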
Using a motor imagery questionnaire to estimate the performance of a Brain–Computer Interface based on object oriented motor imagery
<p>Objectives: The primary objective was to test whether motor imagery (MI) questionnaires can be used to detect BCI ‘illiterates’. The second objective was to test how different MI paradigms, with and without the physical presence of the goal of an action, influence a BCI classifier.</p>
<p>Methods: Kinaesthetic (KI) and visual (VI) motor imagery questionnaires were administered to 30 healthy volunteers. Their EEG was recorded during a cue-based, simple imagery (SI) and goal oriented imagery (GOI).</p>
<p>Results: The strongest correlation (Pearson r2 = 0.53, p = 1.6e-5) was found between KI and SI, followed by a moderate correlation between KI and GOI (r2 = 0.33, p = 0.001) and weak correlations between VI and SI (r2 = 0.21, p = 0.022) and between VI and GOI (r2 = 0.17, p = 0.05). Classification accuracy was similar for SI (71.1 ± 7.8%) and GOI (70.5 ± 5.9%), though the corresponding classification features differed in 70% of participants. Compared to SI, GOI improved the classification accuracy in ‘poor’ imagers while reducing the classification accuracy in ‘very good’ imagers.</p>
<p>Conclusion: The KI score could potentially be a useful tool to predict the performance of a MI based BCI. The physical presence of the object of an action facilitates motor imagination in ‘poor’ able-bodied imagers.</p>
<p>Significance: Although this study shows results on able-bodied people, its general conclusions should be transferable to BCI based on MI for assisted rehabilitation of the upper extremities in patients.</p>
Perception and Cognition of Cues Used in Synchronous Brain–Computer Interfaces Modify Electroencephalographic Patterns of Control Tasks
A motor imagery (MI)-based brain–computer interface (BCI) is a system that enables humans to interact with their environment by translating their brain signals into control commands for a target device. In particular, synchronous BCI systems make use of cues to trigger the motor activity of interest. So far, it has been shown that electroencephalographic (EEG) patterns before and after cue onset can reveal the user's cognitive state and enhance the discrimination of MI-related control tasks. However, there has been no detailed investigation of the nature of those EEG patterns. We therefore propose to study the cue effects on MI-related control tasks by selecting the EEG patterns that best discriminate such control tasks, and analyzing where those patterns originate. The study was carried out using two methods: standard and all-embracing. The standard method was based on sources (recording sites, frequency bands, and time windows) where the modulation of EEG signals due to motor activity is typically detected. The all-embracing method included a wider variety of sources, in which not only motor activity is reflected. The findings of this study showed that the classification accuracy (CA) of MI-related control tasks did not depend on the type of cue in use. However, the EEG patterns that best differentiated those control tasks emerged from sources well defined by the perception and cognition of the cue in use. An implication of this study is the possibility of obtaining different control commands that can be detected with the same accuracy. Since different cues trigger control tasks that yield similar CAs, and those control tasks produce EEG patterns differentiated by the nature of the cue, this could accelerate brain–computer communication by providing a wider variety of detectable control commands.
This is an important issue for Neuroergonomics research because neural activity could not only be used to monitor the human mental state, as is typically done, but might also be employed to control the system of interest.
Defining brain–machine interface applications by matching interface performance with device requirements
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation, and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, again in terms of throughput and latency. Then device requirements are matched with the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications. © 2007 Elsevier B.V. All rights reserved.
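The matching step this abstract describes, pairing device requirements with interface performance, can be sketched as a simple feasibility filter. The throughput figures below are hypothetical placeholders, not the paper's measured values, and latency is omitted for brevity; the point is only the shape of the requirement-matching logic.

```python
# Hypothetical information-throughput figures (bits/s) for interfaces
# and minimum throughput requirements for devices.
interfaces = {"P300 BCI": 0.5, "motor-imagery BCI": 0.8, "joystick": 50.0}
devices = {"domotics switch": 0.2, "wheelchair": 0.7, "robot arm": 10.0}

def feasible_pairs(interfaces, devices):
    """An interface can drive a device if its throughput meets or
    exceeds the device's requirement; return all such pairs."""
    return sorted((i, d) for i, bps in interfaces.items()
                  for d, req in devices.items() if bps >= req)

pairs = feasible_pairs(interfaces, devices)
```

With numbers of this order, the filter reproduces the abstract's qualitative picture: current non-invasive BMIs reach the simple domotics devices, only the better-performing ones reach a wheelchair, and a robot arm remains out of reach without higher-throughput interfaces or smart controllers.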