High Aptitude Motor Imagery BCI Users Have Better Visuospatial Memory
Brain-computer interfaces (BCI) decode electrophysiological signals from
the brain into an action that is carried out by a computer or robotic device.
Motor imagery BCIs (MI BCI) rely on the user's imagination of bodily movements;
however, not all users can generate the brain activity needed to control an MI BCI.
This difference in MI BCI performance among novice users could be due to their
cognitive abilities. In this study, the impact of spatial abilities and
visuospatial memory on MI BCI performance is investigated. Fifty-four novice
users participated in an MI BCI task and two cognitive tests. The impact of
spatial abilities and visuospatial memory on BCI task error rate was measured
across three feedback sessions. Our results showed that spatial abilities, as
assessed by the Mental Rotation Test, were not related to MI BCI performance;
however, visuospatial memory, assessed by the Design Organization Test, was
higher in high-aptitude users. Our findings can contribute to the optimization of
MI BCI training paradigms through participant screening and cognitive skill training.
Comment: Accepted at the IEEE International Conference on Systems, Man, and Cybernetics (SMC2020)
Improving User Experience and Performance through Gamification of MI-BCI Training
Motor Imagery Brain-Computer Interfaces (MI-BCI) decode brain patterns associated with motor intentions into control commands for a variety of applications, bypassing traditional motor inputs. To use these systems, the user must produce identifiable and stable MI patterns, which requires multiple training sessions in a lab. However, MI-BCI training protocols are often repetitive and suboptimal as some users remain incapable of BCI control. This problem, known as BCI illiteracy/deficiency, has been related to psychological and cognitive factors such as motivation and attention. While some studies have tried to improve users’ MI skills and BCI performance through enriched feedback or motor priming, a unified protocol that considers various aspects of user training has not yet been introduced. The current study aims to develop a more user-centered MI-BCI training protocol by implementing principles from human-computer interaction and game design. Through a systematic review, we examine how gamification of user training can improve user experience and BCI performance. Here, gamification refers to the use of game elements such as interactive objects, goals, and rewards, which can make BCI training more engaging, motivating, and effective. A potential platform for such a BCI training game is virtual reality (VR). Not only does VR offer richer, immersive feedback during BCI training, it can also embody the user into a virtual character, giving them more agency over virtual movements performed with the BCI. We discuss how virtual environments have been used in MI-BCI training in combination with gamification, and introduce empirical studies that can further incorporate and test a gamified VR MI-BCI training protocol. 
An overview of effective design principles for MI-BCI training can provide future BCI researchers and developers with a framework for creating more engaging and effective protocols that reduce the BCI inefficiency problem and accelerate the technology's mainstream adoption.
End-to-End Deep Transfer Learning for Calibration-free Motor Imagery Brain Computer Interfaces
A major issue in Motor Imagery Brain-Computer Interfaces (MI-BCIs) is their
poor classification accuracy and the large amount of data required for
subject-specific calibration. This makes BCIs less accessible to general users
in out-of-the-lab applications. This study employed deep transfer learning for
the development of calibration-free, subject-independent MI-BCI classifiers. Unlike
earlier works that applied signal preprocessing and feature engineering steps
in transfer learning, this study adopted an end-to-end deep learning approach
on raw EEG signals. Three deep learning models (MIN2Net, EEGNet, and
DeepConvNet) were trained and compared using an openly available dataset
containing EEG signals from 55 subjects who performed a left- vs.
right-hand motor imagery task. To evaluate the performance of each model,
leave-one-subject-out cross-validation was used. The results of the models
differed significantly: MIN2Net was not able to differentiate right- vs.
left-hand motor imagery of new users, with a median accuracy of 51.7%, while the
other two models performed better, with median accuracies of 62.5% for EEGNet
and 59.2% for DeepConvNet. These accuracies do not reach the 70% threshold
required for significant control; however, they are similar to the
accuracies of these models when tested on other datasets without transfer
learning.
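The leave-one-subject-out evaluation described above can be sketched as follows. The tiny nearest-mean "classifier" is only an illustrative stand-in for the deep models named in the abstract (MIN2Net, EEGNet, DeepConvNet); the point of the sketch is the split scheme, in which each subject's data is held out once and never seen during training.

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation for
# subject-independent classifier evaluation. The nearest-mean classifier
# is a placeholder assumption, not the models used in the study.
from statistics import mean

def loso_splits(subject_ids):
    """Yield (train_ids, test_id) pairs: each subject is held out once."""
    unique = sorted(set(subject_ids))
    for held_out in unique:
        yield [s for s in unique if s != held_out], held_out

def evaluate_loso(data):
    """data: {subject_id: [(feature, label), ...]} with labels 0/1.
    Returns per-held-out-subject accuracy."""
    accuracies = {}
    for train_ids, test_id in loso_splits(data):
        # "Train": class means over all trials of the training subjects.
        trials = [t for s in train_ids for t in data[s]]
        mean0 = mean(x for x, y in trials if y == 0)
        mean1 = mean(x for x, y in trials if y == 1)
        # "Test": classify the held-out subject's trials by nearest class mean.
        correct = sum(
            1 for x, y in data[test_id]
            if (abs(x - mean1) < abs(x - mean0)) == (y == 1)
        )
        accuracies[test_id] = correct / len(data[test_id])
    return accuracies
```

Reporting the median of the per-subject accuracies, as the abstract does, is more robust than the mean when a few subjects are near chance level.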
Brain-Computer Interface and Motor Imagery Training: The Role of Visual Feedback and Embodiment
Controlling a brain-computer interface (BCI) is a difficult task that requires extensive training. Particularly in the case of motor imagery BCIs, users may need several training sessions before they learn how to generate the desired brain activity and reach an acceptable performance. A typical training protocol for such BCIs includes execution of a motor imagery task by the user, followed by presentation of an extending bar or a moving object on a computer screen. In this chapter, we discuss the importance of visual feedback that resembles human actions, the effect of human factors such as confidence and motivation, and the role of embodiment in the learning process of a motor imagery task. Our results from a series of experiments in which users BCI-operated a humanlike android robot confirm that realistic visual feedback can induce a sense of embodiment, which promotes significant learning of the motor imagery task in a short amount of time. We review the impact of humanlike visual feedback on optimized modulation of brain activity by BCI users.
Robot-Assisted Mindfulness Practice: Analysis of Neurophysiological Responses and Affective State Change
Mindfulness is the state of paying attention to the present moment on purpose,
and meditation is the technique for attaining this state. This study aims to
develop a robot assistant that facilitates mindfulness training by means of a
Brain-Computer Interface (BCI) system. To achieve this goal, we collected EEG
signals from two groups of subjects engaging in a meditative vs. non-meditative
human-robot interaction (HRI) and evaluated cerebral hemispheric asymmetry,
which is recognized as a well-defined indicator of emotional states. Moreover,
using self-reported affective states, we strived to explain asymmetry changes
based on pre- and post-experiment mood alterations. We found that, unlike in earlier
meditation studies, frontocentral activations in the alpha and theta frequency
bands were not influenced by robot-guided mindfulness practice; however, there
was significantly greater right-sided activity in the occipital gamma band of
the Meditation group, which is attributed to increased sensory awareness and open
monitoring. In addition, there was a significant main effect of Time on
participants' self-reported affect, indicating an improved mood after
interaction with the robot regardless of the interaction type. Our results
suggest that EEG responses during robot-guided meditation hold promise for
real-time detection and neurofeedback of the mindful state to the user; however, the
experienced neurophysiological changes may differ based on the meditation
practice and the tools employed. This study is the first to report EEG changes
during mindfulness practice with a robot. We believe that our findings, derived
from an ecologically valid setting, can be used in the development of future BCI
systems that are integrated with social robots for health applications.
Comment: accepted for conference RoMAN202
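The hemispheric asymmetry analyzed above is conventionally quantified as a log-ratio of right- vs. left-hemisphere band power. A minimal sketch, under the assumption of this standard log-difference formulation (the naive DFT band-power estimate is illustrative, not the study's exact pipeline):

```python
# Illustrative sketch: band power via a naive DFT, and a conventional
# hemispheric asymmetry index ln(right) - ln(left). This is an assumed
# standard formulation, not necessarily the study's exact method.
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average squared DFT magnitude of `signal` within [f_lo, f_hi] Hz."""
    n = len(signal)
    powers = []
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            powers.append(abs(coeff) ** 2 / n)
    return sum(powers) / len(powers) if powers else 0.0

def asymmetry_index(power_left, power_right):
    """ln(right) - ln(left): positive values indicate relatively
    greater right-sided activity, as reported for the gamma band."""
    return math.log(power_right) - math.log(power_left)
```

With this convention, the "significantly greater right-sided activity" reported for the occipital gamma band corresponds to a positive asymmetry index over occipital electrode pairs.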
Investigating the Impact of a Dual Musical Brain-Computer Interface on Interpersonal Synchrony: A Pilot Study
This study investigated how effective a Musical Brain-Computer Interface
(MBCI) can be in providing feedback about synchrony between two people. Using a
dual EEG setup, we compared two types of musical feedback: one that adapted
in real time based on the inter-brain synchrony between participants
(Neuroadaptive condition), and another that was randomly generated
(Random condition). We evaluated how these two conditions were perceived by 8
dyads (n = 16) and whether the generated music could influence the perceived
connection and EEG synchrony between them. The findings indicated that
Neuroadaptive musical feedback could potentially boost synchrony levels between
people compared to Random feedback, as shown by a significant increase in EEG
phase-locking values. Additionally, the real-time measurement of synchrony was
successfully validated, and the musical neurofeedback was generally well received by
the participants. However, more research is needed for conclusive results due
to the small sample size. This study is a stepping stone towards creating music
that can audibly reflect the level of synchrony between individuals.
Comment: 6 pages, 4 figures
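The phase-locking value (PLV) mentioned above measures how stable the phase difference between two signals is over time: it is the magnitude of the time-averaged unit phasor of the phase difference. A minimal sketch (instantaneous phases would in practice come from a Hilbert transform or wavelet decomposition, which is assumed here rather than implemented):

```python
# Sketch of the phase-locking value between two channels. Inputs are
# instantaneous phase time series in radians (their extraction from raw
# EEG is assumed, e.g. via a Hilbert transform).
import cmath

def phase_locking_value(phases_a, phases_b):
    """PLV in [0, 1]: 1 means a constant phase difference (perfect
    locking); values near 0 mean the phase difference is uniformly
    spread and the signals are unsynchronized."""
    assert len(phases_a) == len(phases_b) and phases_a
    total = sum(cmath.exp(1j * (pa - pb))
                for pa, pb in zip(phases_a, phases_b))
    return abs(total) / len(phases_a)
```

In a hyperscanning setup like the one described, the PLV would be computed between electrode pairs drawn from the two participants' EEG recordings, and a Neuroadaptive-vs.-Random increase in PLV is what the study reports as increased inter-brain synchrony.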
A realistic, multimodal virtual agent for the healthcare domain
We introduce an interactive embodied conversational agent for deployment in the healthcare sector. The agent is operated by a software architecture that integrates speech recognition, dialog management, and speech synthesis, and is embodied by a virtual human face developed using photogrammetry techniques. Together, these features allow for real-time, face-to-face interactions with human users. Although the developed software architecture is domain-independent and highly customizable, the virtual agent will initially be applied to the healthcare domain. Here we give an overview of the different components of the architecture.
A novel hybrid SWARA and VIKOR methodology for supplier selection in an agile environment
The concept of the agile supply chain has been taken into account as a means of achieving a high competitive edge in rapidly changing business environments. Supply partner selection is one of the most appealing issues in agile supply chain management and has recently been studied by academics and practitioners. Due to the large number of factors to be considered, the supplier selection process is a difficult task for every company and can be viewed as a multiple attribute decision-making (MADM) problem. In this paper, a novel hybrid MADM method is proposed for agile supplier selection based on four criteria: performance, cost, flexibility, and technology. Two MADM methods, step-wise weight assessment ratio analysis (SWARA) and Vlse Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR), are applied in the decision-making process. More precisely, in the first phase SWARA is used for determining the importance of each criterion and calculating its weight, and in the second phase VIKOR is used for evaluating the alternatives and ranking the supplier alternatives from best to worst. Finally, a real case study is presented to demonstrate the applicability of the proposed methodology. The model can help managers evaluate and select the best supplier for their organization in light of the company's strategies, resources, and policies.
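The two phases can be sketched in code. This is a minimal illustration of the textbook SWARA and VIKOR formulas, under assumed comparative-importance values and a toy decision matrix; the paper's actual criteria weights and case-study data are not reproduced here.

```python
# Sketch of the two-phase SWARA + VIKOR method. The numeric inputs in
# any usage are illustrative assumptions, not the paper's case study.

def swara_weights(s):
    """SWARA phase: criteria are pre-ranked from most to least important;
    s[j] (j >= 1) is the comparative importance of criterion j relative
    to criterion j-1 (s[0] is unused). Returns normalized weights."""
    k = [1.0] + [1.0 + sj for sj in s[1:]]   # k_j = s_j + 1, k_1 = 1
    q = [1.0]
    for kj in k[1:]:
        q.append(q[-1] / kj)                 # q_j = q_{j-1} / k_j
    total = sum(q)
    return [qj / total for qj in q]          # w_j = q_j / sum(q)

def vikor_rank(matrix, weights, benefit, v=0.5):
    """VIKOR phase: matrix[i][j] = score of alternative i on criterion j;
    benefit[j] is True when higher is better. Returns Q values (lower is
    better). Assumes no criterion is constant across alternatives."""
    m = len(matrix[0])
    best = [max(r[j] for r in matrix) if benefit[j]
            else min(r[j] for r in matrix) for j in range(m)]
    worst = [min(r[j] for r in matrix) if benefit[j]
             else max(r[j] for r in matrix) for j in range(m)]
    S, R = [], []
    for row in matrix:
        terms = [weights[j] * (best[j] - row[j]) / (best[j] - worst[j])
                 for j in range(m)]
        S.append(sum(terms))   # group utility
        R.append(max(terms))   # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    return [v * (S[i] - s_star) / (s_minus - s_star)
            + (1 - v) * (R[i] - r_star) / (r_minus - r_star)
            for i in range(len(matrix))]
```

For example, with four pre-ranked criteria (performance, cost, flexibility, technology) and assumed comparative importances `[0, 0.3, 0.2, 0.1]`, `swara_weights` yields decreasing weights summing to 1, which are then fed into `vikor_rank` together with the supplier score matrix; the supplier with the lowest Q is the compromise choice.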