14 research outputs found

    A Wireless Multifunctional SSVEP-Based Brain Computer Interface Assistive System

    Several kinds of brain-computer interface (BCI) systems have been proposed to compensate for the lack of medical technology for assisting patients who have lost the motor functions needed to communicate with the outside world. However, most of the proposed systems are limited by their non-portability, impracticality and inconvenience because they adopt wired or invasive electroencephalography (EEG) acquisition devices. Another common limitation is the small number of functions provided, owing to the difficulty of integrating multiple functions into one BCI system. In this study, we propose a wireless, non-invasive and multifunctional assistive system which integrates a steady-state visually evoked potential (SSVEP)-based BCI and a robotic arm to help patients feed themselves. Patients are able to control the robotic arm via the BCI to serve themselves food. Three other functions, video entertainment, video calling, and active interaction, are also integrated; this is achieved by designing a functional menu and integrating multiple subsystems. A refinement decision-making mechanism is incorporated to ensure the accuracy and applicability of the system. Fifteen participants were recruited to validate the usability and performance of the system. The average accuracy and information transfer rate (ITR) achieved are 90.91% and 24.94 bits per minute, respectively. The feedback from the participants demonstrates that this assistive system can significantly improve the quality of daily life.
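
    The reported ITR is consistent with the standard Wolpaw formula, which combines the number of selectable commands, the classification accuracy, and the selection time. Below is a minimal sketch of that formula; the paper does not state its exact ITR computation, target count, or trial length, so the example numbers are purely illustrative.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits per minute."""
    n, p = n_targets, accuracy
    bits_per_trial = math.log2(n)
    if p < 1.0:  # the accuracy terms vanish at p == 1
        bits_per_trial += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits_per_trial * (60.0 / trial_seconds)

# Hypothetical values, for illustration only:
print(wolpaw_itr(n_targets=8, accuracy=0.9091, trial_seconds=5.0))
```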

    Noncontact brain-computer interface based on steady-state pupil light reflex using independent bilateral eyes stimulation

    Steady-state visual evoked potential (SSVEP), which uses blinking light stimulation to estimate the attended target, is a known communication technique for people with severe motor disabilities such as ALS and locked-in syndrome. Recently, it was reported that an oscillation of pupil diameter driven by the pupillary light reflex can be observed when attending to a target blinking at a constant frequency. This suggests the possibility of a noncontact BCI using pupillometers as alternatives to scalp electrodes. In this study, we increase the number of communication channels by stimulating the two eyes independently or in combination with different frequencies; the number of selectable targets thus becomes twice the number of frequencies. Experiments were conducted with three healthy participants. We prepared six target patterns from three frequencies and detected the attended target using the correlation coefficient between the power spectra of the pupil diameter and the stimulus signal. An average classification accuracy of approximately 83.4% was achieved across the three participants. The findings of this study demonstrate the feasibility of noncontact BCI systems.
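
    The decision rule described above, choosing the stimulus whose power spectrum best correlates with that of the pupil-diameter signal, can be sketched as follows. Equal-length signals, mean removal, and a plain FFT periodogram are assumptions; the paper's exact preprocessing is not reproduced here.

```python
import numpy as np

def detect_target(pupil: np.ndarray, stimuli: list[np.ndarray]) -> int:
    """Return the index of the stimulus whose power spectrum correlates
    best with the pupil-diameter power spectrum (a sketch of the
    correlation-based decision rule)."""
    def power_spectrum(x: np.ndarray) -> np.ndarray:
        x = x - x.mean()  # remove DC before the FFT
        return np.abs(np.fft.rfft(x)) ** 2

    ps_pupil = power_spectrum(pupil)
    scores = [np.corrcoef(ps_pupil, power_spectrum(s))[0, 1] for s in stimuli]
    return int(np.argmax(scores))
```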

    An SSVEP Stimuli Design using Real-time Camera View with Object Recognition

    Most SSVEP-based BCI stimuli are pre-defined white blocks, a design that offers little flexibility in real life. To align the flickers with the locations, types, and configurations of objects in the real world, this paper proposes an SSVEP-based BCI that uses a real-time camera view with an object recognition algorithm to provide an intuitive BCI for users. A deep learning-based object recognition algorithm locates the objects in the online camera view from a depth camera. After the bounding boxes of the objects are estimated, the SSVEP flickers are positioned to overlap the object locations. An overlapping FFT and a support vector machine (SVM) are used to classify the EEG signals into the corresponding classes. In the experiments, the classification rate for the camera-view scenario exceeds 94.1%. The results show that the proposed SSVEP stimulus design can create intuitive and reliable human-machine interaction, and can enable users with motor disabilities to interact with assistive devices such as robotic arms and wheelchairs.
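
    A minimal sketch of overlapping-FFT features feeding an SVM, as described above; the window length, step size, kernel choice, and single-channel simplification are assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def fft_features(eeg: np.ndarray, win: int = 250, step: int = 125) -> np.ndarray:
    """Stack magnitude spectra of overlapping windows from one
    single-channel EEG trial into a feature vector."""
    windows = [eeg[i:i + win] for i in range(0, len(eeg) - win + 1, step)]
    return np.concatenate([np.abs(np.fft.rfft(w)) for w in windows])

# Hypothetical usage: trials is a list of 1-D EEG arrays, labels their
# flicker-frequency classes.
# X = np.stack([fft_features(t) for t in trials])
# clf = SVC(kernel="rbf").fit(X, labels)
```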

    Cognitive conflict in virtual reality based object selection task: an EEG study to understand brain dynamics associated with cognitive conflict in a virtual reality 3D object selection task

    University of Technology Sydney, Faculty of Engineering and Information Technology. Cognitive conflict is an essential part of everyday interaction with the environment and is often characterized as the brain’s action monitoring and control system, which activates when predictions based on previous experience of the environment do not match the knowledge derived from sensory input during cognitive processing. Although cognitive conflict can be seen as an essential part of learning about the environment, it requires the brain to assign more cognitive resources, such as attention, memory, and engagement, than non-conflicting conditions do. In this work, cognitive conflict is evaluated in a three-dimensional (3D) object selection task in a virtual reality (VR) environment by assessing the effects of visual appearance, task completion time, and movement velocity during interaction, and their implications for the sense of agency and presence in VR. An electroencephalogram (EEG)-based approach is used together with behavioral information. The results show that the amplitude of a negative event-related potential (50-150 ms), defined as prediction error negativity (PEN), correlates with the realism of the rendering style of the virtual hands during interaction. PEN amplitudes were also found to be significantly more pronounced in slow trials than in fast trials. Based on these findings, a closed-loop BCI system was designed to assess the effect of cognitive conflict in 3D object selection and to provide metrics that can improve users’ sense of agency in VR. These findings suggest that a realistic representation of the user’s hand, a compatible task completion time, and appropriate hand movement velocity are essential for integrating information from the visual and proprioceptive systems during interaction, avoiding cognitive conflict due to a mismatch between action and expected feedback. They also suggest that assessing cognitive conflict via PEN can improve the overall experience of the 3D object selection task in a VR environment. Collectively, these findings offer a glimpse into how the brain dynamics behind interaction work, with implications for content assessment in the VR development industry.
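
    One simple way to quantify the PEN described above is the mean event-related potential amplitude in the 50-150 ms post-event window. The sketch below assumes epoched data time-locked to the event at sample zero; it is not the thesis's published analysis code.

```python
import numpy as np

def pen_amplitude(epochs: np.ndarray, fs: float,
                  t0: float = 0.05, t1: float = 0.15) -> float:
    """Mean amplitude of the average ERP in the t0-t1 window (seconds).
    epochs: (n_trials, n_samples), time-locked to the event at sample 0."""
    erp = epochs.mean(axis=0)                    # average over trials
    return float(erp[int(t0 * fs):int(t1 * fs)].mean())
```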

    Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface

    Brain-computer interface technologies, such as steady-state visually evoked potential, P300, and motor imagery, are methods of communication between the human brain and external devices. Motor imagery-based brain-computer interfaces are popular because they avoid unnecessary external stimuli. Although feature extraction methods have been demonstrated in several machine intelligence systems in motor imagery-based brain-computer interface studies, their performance remains unsatisfactory. There is increasing interest in the use of fuzzy integrals, namely the Choquet and Sugeno integrals, which are appropriate for applications in which data fusion must account for possible interactions among the data. To enhance the classification accuracy of brain-computer interfaces, we adopted fuzzy integrals, applied after the classification step of a traditional brain-computer interface, to consider possible links between the data. We then proposed a novel classification framework called the multimodal fuzzy fusion-based brain-computer interface system. Ten volunteers performed a motor imagery-based brain-computer interface experiment while electroencephalography signals were acquired simultaneously. The multimodal fuzzy fusion-based system enhanced performance compared with traditional brain-computer interface systems. Furthermore, when the motor imagery-relevant alpha and beta EEG frequency bands were used as input features, the system achieved its highest accuracy, up to 78.81% and 78.45% with the Choquet and Sugeno integrals, respectively. Herein, we present a novel concept for enhancing brain-computer interface systems that adopts fuzzy integrals, especially in the fusion step for classifying brain-computer interface commands.
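
    The fusion step rests on the discrete Choquet integral, which weights classifier outputs, sorted in ascending order, by a fuzzy measure defined over subsets of classifiers. A minimal sketch follows; the measure values and classifier names are hypothetical, not the paper's fitted parameters.

```python
def choquet_integral(scores: dict[str, float], measure) -> float:
    """Discrete Choquet integral of per-classifier scores.
    `measure` maps a frozenset of classifier names to [0, 1],
    with measure(all names) == 1 and measure(empty set) == 0."""
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for name, value in sorted(scores.items(), key=lambda kv: kv[1]):
        total += (value - prev) * measure(frozenset(remaining))
        prev = value
        remaining.remove(name)
    return total

# Hypothetical fusion of alpha- and beta-band classifier confidences:
g = {frozenset({"alpha", "beta"}): 1.0, frozenset({"alpha"}): 0.6,
     frozenset({"beta"}): 0.5, frozenset(): 0.0}
print(choquet_integral({"alpha": 0.7, "beta": 0.4}, g.__getitem__))  # 0.58
```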

    A Closed-Loop AR-based BCI for Real-World System Control


    Fully portable and wireless universal brain-machine interfaces enabled by flexible scalp electronics and deep-learning algorithm

    Variation among human brains makes it difficult to implement electroencephalography (EEG) in universal brain-machine interfaces (BMI). Conventional EEG systems typically suffer from motion artifacts, extensive preparation time, and bulky equipment, while existing EEG classification methods require training on a per-subject or per-session basis. Here, we introduce a fully portable, wireless, flexible scalp electronic system incorporating a set of dry electrodes and a flexible membrane circuit. Time-domain analysis using convolutional neural networks allows for accurate, real-time classification of steady-state visually evoked potentials over the occipital lobe. Simultaneous comparison of EEG signals with two commercial systems demonstrates the improved performance of the flexible electronics, with a significant reduction in noise and electromagnetic interference. The two-channel scalp electronic system achieves a high information transfer rate (122.1 ± 3.53 bits per minute) with six human subjects, allowing wireless, real-time, universal EEG classification for an electronic wheelchair, a motorized vehicle, and a keyboard-less presentation.
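
    A time-domain convolutional classifier over raw two-channel EEG windows, in the spirit of the system described above, might look like the sketch below; the layer sizes, kernel widths, window length, and class count are assumptions, since the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class SSVEPNet(nn.Module):
    """Minimal 1-D CNN for two-channel SSVEP windows (illustrative only)."""
    def __init__(self, n_channels: int = 2, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=32, padding=16),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=16, padding=8),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) raw EEG window
        return self.classifier(self.features(x).flatten(1))

logits = SSVEPNet()(torch.randn(8, 2, 250))  # -> shape (8, 4)
```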

    Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm

    Robotics has been successfully applied in the design of collaborative robots that assist people with motor disabilities. However, human-machine interaction is difficult for those with severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the steady-state visually evoked potential (SSVEP) paradigm. A cross-platform application was developed in C++ and used, together with the open-source software OpenViBE, to control a Stäubli TX60 robot arm. Communication between OpenViBE and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics. For the processing of the EEG signals, Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier were used. Five healthy subjects tested the BCI. This work enabled the communication and integration of a well-known BCI development platform, OpenViBE, with the control software of a specific robot arm, the Stäubli TX60, using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI using a reduced number of dry electrodes, facilitating the use of the system. Funding for open access charge: Universitat Politecnica de Valencia. Quiles Cucarella, E.; Dadone, J.; Chio, N.; García Moreno, E. (2022). Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors. 22(13):1-26. https://doi.org/10.3390/s22135000
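
    The CSP + LDA chain described above maps directly onto standard Python tooling; the sketch below uses MNE's CSP implementation and scikit-learn's LDA rather than the OpenViBE components, so it is an analogous pipeline, not the authors' code.

```python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import Pipeline

# X: (n_trials, n_channels, n_samples) band-passed EEG epochs; y: labels.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),  # spatial filters + log-variance
    ("lda", LinearDiscriminantAnalysis()),
])
# clf.fit(X_train, y_train)
# predictions = clf.predict(X_test)
```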

    A Human-Centric Metaverse Enabled by Brain-Computer Interface: A Survey

    The growing interest in the Metaverse has generated momentum for members of academia and industry to innovate toward realizing the Metaverse world. The Metaverse is a unique, continuous, and shared virtual world where humans embody a digital form within an online platform. Through a digital avatar, Metaverse users should have a perceptual presence within the environment and be able to interact with and control the virtual world around them. Thus, a human-centric design is a crucial element of the Metaverse. The human users are not only the central entity but also the source of multi-sensory data that can be used to enrich the Metaverse ecosystem. In this survey, we study the potential applications of Brain-Computer Interface (BCI) technologies that can enhance the experience of Metaverse users. By directly communicating with the human brain, the most complex organ in the human body, BCI technologies hold the potential for the most intuitive human-machine system, operating at the speed of thought. Through this neural pathway, BCI technologies can enable various innovative applications for the Metaverse, such as user cognitive state monitoring, digital avatar control, virtual interactions, and imagined speech communication. This survey first outlines the fundamental background of the Metaverse and BCI technologies. We then discuss the current challenges of the Metaverse that could potentially be addressed by BCI, such as motion sickness when users experience virtual environments or the negative emotional states of users in immersive virtual applications. After that, we propose and discuss a new research direction called the Human Digital Twin, in which digital twins can create an intelligent and interactable avatar from the user's brain signals. We also present the challenges and potential solutions in synchronizing and communicating between virtual and physical entities in the Metaverse.