
    Brain Computer Interface based Robot Design for Physically Disabled Person

    Many people are physically handicapped, and some of them use different technologies to move around. The proposed work implements a robot that is controlled using human brain attention. Brain signals are analyzed using an electrode sensor that monitors eye blinks and attention level. The brain-wave sensor detects these EEG signals and transmits them over Bluetooth. A level analyzer unit (LAU), i.e., a computer system, receives the raw signals and processes them on the MATLAB platform. The robot moves according to the human attention level. An ARM controller is used to build the robot.
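    The control loop described above (attention level plus eye blinks mapped to robot motion) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold value and command names are assumptions, since the abstract does not specify them.

```python
# Hedged sketch of the attention-based control loop: the level analyzer unit
# receives an attention value (0-100) and a blink flag from the headset and
# emits a command for the ARM-based robot.
# ATTENTION_THRESHOLD and the command strings are illustrative assumptions.

ATTENTION_THRESHOLD = 60  # assumed cutoff; the paper does not give a value

def decode_command(attention: int, blink: bool) -> str:
    """Map an attention level and a blink event to a robot command."""
    if blink:
        return "toggle"  # in this sketch, a blink toggles start/stop
    if attention >= ATTENTION_THRESHOLD:
        return "forward"
    return "stop"
```

    In a real system this function would run once per sample window received over the Bluetooth link.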

    Brain Computer Interface for Gesture Control of a Social Robot: an Offline Study

    Brain-computer interfaces (BCIs) provide promising applications in neuroprosthesis and neurorehabilitation by controlling computers and robotic devices based on the patient's intentions. Here, we have developed a novel BCI platform that controls a personalized social robot using noninvasively acquired brain signals. Scalp electroencephalogram (EEG) signals are collected from a user in real time during tasks of imaginary movements. The imagined body kinematics are decoded using a regression model to calculate the user-intended velocity. Then, the decoded kinematic information is mapped to control the gestures of a social robot. The platform may be utilized as a human-robot interaction framework by combining it with neurofeedback mechanisms to enhance the cognitive capability of persons with dementia. Comment: presented at the 25th Iranian Conference on Electrical Engineering (ICEE).
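    The two-stage decoding pipeline above (linear regression from EEG features to velocity, then velocity to gesture) can be sketched as follows; the weights, thresholds, and gesture names are assumptions for illustration, not the study's trained model.

```python
# Hedged sketch of the kinematic decoder: a linear regression model maps a
# vector of EEG features to a user-intended velocity, which is then binned
# into a coarse gesture command for the social robot.
# Weights, bias, thresholds, and gesture names are illustrative assumptions.

def decode_velocity(features, weights, bias=0.0):
    """Linear decoder: v = w . x + b."""
    return sum(w * x for w, x in zip(weights, features)) + bias

def velocity_to_gesture(v: float) -> str:
    """Map a decoded velocity to a gesture command (assumed thresholds)."""
    if v > 0.5:
        return "raise_arm"
    if v < -0.5:
        return "lower_arm"
    return "hold"
```

    The weights here stand in for the regression coefficients the study would fit offline from recorded imagery sessions.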

    Sensor-based artificial intelligence to support people with cognitive and physical disorders

    A substantial portion of the world's population deals with disability. Many disabled people do not have equal access to healthcare, education, and employment opportunities, do not receive specific disability-related services, and experience exclusion from everyday life activities. One way to face these issues is through healthcare technologies. Unfortunately, disabilities are numerous, diverse, and heterogeneous, and require ad-hoc, personalized solutions. Moreover, designing and implementing effective and efficient technologies is a complex and expensive process involving challenging issues, including usability and acceptability. The work presented in this thesis aims to improve the current state of technologies available to support people with disorders affecting the mind or the motor system by proposing the use of sensors coupled with signal-processing methods and artificial intelligence algorithms.
    The first part of the thesis focuses on mental state monitoring. We investigated the application of a low-cost portable electroencephalography sensor and supervised learning methods to evaluate a person's attention. The analysis of attention has several purposes, including the diagnosis and rehabilitation of children with attention-deficit/hyperactivity disorder. A novel dataset was collected from volunteers during an image annotation task and used for the experimental evaluation with different machine learning techniques.
    In the second part of the thesis, we focus on limitations related to motor disability. We introduce the use of graph neural networks to process high-density electromyography data for recognizing the movement/grasping intentions of upper-limb amputees, enabling the use of robotic prostheses. High-density electromyography sensors can simultaneously acquire electromyography signals from different parts of the muscle, providing a large amount of spatio-temporal information that needs to be properly exploited to improve recognition accuracy. The approach was investigated using a recent real-world dataset consisting of electromyography signals collected from 20 volunteers while performing 65 different gestures.
    In the final part of the thesis, we developed a prototype of a versatile interactive system that can be useful to people with different types of disabilities. The system maintains a food diary for frail people with nutrition problems, such as people with neurocognitive diseases or frail elderly people, who may have difficulty keeping one themselves due to forgetfulness or physical issues. The novel architecture automatically recognizes the preparation of food at home, in a privacy-preserving and unobtrusive way, exploiting air-quality data acquired from a commercial sensor, statistical feature extraction, and a deep neural network. A robotic system prototype simplifies the interaction with the inhabitant. For this work, a large dataset of annotated sensor data, acquired over a period of 8 months from different individuals in different homes, was collected. Overall, the results achieved in the thesis are promising and pave the way for several real-world implementations and future research directions.
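    The attention-monitoring pipeline described above (statistical features from an EEG window fed to a supervised classifier) can be sketched as follows. The tiny feature set and the nearest-centroid classifier are illustrative stand-ins for the features and learners actually evaluated in the thesis.

```python
# Hedged sketch of the attention-monitoring pipeline: extract simple
# statistical features from one EEG window, then assign the label of the
# closest class centroid learned from training data.
# Feature choice, labels, and centroids are illustrative assumptions.

from statistics import mean, pstdev

def extract_features(window):
    """Small feature vector from one EEG sample window (assumed features)."""
    return [mean(window), pstdev(window)]

def nearest_centroid(features, centroids):
    """Return the label whose centroid is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(features, centroids[label]))
```

    In practice the centroids (or a richer model such as an SVM) would be fit on labeled windows from the annotation-task dataset.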

    Wearable Brain-Computer Interface Instrumentation for Robot-Based Rehabilitation by Augmented Reality

    An instrument for remote control of a robot by a wearable brain-computer interface (BCI) is proposed for rehabilitating children with attention-deficit/hyperactivity disorder (ADHD). Augmented reality (AR) glasses generate flickering stimuli, and a single-channel electroencephalographic BCI detects the elicited steady-state visual evoked potentials (SSVEPs). This approach benefits from the robustness of SSVEPs while keeping the robot's movements in view. Together with the lack of training, the single channel maximizes the device's wearability, which is fundamental for acceptance by children with ADHD. Effectively controlling the movements of a robot through a new channel enhances rehabilitation engagement and effectiveness. A case study at an accredited rehabilitation center on ten healthy adult subjects showed an average accuracy higher than 83%, with an information transfer rate (ITR) of up to 39 b/min. Preliminary further tests on four ADHD patients between six and eight years old provided highly positive feedback on device acceptance and attentional performance.
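    Single-channel SSVEP detection of the kind described above is often done by comparing signal power at each flicker frequency. A minimal sketch using the Goertzel algorithm follows; the sampling rate and stimulus frequencies are assumptions, and the paper's actual detection method may differ.

```python
# Hedged sketch of single-channel SSVEP detection: estimate the power of the
# EEG window at each AR flicker frequency (Goertzel algorithm) and pick the
# strongest, i.e. the stimulus the user is attending to.
# Sampling rate and frequency set are illustrative assumptions.

import math

def goertzel_power(samples, fs, freq):
    """Power of `samples` at `freq` Hz via the Goertzel algorithm."""
    n = len(samples)
    k = round(n * freq / fs)           # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def detect_ssvep(samples, fs, stimulus_freqs):
    """Return the stimulus frequency with the highest power in the window."""
    return max(stimulus_freqs, key=lambda f: goertzel_power(samples, fs, f))
```

    A real detector would also compare the winning power against a noise floor before issuing a robot command.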

    Implementation of EEG controlled technology to modular self-reconfigurable robot with multiple configurations

    Electroencephalogram (EEG) technology has reached a new level of application with brain-computer interface (BCI) systems and is no longer restricted to medical instrumentation. The concept of modular self-reconfigurable (MSR) robot control is familiar from science-fiction movies. Combining the two technologies opens a frontier of new alternatives for improving the control strategy of self-reconfigurable modular robots. The main problem is that EEG-based BCI systems are usually implemented on mobile robots, robot manipulators, and sometimes humanoid robots, but not on MSR robots, which perform their tasks cooperatively using more than one robot module. Hence, implementing an EEG-based BCI system on an MSR robot is needed to ensure high-accuracy control and to assess the multiple configurations propagated by the MSR robot regardless of external stimulation. Therefore, it is important to analyse society's perspective on BCI-controlled robot technologies, to establish control, and to assess the multiple configurations propagated by the Dtto MSR robot based on the EEG-based BCI system. Finally, the established system needs to be analysed in terms of versatility with respect to training availability, age, and robot state. The proposed method uses the Lab Streaming Layer (LSL) and a Python script as mediators. The system was developed using OpenViBE software, in which a motor imagery BCI was created to receive and process the EEG data in real time. The main idea is to associate a direction (Left, Right, Up, or Down) with hand and feet motor imagery as a command for controlling the Dtto MSR robot. Based on the findings, the SVM classifier produces better control accuracy for the motor imagery system.
    The study also shows that an EEG acquisition headset with multiple electrodes is necessary to achieve better control accuracy for the motor imagery system. A deeper analysis of the versatility of the BCI-controlled MSR robot is based on the three chosen factors. In simulation, the highest success rate per imagery class was 27.5% (Left imagery), and the highest per subject group was 30% (young trained subjects). With the real robot, the highest success rate per imagery class was 37.5% (Left and Right imagery), and the highest per subject group was 38.33% (young trained subjects). The analysis shows that age and robot state are significant for the control success rate of MSR robots by the BCI system. Training availability is not significant on its own, but it interacts with the other factors and influences the control success rate. Overall, integrating the BCI system with the MSR robot to control multiple robot modules in real time is achievable and produced positive results as intended, even though they were not as high as expected. In the future, P300 or SSVEP brain signals could be used for more degrees of freedom, and more efficient communication between the BCI system and the MSR robot could be implemented.
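    The mediation layer described above, which receives a decoded motor-imagery class and forwards a direction command to the robot, can be sketched as a simple lookup. The label strings and the fallback command are assumptions for illustration; the actual OpenViBE/LSL message format is not given in the abstract.

```python
# Hedged sketch of the Python mediator between the OpenViBE motor-imagery
# classifier and the Dtto MSR robot: each decoded imagery label is translated
# into one of the four direction commands (Left, Right, Up, Down).
# The label names and the "Hold" fallback are illustrative assumptions.

MI_TO_DIRECTION = {
    "left_hand": "Left",
    "right_hand": "Right",
    "both_hands": "Up",
    "both_feet": "Down",
}

def route_command(mi_label: str) -> str:
    """Translate one decoded motor-imagery label into a robot direction."""
    # Unknown or low-confidence labels leave the robot where it is.
    return MI_TO_DIRECTION.get(mi_label, "Hold")
```

    In the study's setup, labels like these would arrive over an LSL stream and the returned direction would be sent on to the robot modules.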

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles of human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology, including better EEG devices.

    Application of Brain-Computer Interface Technology as a Video Game Controller

    Get PDF
    Nowadays, control in video games is based on the use of a mouse, a keyboard, and other controllers. A brain-computer interface (BCI) is a special interface that allows direct communication between the brain and an appropriate external device. Brain-computer interface technology can be used for commercial purposes, for example as a replacement for a keyboard, mouse, or other controller. This article presents a method of controlling video games using the EMOTIV EPOC+ neuro-headset as a controller.

    Brain-Computer Interface and Motor Imagery Training: The Role of Visual Feedback and Embodiment

    Controlling a brain-computer interface (BCI) is a difficult task that requires extensive training. Particularly in the case of motor imagery BCIs, users may need several training sessions before they learn how to generate the desired brain activity and reach an acceptable performance. A typical training protocol for such BCIs includes execution of a motor imagery task by the user, followed by presentation of an extending bar or a moving object on a computer screen. In this chapter, we discuss the importance of visual feedback that resembles human actions, the effect of human factors such as confidence and motivation, and the role of embodiment in the learning process of a motor imagery task. Our results from a series of experiments in which users operated a humanlike android robot via BCI confirm that realistic visual feedback can induce a sense of embodiment, which promotes significant learning of the motor imagery task in a short amount of time. We review the impact of humanlike visual feedback on the optimized modulation of brain activity by BCI users.