
    Robot navigation using brain-computer interfaces


    Biomimetic Based EEG Learning for Robotics Complex Grasping and Dexterous Manipulation

    There have been tremendous efforts to understand the biological nature of human grasping, so that it can be learned and transferred to prosthetic, robotic, and dexterous grasping applications. Several biomimetic methods and techniques have been adopted and applied to analytically comprehend how humans perform grasping, in order to duplicate human knowledge. A major topic for further study is decoding the EEG brainwaves that result from moving the fingers and other body parts. To accomplish this, a number of phases are performed, including recording, pre-processing, filtration, and interpretation of the waves. Two phases in particular have received substantial research attention: the classification and the decoding of such massive and complex brain waves, as they are two important steps towards understanding patterns during grasping. In this respect, the fundamental objective of this research is to demonstrate how to employ advanced pattern recognition methods, such as fuzzy c-means clustering, to understand the resulting EEG brain waves and thereby control a prosthetic or robotic hand based on sets of detected EEG brainwaves. Among the many decoding and classification methods and techniques, we look into fuzzy-based clustering blended with principal component analysis (PCA) to support the decoding mechanism. EEG brainwaves recorded during grasping and manipulation were used for this analysis, involving the movement of the five fingers during a defined grasping task. The study found that decoding all human finger motions is not a straightforward task, due to the complexity of grasping tasks. However, the adopted analysis was able to classify and identify the closely related fundamental events performed during a simple grasping task.
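    The clustering step described above can be illustrated with a short sketch. The following is a minimal example, not the paper's exact pipeline: EEG feature vectors (here random placeholders) are reduced with principal component analysis and then grouped with a plain-numpy fuzzy c-means implementation; the feature dimensions, cluster count, and fuzzifier m are illustrative assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): PCA for dimensionality
# reduction of EEG feature vectors, followed by fuzzy c-means clustering.
import numpy as np
from sklearn.decomposition import PCA

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, seed=0):
    """Plain-numpy fuzzy c-means; returns centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)      # renormalize memberships
    return centers, U

# X: one row of EEG features per epoch (placeholder for real recordings).
X = np.random.rand(200, 64)
X_low = PCA(n_components=10).fit_transform(X)  # PCA before clustering
centers, U = fuzzy_c_means(X_low, n_clusters=5)
labels = U.argmax(axis=1)                      # hard label per grasping event
```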

    Brain-computer interface for robot control with eye artifacts for assistive applications

    Human-robot interaction is a rapidly developing field, and robots have been taking more active roles in our daily lives. Patient care is one of the fields in which robots are becoming more present, especially for people with disabilities. People with neurodegenerative disorders might not consciously or voluntarily produce movements other than those involving the eyes or eyelids. In this context, Brain-Computer Interface (BCI) systems present an alternative way to communicate or interact with the external world. In order to improve the lives of people with disabilities, this paper presents a novel BCI to control an assistive robot with the user's eye artifacts. In this study, eye artifacts that contaminate the electroencephalogram (EEG) signals are considered a valuable source of information thanks to their high signal-to-noise ratio and intentional generation. The proposed methodology detects eye artifacts from EEG signals through characteristic shapes that occur during the events. Lateral movements are distinguished by their ordered peak and valley formation and the opposite phase of the signals measured at the F7 and F8 channels. To the best of the authors' knowledge, this is the first method to use this behavior to detect lateral eye movements. For blink detection, the authors propose a double-thresholding method to catch weak blinks as well as regular ones, differentiating it from other algorithms in the literature, which normally use only one threshold. Real-time detected events with their virtual time stamps are fed into a second algorithm to further distinguish double and quadruple blinks from single blinks by occurrence frequency. After testing the algorithm offline and in real time, it was implemented on the device. The created BCI was used to control an assistive robot through a graphical user interface. Validation experiments with 5 participants show that the developed BCI is able to control the robot.
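    The two detection ideas lend themselves to a compact sketch. The snippet below is a hedged illustration, not the authors' code: the double-thresholding keeps one event per excursion above the weaker threshold and labels it by its peak amplitude, while lateral movements are flagged where the F7 and F8 channels deviate with opposite sign; the threshold values and units are assumptions.

```python
# Hedged sketch of the two detection ideas above; threshold values and
# channel units (uV) are illustrative assumptions, not the authors'.
import numpy as np

STRONG, WEAK = 120.0, 60.0          # assumed double thresholds for blinks

def detect_blinks(fp1, strong=STRONG, weak=WEAK):
    """Double thresholding: the weak threshold catches faint blinks, the
    strong one separates regular ones; one event per excursion."""
    fp1 = np.asarray(fp1, float)
    events, start = [], None
    for i, v in enumerate(fp1):
        if start is None and v > weak:
            start = i                                # excursion begins
        elif start is not None and v < weak:
            peak = fp1[start:i].max()
            events.append((start, "regular" if peak > strong else "weak"))
            start = None                             # excursion ends
    return events

def detect_lateral(f7, f8, thr=40.0):
    """Lateral eye movements show opposite phase at F7 and F8: a peak on
    one channel coincides with a valley on the other."""
    f7, f8 = np.asarray(f7, float), np.asarray(f8, float)
    right = np.flatnonzero((f7 > thr) & (f8 < -thr))   # assumed direction
    left = np.flatnonzero((f7 < -thr) & (f8 > thr))
    return left, right
```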

    Brain Computer Interface for Gesture Control of a Social Robot: an Offline Study

    Brain-computer interfaces (BCIs) provide promising applications in neuroprosthesis and neurorehabilitation by controlling computers and robotic devices based on the patient's intentions. Here, we have developed a novel BCI platform that controls a personalized social robot using noninvasively acquired brain signals. Scalp electroencephalogram (EEG) signals are collected from a user in real time during tasks of imaginary movements. The imagined body kinematics are decoded using a regression model to calculate the user-intended velocity. Then, the decoded kinematic information is mapped to control the gestures of a social robot. The platform may be utilized as a human-robot interaction framework by combining it with neurofeedback mechanisms to enhance the cognitive capability of persons with dementia.
    Comment: Presented at the 25th Iranian Conference on Electrical Engineering (ICEE)
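    As a rough illustration of the decoding stage, the sketch below maps EEG features to a user-intended velocity with a regression model and then to a discrete robot gesture. The use of ridge regression, the feature shapes, and the gesture names are assumptions for illustration; the paper's exact decoder may differ.

```python
# Illustrative sketch: a regression model decodes imagined kinematics
# (velocity) from EEG features; decoded velocity is mapped to a gesture.
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder training data: EEG features and the velocities the user
# imagined during calibration tasks.
X_train = np.random.rand(1000, 32)        # one feature vector per EEG window
v_train = np.random.rand(1000, 2) - 0.5   # imagined (vx, vy) per window

decoder = Ridge(alpha=1.0).fit(X_train, v_train)

def eeg_to_gesture(features):
    """Decode user-intended velocity, then pick a discrete gesture."""
    vx, vy = decoder.predict(features[None])[0]
    if abs(vx) > abs(vy):
        return "turn_right" if vx > 0 else "turn_left"
    return "look_up" if vy > 0 else "look_down"

print(eeg_to_gesture(np.random.rand(32)))
```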

    Steering a Tractor by Means of an EMG-Based Human-Machine Interface

    An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from Emotiv, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver’s scalp: eyes looking to the right with the jaw opened, eyes looking to the right with the jaw closed, eyes looking to the left with the jaw opened, and eyes looking to the left with the jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than that of manual guidance, which was in turn lower than that of autonomous GPS guidance; the computed standard deviations of the error from the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviations of the manual guidance and the EMG-based HMI guidance differed by only 7 cm, a difference that is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering.
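    For concreteness, the accuracy metric reported above (the standard deviation of the error from the desired straight-line trajectory) can be computed as the spread of the signed cross-track error, as in the sketch below; the position-log format and coordinates are assumptions.

```python
# Worked example of the reported metric: the standard deviation of the
# lateral (cross-track) error to the desired straight-line trajectory.
import numpy as np

def cross_track_std(points, a, b):
    """points: Nx2 recorded positions; a, b: two points defining the
    desired straight line. Returns the std of the signed lateral error."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = (b - a) / np.linalg.norm(b - a)        # unit direction of the line
    rel = np.asarray(points, float) - a
    err = rel[:, 0] * d[1] - rel[:, 1] * d[0]  # signed perpendicular distance
    return err.std()

# With logs from the three systems, values on the order of 0.16 m (EMG-based
# HMI), 0.09 m (manual), and 0.04 m (GPS) would match the figures above.
```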

    Sensor-based artificial intelligence to support people with cognitive and physical disorders

    A substantial portion of the world's population deals with disability. Many disabled people do not have equal access to healthcare, education, and employment opportunities, do not receive specific disability-related services, and experience exclusion from everyday life activities. One way to face these issues is through the use of healthcare technologies. Unfortunately, disabilities are highly diverse and heterogeneous, requiring ad-hoc and personalized solutions. Moreover, the design and implementation of effective and efficient technologies is a complex and expensive process involving challenging issues, including usability and acceptability. The work presented in this thesis aims to improve the current state of technologies available to support people with disorders affecting the mind or the motor system by proposing the use of sensors coupled with signal processing methods and artificial intelligence algorithms. The first part of the thesis focused on mental state monitoring. We investigated the application of a low-cost portable electroencephalography sensor and supervised learning methods to evaluate a person's attention. Indeed, the analysis of attention has several purposes, including the diagnosis and rehabilitation of children with attention-deficit/hyperactivity disorder. A novel dataset was collected from volunteers during an image annotation task and used for the experimental evaluation with different machine learning techniques. Then, in the second part of the thesis, we focused on addressing limitations related to motor disability. We introduced the use of graph neural networks to process high-density electromyography data for upper limb amputees’ movement/grasping intention recognition, enabling the use of robotic prostheses. High-density electromyography sensors can simultaneously acquire electromyography signals from different parts of the muscle, providing a large amount of spatio-temporal information that needs to be properly exploited to improve recognition accuracy. The investigation of the approach was conducted using a recent real-world dataset consisting of electromyography signals collected from 20 volunteers while performing 65 different gestures. In the final part of the thesis, we developed a prototype of a versatile interactive system that can be useful to people with different types of disabilities. The system can maintain a food diary for frail people with nutrition problems, such as people with neurocognitive diseases or frail elderly people, who may have difficulties due to forgetfulness or physical issues. The novel architecture automatically recognizes the preparation of food at home, in a privacy-preserving and unobtrusive way, exploiting air quality data acquired from a commercial sensor, statistical feature extraction, and a deep neural network. A robotic system prototype is used to simplify the interaction with the inhabitant. For this work, a large dataset of annotated sensor data, acquired over a period of 8 months from different individuals in different homes, was collected. Overall, the results achieved in the thesis are promising and pave the way for several real-world implementations and future research directions.
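    The graph neural network idea from the second part can be sketched briefly: treating the high-density electrode grid as a graph whose edges connect neighboring channels, a simple graph-convolution layer aggregates information across the grid before gesture classification. The layer sizes, 8x8 grid layout, and adjacency construction below are illustrative assumptions, not the thesis' exact architecture.

```python
# Minimal sketch of processing high-density EMG with a graph neural
# network: electrodes are nodes, grid adjacency defines edges, and a
# simple graph-convolution layer aggregates neighboring channels.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        # Normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.shape[0])
        d = a.sum(1).rsqrt()
        self.register_buffer("a_norm", d[:, None] * a * d[None, :])
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                 # x: (batch, nodes, features)
        return torch.relu(self.lin(self.a_norm @ x))

# Assumed 64 electrodes in an 8x8 grid; each node carries EMG features.
adj = torch.zeros(64, 64)
for r in range(8):
    for c in range(8):
        i = r * 8 + c
        if c < 7: adj[i, i + 1] = adj[i + 1, i] = 1   # horizontal neighbor
        if r < 7: adj[i, i + 8] = adj[i + 8, i] = 1   # vertical neighbor

layer = SimpleGCNLayer(16, 32, adj)
x = torch.randn(4, 64, 16)                # batch of 4 HD-EMG windows
h = layer(x)                              # (4, 64, 32) node embeddings
# Pool over nodes and classify into the 65 gestures mentioned above:
logits = nn.Linear(32, 65)(h.mean(dim=1))
```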

    Movement intention detection using neural network for quadriplegic assistive machine

    Biomedical signals have lately been a hot topic for researchers, as many journals and books related to them have been published. In this paper, a control strategy to help quadriplegic patients, based on a Brain-Computer Interface (BCI) using the electroencephalography (EEG) signal, was used. A BCI is a technology that obtains the user's thoughts to control a machine or device. This technology can enable people with quadriplegia, in other words people who have lost the use of all four limbs, to move by themselves again. Over the past years, many researchers have come up with new methods and investigations to develop a machine that can fulfil the objective of letting quadriplegic patients move again. Besides that, owing to the development of biomedical and healthcare applications, there are several ways to extract signals from the brain; one of them is the EEG signal. This research was carried out in order to detect the brain signals that control the movement of a wheelchair, using a single-channel EEG headset. A group of 5 healthy people was chosen in order to determine the performance of the machine during dynamic focusing activities, such as the intention to move a wheelchair and to stop it. A neural network classifier was then used to classify the signal based on the major EEG signal ranges. In conclusion, a good neural network configuration and a decent method of extracting the EEG signal make it possible to issue commands to control a robotic wheelchair.
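    A minimal sketch of such a pipeline is given below, under assumptions: band powers over the major EEG ranges are extracted from single-channel epochs with Welch's method and fed to a small neural network that outputs a move/stop command. The sampling rate, window length, network size, and labels are illustrative, not the paper's.

```python
# Sketch under assumptions: band powers of the major EEG ranges feed a
# small neural network that maps them to wheelchair commands.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 256                                         # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}     # major EEG signal ranges

def band_powers(epoch):
    """Sum Welch PSD bins inside each band -> 4-dimensional feature."""
    f, pxx = welch(epoch, fs=FS, nperseg=FS)
    return np.array([pxx[(f >= lo) & (f < hi)].sum()
                     for lo, hi in BANDS.values()])

# Placeholder data: 2-second single-channel epochs with move/stop labels.
epochs = np.random.randn(100, FS * 2)
y = np.random.randint(0, 2, 100)                 # 1 = move, 0 = stop
X = np.array([band_powers(e) for e in epochs])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
command = "move" if clf.predict(band_powers(epochs[0])[None])[0] else "stop"
```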

    Intentional binding enhances hybrid BCI control

    Mental imagery-based brain-computer interfaces (BCIs) allow users to interact with the external environment by naturally bypassing the musculoskeletal system. Making BCIs efficient and accurate is paramount to improving the reliability of real-life and clinical applications, from open-loop device control to closed-loop neurorehabilitation. By promoting a sense of agency and embodiment, realistic setups including multimodal channels of communication, such as eye gaze and robotic prostheses, aim to improve BCI performance. However, how the mental imagery command should be integrated into those hybrid systems so as to ensure the best interaction is still poorly understood. To address this question, we performed a hybrid EEG-based BCI experiment involving healthy volunteers enrolled in a reach-and-grasp action operated by a robotic arm. The main results showed that the timing of the hand-grasping motor imagery significantly affects the BCI accuracy as well as the spatiotemporal brain dynamics. Higher control accuracy was obtained when motor imagery was performed just after the robot's reaching, as compared to before or during the movement. The proximity to the subsequent robot grasping favored intentional binding, led to stronger motor-related brain activity, and primed the ability of sensorimotor areas to integrate information from regions implicated in higher-order cognitive functions. Taken together, these findings provide fresh evidence about the effects of intentional binding on human behavior and cortical network dynamics that can be exploited to design a new generation of efficient brain-machine interfaces.
    Comment: 18 pages, 5 figures, 7 supplementary materials

    Brain Computer Interfaces, a Review

    The brain is in many respects the centre of our being, controlling our actions, movements, thoughts and emotions. It is somewhat of a mystery, presenting itself only through the body's exterior façade. It is safeguarded by a thick skull that insulates it from the outside world. Information from the surroundings is relayed to it via the five senses - touch, sight, sound, smell and taste. Its role was underestimated in the past by cardiocentrists, who believed that thought, sensation and behaviour originated in the heart and that the brain was there to "make the heat and boiling in the heart well blent and tempered" (Aristotle, 384-322 BC). Today we can pinpoint the areas of activity in the brain and localise its functions. We now have an understanding of the physiological processes and signals that occur. We know that neurons in the brain's cortex transmit signals through an efferent nervous system, i.e. from the brain towards motor output pathways, and receive them through an afferent system, i.e. from the sense organs to the brain. The impulses are both electrical and chemical signals, which can be detected and measured, as in any system, using specific techniques. As with any system, if there are problems we want to solve them, and if there are improvements to be made we implement them where possible. Problems can occur in the afferent system and also in the efferent system, for example causing visual or auditory impediments in the former case and paralysis in the latter. By effectively "bypassing" the nervous system, the brain can be connected in a more direct sense to its environment. Brain computer interfaces offer this possibility. Their origins lie in providing alternative communication methods for the disabled, but they now offer the possibility of providing people with "different senses". These augmentative channels will allow the brain to directly connect to its environment. Electrical signals originating from the brain can be sent directly to computers, providing an additional control output. This can even be connected to the Internet to provide an "extended nervous system" for controlling a robot hundreds of miles away. Conversely, new and different senses such as ultrasound or infrared detection could be relayed as sensory information to the brain. In the exciting information age we live in today, integrating technology with our biological systems seems like a natural progression, and obtaining a "silicon nervous system" may be the only way to keep up!

    Manual 3D Control of an Assistive Robotic Manipulator Using Alpha Rhythms and an Auditory Menu: A Proof-of-Concept

    Brain–Computer Interfaces (BCIs) have been regarded as potential tools for individuals with severe motor disabilities, such as those with amyotrophic lateral sclerosis, that render interfaces that rely on movement unusable. This study aims to develop a dependent BCI system for manual end-point control of a robotic arm. A proof-of-concept system was devised using parieto-occipital alpha wave modulation and a cyclic menu with auditory cues. Users choose a movement to be executed and asynchronously stop said action when necessary. Tolerance intervals allowed users to cancel or confirm actions. Eight able-bodied subjects used the system to perform a pick-and-place task. To investigate potential learning effects, the experiment was conducted twice over the course of two consecutive days. Subjects obtained satisfactory completion rates (84.0 ± 15.0% and 74.4 ± 34.5% for the first and second day, respectively) and high path efficiency (88.9 ± 11.7% and 92.2 ± 9.6%). Subjects took on average 439.7 ± 203.3 s to complete each task, but the robot was only in motion 10% of the time. There was no significant difference in performance between the two days. The developed control scheme provided users with intuitive control, but a considerable amount of time is spent waiting for the right target (auditory cue). Implementing other brain signals may increase its speed.
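    The control scheme can be sketched as follows, with all constants as illustrative assumptions: parieto-occipital alpha power is tracked continuously, a sustained rise past a baseline-relative threshold selects the currently cued menu item, and a tolerance interval leaves time to cancel before the action is confirmed.

```python
# Sketch of the alpha-based selection logic; FS, thresholds, and the
# tolerance interval are assumed values for illustration.
import numpy as np
from scipy.signal import welch

FS = 250                      # assumed sampling rate (Hz)
ALPHA_BAND = (8.0, 13.0)      # parieto-occipital alpha range (Hz)
THRESHOLD = 2.0               # assumed alpha power ratio vs. baseline
TOLERANCE_S = 1.5             # assumed window to cancel/confirm an action

def alpha_power(window):
    """Band power in the alpha range for one EEG window."""
    f, pxx = welch(window, fs=FS, nperseg=min(len(window), FS))
    return pxx[(f >= ALPHA_BAND[0]) & (f < ALPHA_BAND[1])].sum()

def user_modulates(window, baseline_power):
    """True while the user raises alpha power (e.g. by closing the eyes)."""
    return alpha_power(window) / baseline_power > THRESHOLD

# Control loop idea: cycle the auditory menu; when user_modulates() fires
# on the cued item, wait TOLERANCE_S and confirm only if no second burst
# (interpreted as a cancel) arrives within that interval.
```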