
    Using Variable Natural Environment Brain-Computer Interface Stimuli for Real-time Humanoid Robot Navigation

    Full text link
    This paper addresses the challenge of humanoid robot teleoperation in a natural indoor environment via a Brain-Computer Interface (BCI). We leverage deep Convolutional Neural Network (CNN) based image and signal understanding to facilitate both real-time object detection and dry-Electroencephalography (EEG) based decoding of human cortical brain bio-signals. We employ recent advances in dry-EEG technology to stream and collect cortical waveforms from subjects while they fixate on variable Steady State Visual Evoked Potential (SSVEP) stimuli generated directly from the environment the robot is navigating. To these ends, we propose the use of novel variable BCI stimuli by utilising the real-time video streamed via the on-board robot camera as visual input for SSVEP, where the CNN-detected natural scene objects are altered and flickered at differing frequencies (10 Hz, 12 Hz and 15 Hz). These stimuli are not akin to traditional stimuli, as both the dimensions of the flicker regions and their on-screen positions change depending on the scene objects detected. On-screen object selection via such a dry-EEG enabled SSVEP methodology facilitates the online decoding of human cortical brain signals, via a specialised secondary CNN, directly into teleoperation robot commands (approach object; move in a specific direction: right, left or back). This SSVEP decoding model is trained on a priori offline experimental data in which very similar visual input is present for all subjects. The resulting classification demonstrates high performance, with a mean accuracy of 85% for the real-time robot navigation experiment across multiple test subjects. Comment: Accepted as a full paper at the 2019 International Conference on Robotics and Automation (ICRA).
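    The decoding step described above maps occipital EEG onto one of the three flicker frequencies. The paper uses a CNN for this; the sketch below instead shows the classic spectral baseline for the same task, so the sampling rate, epoch length and band width are all invented for illustration.

```python
import numpy as np

# Hypothetical sketch: decide which SSVEP stimulus (10, 12 or 15 Hz) a
# subject attends to by locating the dominant spectral peak in one EEG
# epoch. The paper uses a CNN decoder; this FFT baseline only illustrates
# the underlying frequency-tagging idea. FS and the 0.5 Hz band are assumed.

FS = 250                      # sampling rate in Hz (assumed)
STIM_FREQS = [10.0, 12.0, 15.0]

def classify_ssvep(epoch):
    """Return the stimulus frequency with the largest narrow-band power.

    epoch: 1-D array of EEG samples from an occipital channel (e.g. Oz).
    """
    spectrum = np.abs(np.fft.rfft(epoch * np.hanning(len(epoch))))
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / FS)
    # Sum spectral magnitude in a narrow band around each candidate frequency.
    powers = [spectrum[(freqs > f - 0.5) & (freqs < f + 0.5)].sum()
              for f in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(powers))]

# A synthetic 2 s epoch with a 12 Hz component plus noise is classified as 12 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / FS)
epoch = np.sin(2 * np.pi * 12.0 * t) + 0.3 * rng.standard_normal(len(t))
print(classify_ssvep(epoch))  # → 12.0
```

    In a real pipeline the winning frequency would then be translated into the corresponding robot command (approach object, move right/left/back).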

    On Tackling Fundamental Constraints in Brain-Computer Interface Decoding via Deep Neural Networks

    Get PDF
    A Brain-Computer Interface (BCI) is a system that provides a communication and control medium between human cortical signals and external devices, with the primary aim of assisting patients who suffer from a neuromuscular disease. Despite significant recent progress in the area of BCI, there are numerous shortcomings associated with decoding Electroencephalography-based BCI signals in real-world environments. These include, but are not limited to, the cumbersome nature of the equipment, complications in collecting large quantities of real-world data, the rigid experimentation protocol and the challenges of accurate signal decoding, especially in making a system work in real-time. Hence, the core purpose of this work is to investigate improving the applicability and usability of BCI systems, whilst preserving signal decoding accuracy. Recent advances in Deep Neural Networks (DNN) provide the possibility for signal processing to automatically learn the best representation of a signal, contributing to improved performance even with a noisy input signal. Subsequently, this thesis focuses on the use of novel DNN-based approaches for tackling some of the key underlying constraints within the area of BCI. For example, recent technological improvements in acquisition hardware have made it possible to eliminate the pre-existing rigid experimentation procedure, albeit resulting in noisier signal capture. However, through the use of a DNN-based model, it is possible to preserve the accuracy of the predictions from the decoded signals. Moreover, this research demonstrates that by leveraging DNN-based image and signal understanding, it is feasible to facilitate real-time BCI applications in a natural environment. Additionally, the capability of DNN to generate realistic synthetic data is shown to be a potential solution in reducing the requirement for costly data collection.
Work is also performed in addressing the well-known issues regarding subject bias in BCI models by generating data with reduced subject-specific features. The overall contribution of this thesis is to address the key fundamental limitations of BCI systems: the unyielding traditional experimentation procedure, the mandatory extended calibration stage, and the difficulty of sustaining accurate signal decoding in real-time. These limitations lead to a fragile BCI system that is demanding to use and only suited for deployment in a controlled laboratory. The overall contributions of this research aim to improve the robustness of BCI systems and enable new applications for use in the real world.

    Superposition model for steady state visually evoked potentials

    Get PDF
    Steady State Visually Evoked Potentials (SSVEP) are signals produced in the occipital part of the brain when someone gazes at a light flickering at a fixed frequency. These signals have been used for Brain Machine Interfacing (BMI), where one or more stimuli are presented and the system has to detect which stimulus the user is attending to. It has been proposed that the SSVEP signal is produced by superposition of Visually Evoked Potentials (VEP), but no model has demonstrated this. We propose a model in which the SSVEP signal is a superposition of the responses to the rising and falling edges of the stimulus, and which can be calculated for different frequencies. This model is based on the phase between the stimulus and the SSVEP signal, assuming that the phase is stable over time. We fit the model for 4 volunteers who gazed at stimuli with frequencies of 9 Hz, 11 Hz, 13 Hz and 15 Hz, and duty cycles of 20%, 35%, 50%, 65% and 80%. We found the parameters of the model for every volunteer using the data from the Oz electrode and a genetic algorithm. The proposed model is useful for finding the best duty cycle of the stimulus, and it can be useful for selecting a stimulus code other than a square signal. The model only considers one frequency at a time, but the results showed that it could be possible to find a more generic model.
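    The superposition idea above can be sketched numerically: place one evoked response at every rising edge and one (possibly different) response at every falling edge of the square stimulus, and sum them. The damped-sinusoid response shape, sampling rate and all parameter values below are invented stand-ins, not the authors' fitted model.

```python
import numpy as np

# Illustrative sketch of the superposition model (not the fitted version
# from the paper): the SSVEP is the sum of one evoked response per rising
# edge and one per falling edge of a flickering stimulus, so changing the
# duty cycle shifts where the falling-edge responses land.

FS = 1000  # samples per second (assumed)

def vep(t, amp, freq, decay):
    """A damped sinusoid as a stand-in single-edge evoked response."""
    return amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

def ssvep(duration, stim_freq, duty, rise_params, fall_params):
    """Superpose edge responses for a square stimulus of given duty cycle."""
    n = int(duration * FS)
    signal = np.zeros(n)
    period = 1.0 / stim_freq
    t_resp = np.arange(0, 0.2, 1.0 / FS)       # 200 ms response window
    k = 0
    while k * period < duration:
        for t_edge, params in ((k * period, rise_params),
                               (k * period + duty * period, fall_params)):
            i = int(t_edge * FS)
            if i >= n:
                continue
            j = min(i + len(t_resp), n)
            signal[i:j] += vep(t_resp[:j - i], *params)
        k += 1
    return signal

# 1 s of simulated SSVEP: 11 Hz stimulus, 50% duty cycle, with a weaker
# response to the falling edge than to the rising edge.
y = ssvep(1.0, 11.0, 0.5, (1.0, 10.0, 20.0), (0.6, 10.0, 20.0))
```

    Sweeping `duty` while holding the two response shapes fixed is the kind of exploration that would reveal a duty cycle maximising the steady-state amplitude.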

    Modification of Gesture-Determined-Dynamic Function with Consideration of Margins for Motion Planning of Humanoid Robots

    Full text link
    The gesture-determined-dynamic function (GDDF) offers an effective way to handle the control problems of humanoid robots. Specifically, GDDF is utilized to constrain the movements of the dual arms of humanoid robots and steer specific gestures to conduct demanding tasks under certain conditions. However, there is still a deficiency in this scheme. Through experiments, we found that the joints of the dual arms, which can be regarded as redundant manipulators, could slightly exceed their limits at the joint angle level. The performance directly depends on the parameters designed beforehand for the GDDF, which limits the adaptability of the method in practical applications. In this paper, a modified scheme of GDDF with consideration of margins (MGDDF) is proposed. This MGDDF scheme is based on a quadratic programming (QP) framework, which is widely applied to solving the redundancy resolution problems of robot arms. Moreover, three margins are introduced in the proposed MGDDF scheme to avoid joint limits. With consideration of these margins, the joints of the manipulators of the humanoid robots will not exceed their limits, and the potential damage which might be caused by exceeding limits will be completely avoided. Computer simulations conducted in MATLAB further verify the feasibility and superiority of the proposed MGDDF scheme.
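    The general QP pattern the scheme builds on can be sketched as follows. This is not the MGDDF formulation itself: it is a generic velocity-level redundancy resolution with a single invented margin that zeroes a joint's velocity toward a limit once the joint enters the margin band; the Jacobian, limits and margin value are all made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of QP-based redundancy resolution with joint-limit
# margins (generic pattern, not the paper's MGDDF scheme): minimise
# ||q_dot||^2 subject to the task constraint J q_dot = x_dot, with
# velocity bounds tightened so a joint cannot keep moving toward a limit
# once it is within `margin` of that limit.

def resolve(J, x_dot, q, q_min, q_max, margin=0.1, v_max=1.0):
    """Return joint velocities for a redundant arm with limit margins."""
    n = J.shape[1]
    # Inside the margin band a joint may only move away from the limit.
    lo = np.where(q < q_min + margin, 0.0, -v_max)
    hi = np.where(q > q_max - margin, 0.0, v_max)
    res = minimize(lambda qd: qd @ qd, np.zeros(n),
                   constraints={'type': 'eq',
                                'fun': lambda qd: J @ qd - x_dot},
                   bounds=list(zip(lo, hi)), method='SLSQP')
    return res.x

# 2-D task, 3-joint planar arm (redundant); joint 2 sits near its upper limit,
# so the solver must achieve the task velocity without driving it further up.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
q = np.array([0.0, 1.45, 0.0])          # joint 2 close to q_max = 1.5
q_dot = resolve(J, np.array([0.3, 0.1]), q,
                q_min=np.full(3, -1.5), q_max=np.full(3, 1.5))
```

    The hard zero inside the band is the crudest possible margin; MGDDF's three margins shape the bounds more gradually so motion stays smooth near the limits.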

    Goal-recognition-based adaptive brain-computer interface for navigating immersive robotic systems

    Get PDF
    © 2017 IOP Publishing Ltd. Objective. This work proposes principled strategies for self-adaptations in EEG-based Brain-computer interfaces (BCIs) as a way out of the bandwidth bottleneck resulting from the considerable mismatch between the low-bandwidth interface and the bandwidth-hungry application, and a way to enable fluent and intuitive interaction in embodiment systems. The main focus is laid upon inferring the hidden target goals of users while navigating in a remote environment as a basis for possible adaptations. Approach. To reason about possible user goals, a general user-agnostic Bayesian update rule is devised to be recursively applied upon the arrival of evidences, i.e. user input and user gaze. Experiments were conducted with healthy subjects within robotic embodiment settings to evaluate the proposed method. These experiments varied along three factors: the type of the robot/environment (simulated and physical), the type of the interface (keyboard or BCI), and the way goal recognition (GR) is used to guide a simple shared control (SC) driving scheme. Main results. Our results show that the proposed GR algorithm is able to track and infer the hidden user goals with relatively high precision and recall. Further, the realized SC driving scheme benefits from the output of the GR system and is able to reduce the user effort needed to accomplish the assigned tasks. Despite the fact that the BCI requires higher effort compared to the keyboard conditions, most subjects were able to complete the assigned tasks, and the proposed GR system is additionally shown able to handle the uncertainty in user input during SSVEP-based interaction. The SC application of the belief vector indicates that the benefits of the GR module are more pronounced for BCIs, compared to the keyboard interface. Significance. 
Being based on intuitive heuristics that model the behavior of the general population during the execution of navigation tasks, the proposed GR method can be used without prior tuning for individual users. The proposed methods can be easily integrated in devising more advanced SC schemes and/or strategies for automatic BCI self-adaptations.
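    The recursive Bayesian update at the heart of the goal-recognition module can be sketched in a few lines. The likelihood values below are invented; in the paper they would come from heuristic models of how consistent a user command or gaze sample is with each candidate goal.

```python
import numpy as np

# Minimal sketch of recursive Bayesian goal recognition (likelihoods are
# invented for illustration): maintain a belief over candidate navigation
# goals and update it each time new evidence (a user input or a gaze
# sample) arrives.

def update_belief(belief, likelihoods):
    """One recursive Bayes step: P(g|e_1..t) ∝ P(e_t|g) · P(g|e_1..t-1)."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Three candidate goals, uniform prior.
belief = np.full(3, 1.0 / 3.0)
# Evidence stream: each row gives P(observed input | goal) for each goal,
# e.g. how consistent a "turn left" command is with heading to that goal.
evidence = [np.array([0.7, 0.2, 0.1]),
            np.array([0.6, 0.3, 0.1]),
            np.array([0.8, 0.1, 0.1])]
for lik in evidence:
    belief = update_belief(belief, lik)
print(belief.argmax())  # → 0 (goal 0 dominates after three consistent inputs)
```

    A shared-control scheme can then weight its assistance by this belief vector, intervening more strongly as one goal becomes clearly dominant.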

    Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces

    Get PDF
    Riechmann H. Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces. Bielefeld: Universität Bielefeld; 2014

    Cognition, Affects et Interaction

    No full text
    This volume gathers the study and research projects carried out as part of the course "Cognition, Affects et Interaction" that we taught in the first semester of 2015-2016. This second edition of the course continues the principle introduced in 2014: the lectures on "Cognition, Interaction & Affects", which provide the methodological tools for analysing the components of socio-communicative interaction, are coupled with an introduction to social robotics and with active learning through research work in pairs. The principle of these projects is to carry out a bibliographic search and write a review article on one aspect of human-robot interaction. While several topics were proposed to the students at the start of the year, some pairs chose to approach interaction from an original angle that often reflects the varied backgrounds of cognitive science students (engineering, sociology, psychology, etc.). The result exceeds our expectations: the reader will find a compilation of solidly argued articles, clearly written and carefully presented. These first "publications" reflect the singular reflective abilities of this cohort, markedly improved compared with the previous year. We hope that this series of volumes, available on HAL, can serve as an entry point for students or researchers interested in exploring this multidisciplinary field of research.

    Enhancing our lives with immersive virtual reality

    Get PDF
    Virtual reality (VR) started about 50 years ago in a form we would recognize today [stereo head-mounted display (HMD), head tracking, computer graphics generated images] – although the hardware was completely different. In the 1980s and 1990s, VR emerged again based on a different generation of hardware (e.g., CRT displays rather than vector refresh, electromagnetic tracking instead of mechanical). This reached the attention of the public, and VR was hailed by many engineers, scientists, celebrities, and business people as the beginning of a new era, when VR would soon change the world for the better. Then, VR disappeared from public view and was rumored to be “dead.” In the intervening 25 years a huge amount of research has nevertheless been carried out across a vast range of applications – from medicine to business, from psychotherapy to industry, from sports to travel. Scientists, engineers, and people working in industry carried on with their research and applications using and exploring different forms of VR, not knowing that actually the topic had already passed away. The purpose of this article is to survey a range of VR applications where there is some evidence for, or at least debate about, its utility, mainly based on publications in peer-reviewed journals. Of course not every type of application has been covered, nor every scientific paper (about 186,000 papers in Google Scholar): in particular, in this review we have not covered applications in psychological or medical rehabilitation. The objective is that the reader becomes aware of what has been accomplished in VR, where the evidence is weaker or stronger, and what can be done. 
We start in Section 1 with an outline of what VR is and the major conceptual framework used to understand what happens when people experience it – the concept of “presence.” In Section 2, we review some areas where VR has been used in science – mostly psychology and neuroscience, the area of scientific visualization, and some remarks about its use in education and surgical training. In Section 3, we discuss how VR has been used in sports and exercise. In Section 4, we survey applications in social psychology and related areas – how VR has been used to throw light on some social phenomena, and how it can be used to tackle experimentally areas that cannot be studied experimentally in real life. We conclude with how it has been used in the preservation of and access to cultural heritage. In Section 5, we present the domain of moral behavior, including an example of how it might be used to train professionals such as medical doctors when confronting serious dilemmas with patients. In Section 6, we consider how VR has been and might be used in various aspects of travel, collaboration, and industry. In Section 7, we consider mainly the use of VR in news presentation and also discuss different types of VR. In the concluding Section 8, we briefly consider new ideas that have recently emerged – an impossible task since during the short time we have written this page even newer ideas have emerged! And, we conclude with some general considerations and speculations. Throughout and wherever possible we have stressed novel applications and approaches and how the real power of VR is not necessarily to produce a faithful reproduction of “reality” but rather that it offers the possibility to step outside of the normal bounds of reality and realize goals in a totally new and unexpected way. 
We hope that our article will provoke readers to think as paradigm changers, and advance VR to realize different worlds that might have a positive impact on the lives of millions of people worldwide, and maybe even help a little in saving the planet.