
    BCI-Based Navigation in Virtual and Real Environments

    A Brain-Computer Interface (BCI) is a system that enables people to control an external device with their brain activity, without the need for any muscular activity. Researchers in the BCI field aim to develop applications that improve the quality of life of severely disabled patients, for whom a BCI can be a useful channel for interacting with their environment. Some of these systems are intended to control a mobile device (e.g., a wheelchair). Virtual reality is a powerful tool that can give subjects the opportunity to train with and test different applications in a safe environment. This technical review focuses on systems aimed at navigation, in both virtual and real environments. This work was partially supported by the Innovation, Science and Enterprise Council of the Junta de Andalucía (Spain), project P07-TIC-03310, the Spanish Ministry of Science and Innovation, project TEC 2011-26395, and by the European fund ERDF.

    Development of a Practical Visual-Evoked Potential-Based Brain-Computer Interface

    There are many neuromuscular disorders that disrupt the normal communication pathways between the brain and the rest of the body. These diseases often leave patients in a 'locked-in' state, rendering them unable to communicate with their environment despite having cognitively normal brain function. Brain-computer interfaces (BCIs) are augmentative communication devices that establish a direct link between the brain and a computer. Visual evoked potential (VEP)-based BCIs, which depend on salient visual stimuli, are amongst the fastest BCIs available and provide the highest communication rates of any BCI modality. However, the majority of research focuses solely on improving raw BCI performance; thus, most visual BCIs still suffer from a myriad of practical issues that make them impractical for everyday use. The focus of this dissertation is the development of novel advancements and solutions that increase the practicality of VEP-based BCIs. The presented work shows the results of several studies that relate to characterizing and optimizing visual stimuli, improving ergonomic design, reducing visual irritation, and implementing a practical VEP-based BCI using an extensible software framework and mobile device platforms.
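
    To make the stimulus-decoding core of such systems concrete, here is a minimal, illustrative Python sketch of canonical correlation analysis (CCA)-based target detection, a standard approach for the steady-state variant (SSVEP) of such interfaces; the flicker frequencies, sampling rate and harmonic count are assumptions, not values taken from the dissertation.

        # Illustrative CCA-based SSVEP frequency detection (parameters assumed).
        import numpy as np
        from sklearn.cross_decomposition import CCA

        FS = 250                              # sampling rate (Hz), assumed
        STIM_FREQS = [8.0, 10.0, 12.0, 15.0]  # candidate flicker frequencies, assumed
        N_HARMONICS = 2

        def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
            """Sine/cosine references at the stimulus frequency and its harmonics."""
            t = np.arange(n_samples) / fs
            refs = []
            for h in range(1, n_harmonics + 1):
                refs.append(np.sin(2 * np.pi * h * freq * t))
                refs.append(np.cos(2 * np.pi * h * freq * t))
            return np.column_stack(refs)

        def detect_ssvep_target(eeg):
            """eeg: (n_samples, n_channels) occipital EEG segment. Returns
            the candidate frequency with the highest canonical correlation."""
            scores = []
            for f in STIM_FREQS:
                refs = reference_signals(f, eeg.shape[0])
                x_c, y_c = CCA(n_components=1).fit_transform(eeg, refs)
                scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
            return STIM_FREQS[int(np.argmax(scores))]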

    Robotic Embodiment: Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    It has been shown that, with appropriate multisensory stimulation, an illusion of owning an artificial object as part of one's own body can be induced in people. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies exist that apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts would give rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential for real-world applications. In this thesis, we aim to apply the phenomenon of full body ownership illusions to humanoid robots and to develop real-world applications where this technology could be beneficial. More specifically, relying on knowledge gained from previous studies of body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, where the participant is embodied in a humanoid robot that is present in a different physical location and can use this robotic body to interact with the remote environment.
    To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case study demonstrating the system as a real-world application. In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly 'robotic' appearance. In addition, we compared two abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP)-based Brain-Computer Interface and eye tracking, in an immersive environment to drive the robot. This was done mainly as motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a feeling of body ownership and agency can be induced even though the postures of the participant and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were reported to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher ownership scores reported for the BCI condition. In the Tele-Immersive Journalism case study, we used the same system, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected an even higher level of body ownership illusion. By making the robot body the source of the participant's associated sensations, we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located at a remote destination and carry out interviews through their robotic body. We provide a case study in which the system was used by several journalists to report news about the system itself as well as other stories. In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, embodied in three different surrogate bodies. We had two physical destinations with one robot in each, and a third virtual destination where the participant was embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of real-world usability. Additionally, high levels of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides clues regarding body ownership illusions with humanoid robots of different dimensions, along with insight into self-localisation and multilocation.
    Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, indirect methods of control, or being simultaneously embodied in three different bodies. Additionally, the setup described can be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.
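
    To illustrate how two abstract input methods can drive the same remote robot, here is a hypothetical Python sketch of a shared command-dispatch layer of the kind such a system might use; the command vocabulary, the RobotLink transport and all identifiers are invented for illustration and are not the thesis's actual API.

        # Hypothetical dispatch layer: discrete selections from either an
        # SSVEP BCI or an eye tracker map onto the same robot commands.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class RobotLink:
            send: Callable[[str], None]   # transport to the remote robot (assumed)

        # One shared vocabulary lets the input modality be swapped without
        # touching the teleoperation layer.
        COMMANDS = {
            "look_left": "head_yaw:+15",
            "look_right": "head_yaw:-15",
            "step_forward": "walk:0.2",
            "stop": "walk:0.0",
        }

        def dispatch(selection: str, link: RobotLink) -> None:
            """selection: the discrete output of the BCI or gaze classifier."""
            if selection in COMMANDS:
                link.send(COMMANDS[selection])

        # Example: whichever classifier decides 'step_forward', the robot
        # receives the same command.
        dispatch("step_forward", RobotLink(send=print))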

    A Multi-Modal, Modified-Feedback and Self-Paced Brain-Computer Interface (BCI) to Control an Embodied Avatar's Gait

    Brain-computer interfaces (BCIs) have been used to control the gait of a virtual self-avatar with the aim of being used in gait rehabilitation. A BCI decodes the brain signals representing a desire to do something and transforms them into a control command for external devices. The feelings described by participants when they control a self-avatar in an immersive virtual environment (VE) demonstrate that humans can be embodied in the surrogate body of an avatar (ownership illusion). It has recently been shown that inducing the ownership illusion and then manipulating the movements of one's self-avatar can lead to compensatory motor control strategies. To maximize this effect, a method is needed that measures and monitors the embodiment levels of participants immersed in virtual reality (VR), in order to induce and maintain a strong ownership illusion. This is particularly true given that BCI performance and embodiment are interconnected: to reach a high level of one, the other must be reached as well. Several limitations of existing systems hinder their adoption for neurorehabilitation: (1) some use motor imagery (MI) of movements other than gait; (2) most systems allow the user to take single steps or to walk, but not both, which prevents users from progressing from steps to gait; (3) most function in a single BCI mode (cue-paced or self-paced), which prevents users from progressing from machine-dependent to machine-independent walking. These limitations can be overcome by combining different control modes and options in a single system. However, this would have a negative impact on BCI performance, diminishing its usefulness as a potential rehabilitation tool, so BCI performance would then need to be enhanced. For this purpose, many techniques have been used in the literature, such as providing modified feedback (whereby the presented feedback is not consistent with the user's MI), sequential training (recalibrating the classifier as more data becomes available), and the use of generic classifiers. This thesis was developed over three studies. The objective of study 1 was to investigate the possibility of measuring the level of embodiment of an immersive self-avatar, during the performance, observation and imagining of gait, using electroencephalography (EEG), by presenting visual feedback that conflicts with the desired movement of embodied participants. The objective of study 2 was to develop and validate a BCI to control single steps and forward walking of an immersive virtual reality (VR) self-avatar, using mental imagery of these actions, in cue-paced and self-paced modes. Different performance enhancement strategies were implemented to increase BCI performance. The data from these two studies were then used in study 3 to construct a generic classifier that could eliminate offline calibration for future users and shorten training time. Twenty healthy participants took part in studies 1 and 2. In study 1, participants wore an EEG cap and motion capture markers, with an avatar displayed in a head-mounted display (HMD) from a first-person perspective (1PP). They were cued to either perform, watch or imagine a single step forward or the initiation of walking on a treadmill. In some of the trials, the avatar took a step with the contralateral limb or stopped walking before the participant stopped (modified feedback).
    In study 2, participants completed four days of sequential training to control the gait of an avatar in both BCI modes. In the cue-paced mode, they were cued to imagine a single step forward, using their right or left foot, or to walk forward. In the self-paced mode, they were instructed to reach a target using MI of multiple steps (switch control mode) or by maintaining MI of forward walking (continuous control mode). The avatar moved in response to two calibrated regularized linear discriminant analysis (RLDA) classifiers that used the μ-band (8-12 Hz) power spectral density (PSD) over the foot area of the motor cortex as features. The classifiers were retrained after every session. During training, and for some of the trials, positive modified feedback was presented to half of the participants, whereby the avatar moved correctly regardless of the participant's actual performance. In both studies, the participants' subjective experience was analyzed using a questionnaire. Results of study 1 show that subjective levels of embodiment correlate strongly with the power differences of the event-related synchronization (ERS) within the μ frequency band, over the motor and pre-motor cortices, between the modified and regular feedback trials. Results of study 2 show that all participants were able to operate the cue-paced BCI and the self-paced BCI in both modes. For the cue-paced BCI, the average offline performance (classification rate) was 67±6.1% on day 1 and 86±6.1% on day 3, showing that recalibration of the classifiers enhanced the offline performance of the BCI (p < 0.01). The average online performance was 85.9±8.4% for the modified feedback group (range 77-97%) versus 75% for the non-modified feedback group. For the self-paced BCI, the average performance was 83% in switch control mode and 92% in continuous control mode, with a maximum of 12 seconds of control. Modified feedback enhanced BCI performance (p = 0.001). Finally, results of study 3 show that the constructed generic models performed as well as models obtained from participant-specific offline data, demonstrating that it is possible to design a participant-independent, zero-training BCI.
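
    The decoding pipeline described above (μ-band PSD features over the foot area of the motor cortex, classified by a regularized LDA that is recalibrated after each session) can be sketched in Python roughly as follows; the electrode layout, window length and sampling rate are illustrative assumptions, not the study's exact settings.

        # Illustrative mu-band PSD + shrinkage-LDA gait-MI decoder.
        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        FS = 256           # sampling rate (Hz), assumed
        MU_BAND = (8, 12)  # mu rhythm range (Hz)

        def mu_psd_features(epochs):
            """epochs: (n_trials, n_channels, n_samples) EEG over the foot
            area of the motor cortex (e.g., around Cz). Returns the mean
            mu-band PSD per channel as the feature vector."""
            freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
            band = (freqs >= MU_BAND[0]) & (freqs <= MU_BAND[1])
            return psd[..., band].mean(axis=-1)   # (n_trials, n_channels)

        # Shrinkage-regularized LDA, refit ("recalibrated") as each session
        # adds labeled trials, mirroring the sequential-training idea above.
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

        def recalibrate(all_epochs, all_labels):
            clf.fit(mu_psd_features(all_epochs), all_labels)

        def decode(epoch):
            """epoch: (n_channels, n_samples). Returns the predicted MI class."""
            return clf.predict(mu_psd_features(epoch[np.newaxis]))[0]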

    A brain-computer interface for navigation in virtual reality

    A brain-computer interface (BCI) decodes the brain signals representing a desire to do something and transforms those signals into a control command. However, only a limited number of mental tasks have previously been detected and classified. Performing a real or imaginary navigation movement can similarly change the brainwaves over the motor cortex. We used an ERS-BCI to see whether we could classify between movements in the forward and backward directions, offline and then online, using different methods. Ten healthy people participated in BCI experiments comprising two sessions (48 min each) in a virtual-environment tunnel. Each session consisted of 320 trials in which subjects were asked to imagine themselves moving through the tunnel in a forward or backward motion after a randomly presented (forward versus backward) command on the screen. Three EEG electrodes were mounted bilaterally on the scalp over the motor cortex, and trials were conducted with feedback. In session 1, the band power method, time-frequency representation, autoregressive models and the asymmetry ratio were used in the β rhythm range, with a linear discriminant analysis (LDA) classifier and a support vector machine (SVM) classifier, to discriminate between the two mental tasks. Thresholds for both tasks were computed offline and then used to form control signals that were applied online in session 2 to trigger the virtual tunnel to move in the direction requested by the user's brain signals. After 96 min of training, online band-power biofeedback training achieved an average classification precision of 76%, whereas offline classification with the asymmetry ratio and band power achieved an average classification precision of approximately 80%.
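
    As a concrete illustration of one of the offline methods above, the following Python sketch computes a β-band asymmetry ratio between two electrodes over opposite hemispheres and thresholds it to separate the two mental tasks; the electrode pairing, the exact ratio definition and the threshold value are assumptions for illustration.

        # Illustrative beta-band asymmetry-ratio classifier with an
        # offline-computed threshold (parameters assumed).
        import numpy as np
        from scipy.signal import welch

        FS = 250              # sampling rate (Hz), assumed
        BETA_BAND = (13, 30)  # beta rhythm range (Hz)

        def band_power(signal, band=BETA_BAND, fs=FS):
            """Mean PSD of a single-channel signal within the given band."""
            freqs, psd = welch(signal, fs=fs, nperseg=fs)
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return psd[mask].mean()

        def asymmetry_ratio(left_ch, right_ch):
            """Normalized left/right beta-power difference in [-1, 1]."""
            p_l, p_r = band_power(left_ch), band_power(right_ch)
            return (p_l - p_r) / (p_l + p_r)

        def classify(left_ch, right_ch, threshold=0.0):
            """Threshold computed offline (session 1) forms the online
            control signal (session 2): forward vs. backward navigation."""
            return "forward" if asymmetry_ratio(left_ch, right_ch) > threshold else "backward"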

    Emerging ExG-based NUI Inputs in Extended Realities : A Bottom-up Survey

    Incremental and quantitative improvements of two-way interactions with extended realities (XR) are contributing toward a qualitative leap into a state of XR ecosystems being efficient, user-friendly, and widely adopted. However, there are multiple barriers on the way toward the omnipresence of XR; among them are the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we overview and analyse novel natural user interfaces based on sensing electrical bio-signals that can be leveraged to tackle the challenges of XR input interactions. Electroencephalography-based brain-machine interfaces that enable thought-only hands-free interaction, myoelectric input methods that track body gestures employing electromyography, and gaze-tracking electrooculography input interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems using natural, intuitive actions, enriching interactions with XR. This survey provides a bottom-up overview starting from (i) underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) a discussion of the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface (NUI) inputs can introduce to the area of XR.

    Electroencephalography (EEG)-based Brain-Computer Interfaces

    Brain-computer interfaces (BCIs) are systems that can translate the brain activity patterns of a user into messages or commands for an interactive application. The brain activity processed by BCI systems is usually measured using electroencephalography (EEG). In this article, we aim to provide an accessible and up-to-date overview of EEG-based BCIs, with a main focus on their engineering aspects. We notably introduce some basic neuroscience background and explain how to design an EEG-based BCI, in particular reviewing which signal processing, machine learning, software and hardware tools to use. We present BCI applications, highlight some limitations of current systems, and suggest some perspectives for the field.
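
    To ground the design steps the article reviews, here is a minimal, illustrative Python sketch of the canonical EEG-BCI processing chain (band-pass filtering, epoching, feature extraction, classification); all parameter values are generic defaults rather than recommendations from the article.

        # Minimal EEG-BCI chain: filter -> epoch -> features -> classifier.
        import numpy as np
        from scipy.signal import butter, filtfilt
        from sklearn.linear_model import LogisticRegression

        FS = 250  # sampling rate (Hz), assumed

        def bandpass(eeg, lo=8.0, hi=30.0, fs=FS, order=4):
            """eeg: (n_channels, n_samples); isolates sensorimotor rhythms."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, eeg, axis=-1)

        def make_epochs(eeg, onsets, length=2 * FS):
            """Cut fixed-length windows starting at each cue onset (in samples)."""
            return np.stack([eeg[:, s:s + length] for s in onsets])

        def log_variance_features(epochs):
            """Log-variance per channel: a compact, classic EEG feature."""
            return np.log(epochs.var(axis=-1))

        def train_decoder(epochs, labels):
            """Fit a simple linear classifier on the extracted features."""
            clf = LogisticRegression(max_iter=1000)
            clf.fit(log_variance_features(epochs), labels)
            return clf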