
    Robotic Embodiment: Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    Get PDF
    It has been shown that, with appropriate multisensory stimulation, an illusion of owning an artificial object as part of one's own body can be induced in people. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even with entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies exist that apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts would give rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential for real-world applications. In this thesis, we aim to apply this phenomenon of full body ownership illusions in the context of humanoid robots, and to develop real-world applications where this technology could be beneficial. More specifically, by relying on knowledge gained from previous studies regarding body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, where the participant is embodied in a humanoid robot that is present in a different physical location, and can use this robotic body to interact with the remote environment. To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case study demonstrating the system in a real-world application. In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly ‘robotic’ appearance. 
In addition, we compared two different abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface and eye tracking, in an immersive environment to drive the robot. This was done mainly as a motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a feeling of body ownership illusion and agency can be induced, even though the postures of the participants and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were found to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher ownership scores reported for the BCI condition. In the Tele-Immersive Journalism case study, we used the same system as above, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected an even higher level of body ownership illusion. By making the robot body the source of the participant's associated sensations, we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located at a remote destination and carry out interviews through their robotic body. We provide a case study where the system was used by several journalists to report news about the system itself, as well as to report other stories. In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, embodied in three different surrogate bodies. 
We had two physical destinations with one robot in each, and a third virtual destination where the participant was embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable, and participants rated it highly in terms of usability in the real world. Additionally, high feelings of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides us with clues regarding the body ownership illusion with humanoid robots of different dimensions, along with insight into self-localisation and multilocation. Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about various factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, indirect methods of control, or even being simultaneously embodied in three different bodies. Additionally, the setup described can also be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.

    Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces

    Get PDF
    Riechmann H. Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces. Bielefeld: Universität Bielefeld; 2014


    Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm

    Full text link
    Robotics has been successfully applied in the design of collaborative robots for assistance to people with motor disabilities. However, man-machine interaction is difficult for those who suffer severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the Steady-State Visually Evoked Potentials (SSVEP) paradigm. A cross-platform application was developed in C++. This C++ platform, together with the open-source software OpenViBE, was used to control a Staubli TX60 robot arm. Communication between OpenViBE and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics. For the processing of the EEG signals, Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier were used. Five healthy subjects tried the BCI. This work allowed the communication and integration of a well-known BCI development platform, OpenViBE, with the specific control software of a robot arm such as the Staubli TX60 using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI using a reduced number of dry electrodes, which facilitates the use of the system. Funding for open access charge: Universitat Politecnica de Valencia.

    Quiles Cucarella, E.; Dadone, J.; Chio, N.; García Moreno, E. (2022). Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors. 22(13):1-26. https://doi.org/10.3390/s22135000
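
    The CSP-plus-LDA processing described in this abstract can be sketched in a few lines. The following is a minimal illustration with NumPy, SciPy and scikit-learn, not the authors' actual OpenViBE configuration; the array shapes, number of filter pairs, and synthetic-data usage are assumptions for illustration only.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(epochs_a, epochs_b, n_pairs=2):
    """Compute CSP spatial filters from two classes of EEG epochs.

    epochs_*: arrays of shape (trials, channels, samples).
    Returns a (2 * n_pairs, channels) spatial-filter matrix.
    """
    def mean_cov(epochs):
        return np.mean([np.cov(trial) for trial in epochs], axis=0)

    ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigendecomposition: filters at the spectrum's extremes
    # maximize the variance ratio between the two classes.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T

def log_var_features(epochs, filters):
    """Log of normalized variance per spatially filtered epoch."""
    filtered = np.einsum('fc,tcs->tfs', filters, epochs)
    var = filtered.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

    Features extracted this way from each class are then used to train the LDA classifier (`LinearDiscriminantAnalysis().fit(X, y)`), which at run time maps a new EEG epoch to one of the SSVEP target commands.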

    Past, Present, and Future of EEG-Based BCI Applications

    Get PDF
    An electroencephalography (EEG)-based brain–computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. Beyond this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance by making life more efficient and collaborative and by supporting personal development. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 to 2019. The systematic literature review was prepared from three databases: PubMed, Web of Science and Scopus, and was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of the research between the medical and non-medical domains has been analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future have been analyzed.

    Agency and responsibility over virtual movements controlled through different paradigms of brain-computer interface

    Get PDF
    Agency is the attribution of an action to the self and is a prerequisite for experiencing responsibility over its consequences. Here we investigated agency and responsibility by studying the control of movements of an embodied avatar, via brain–computer interface (BCI) technology, in immersive virtual reality. After induction of virtual body ownership by visuomotor correlations, healthy participants performed a motor task with their virtual body. We compared the passive observation of the subject's ‘own’ virtual arm performing the task with (1) control of the movement through activation of sensorimotor areas (motor imagery) and (2) control of the movement through activation of visual areas (steady-state visually evoked potentials). The latter two conditions were carried out using a BCI, and both shared the intention and the resulting action. We found that BCI control of movements engenders the sense of agency, which is strongest for sensorimotor-area activation. Furthermore, increased activity of sensorimotor areas, as measured using EEG, correlates with levels of agency and responsibility. We discuss the implications of these results for the neural basis of agency.

    EEG-based brain-computer interfaces using motor-imagery: techniques and challenges.

    Get PDF
    Electroencephalography (EEG)-based brain-computer interfaces (BCIs), particularly those using motor-imagery (MI) data, have the potential to become groundbreaking technologies in both clinical and entertainment settings. MI data are generated when a subject imagines the movement of a limb. This paper reviews state-of-the-art signal processing techniques for MI EEG-based BCIs, with a particular focus on the feature extraction, feature selection and classification techniques used. It also summarizes the main applications of EEG-based BCIs, particularly those based on MI data, and finally presents a detailed discussion of the most prevalent challenges impeding the development and commercialization of EEG-based BCIs.
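
    One common feature-extraction step for MI data of the kind this review covers is band power in the mu (8–12 Hz) and beta (13–30 Hz) rhythms, since imagining limb movement modulates these bands over sensorimotor cortex. A minimal sketch using Welch's method follows; it is one of many techniques the review surveys, and the band limits and parameters are illustrative, not prescriptive.

```python
import numpy as np
from scipy.signal import welch

def bandpower_features(epoch, fs, bands=((8, 12), (13, 30))):
    """Log band power per channel in the given frequency bands.

    epoch: array of shape (channels, samples).
    Returns a vector of length channels * len(bands).
    """
    freqs, psd = welch(epoch, fs=fs, nperseg=min(256, epoch.shape[-1]))
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)
        # Average the PSD over the band, per channel, then take the log
        # to compress the dynamic range before classification.
        feats.append(np.log(psd[:, mask].mean(axis=1)))
    return np.concatenate(feats)
```

    The resulting feature vectors are typically passed through a feature-selection step and then to a classifier such as LDA or an SVM, mirroring the pipeline structure the review describes.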

    Body swarm interface (BOSI) : controlling robotic swarms using human bio-signals

    Get PDF
    Traditionally, robots are controlled using devices like joysticks, keyboards, mice and other similar human-computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it requires the user to master the device before use. It becomes complicated and non-intuitive when multiple robots must be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI). This work presents a novel concept of using human bio-signals to control swarms of robots. This concept has two major advantages: first, it gives amputees and people with certain disabilities the ability to control robotic swarms, which has previously not been possible; second, it gives the user a more intuitive interface for controlling swarms of robots by using gestures, thoughts, and eye movement. We measure different bio-signals from the human body, including electroencephalography (EEG), electromyography (EMG) and electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques such as Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Thought and gesture classifications are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
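
    A displacement-based formation controller of the kind mentioned above drives each inter-agent displacement toward a desired offset. The following is a minimal single-integrator sketch, not the authors' controller; the gain, communication graph, and offsets are illustrative assumptions.

```python
import numpy as np

def formation_step(positions, offsets, adjacency, gain=0.3):
    """One displacement-based formation-control update for single integrators.

    positions: (n, 2) current agent positions.
    offsets:   (n, 2) desired positions in a common formation frame.
    adjacency: (n, n) 0/1 communication graph.
    """
    n = len(positions)
    u = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if adjacency[i, j]:
                # Drive the displacement to neighbor j toward the desired one.
                u[i] += (positions[j] - positions[i]) - (offsets[j] - offsets[i])
    return positions + gain * u
```

    Iterating this update makes the swarm converge to the desired shape up to a common translation; decoded bio-signal commands could then steer the formation as a whole by shifting the offsets.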