
    Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces

    Riechmann H. Exploiting code-modulating, Visually-Evoked Potentials for fast and flexible control via Brain-Computer Interfaces. Bielefeld: Universität Bielefeld; 2014

    Robotic Embodiment Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    It has been shown that, with appropriate multisensory stimulation, an illusion of owning an artificial object as part of one's own body can be induced in people. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies exist that apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts would give rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential in terms of real-world applications. In this thesis, we aim to apply this phenomenon of full body ownership illusions in the context of humanoid robots, and to develop real-world applications where this technology could be beneficial. More specifically, relying on knowledge gained from previous studies of body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, where the participant is embodied in a humanoid robot that is present in a different physical location and can use this robotic body to interact with the remote environment. To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case study demonstrating the system as a real-world application.

    In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly 'robotic' appearance. In addition, we compared two different abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface (BCI) and eye tracking, in an immersive environment to drive the robot. This was done mainly as a motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a body ownership illusion and a feeling of agency can be induced, even though the postures of the participant and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were reported to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher ownership scores reported for the BCI condition.

    In the Tele-Immersive Journalism case study, we used the same system as above, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected this to result in an even higher level of body ownership illusion. By making the robot body the source of the participant's associated sensations, we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located at a remote destination and carry out interviews through their robotic body.
    We provide a case study where the system was used by several journalists to report news about the system itself as well as other stories.

    In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, embodied in three different surrogate bodies. We had two physical destinations with one robot in each, and a third virtual destination where the participant would be embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable, and was rated highly by participants in terms of real-world usability. Additionally, high feelings of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides us with clues regarding the body ownership illusion with humanoid robots of different dimensions, along with insight into self-localisation and multilocation.

    Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about various factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, indirect methods of control, or even being simultaneously embodied in three different bodies. Additionally, the setup described can also be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.
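    As an illustration of the SSVEP control method compared in the study above, here is a minimal decoding sketch: it assumes two targets flickering at hypothetical frequencies (12 Hz and 15 Hz) and selects the one whose frequency dominates the power spectrum of an occipital EEG channel. The channel, sampling rate and frequencies are illustrative assumptions, not parameters taken from the thesis.

```python
import numpy as np
from scipy.signal import welch

# Hypothetical parameters: two targets flickering at 12 Hz and 15 Hz,
# EEG sampled at 256 Hz from an occipital channel (e.g. Oz).
FS = 256
TARGET_FREQS = [12.0, 15.0]

def classify_ssvep(eeg_segment, fs=FS, freqs=TARGET_FREQS):
    """Pick the target whose flicker frequency carries the most power.

    eeg_segment: 1-D array of samples from an occipital channel.
    Returns the index of the most likely attended target.
    """
    f, pxx = welch(eeg_segment, fs=fs, nperseg=fs * 2)
    # Power at each candidate flicker frequency (nearest spectral bin).
    powers = [pxx[np.argmin(np.abs(f - freq))] for freq in freqs]
    return int(np.argmax(powers))

# Example: a synthetic 4-second segment dominated by a 15 Hz response.
t = np.arange(4 * FS) / FS
segment = np.sin(2 * np.pi * 15.0 * t) + 0.5 * np.random.randn(t.size)
print(classify_ssvep(segment))  # -> 1 (the 15 Hz target)
```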

    Standardization of Protocol Design for User Training in EEG-based Brain-Computer Interface

    Brain-computer interfaces (BCIs) are systems that enable a person to interact with a machine using only neural activity. Such interaction can be non-intuitive for the user, hence training methods are developed to increase one's understanding, confidence and motivation, which would in parallel increase system performance. To clearly address the current issues in BCI user training protocol design, here it is divided into an introductory period and a BCI interaction period. First, the introductory period (before BCI interaction) must be considered as equally important as the BCI interaction for user training. To support this claim, a review of papers shows that BCI performance can depend on the methodologies presented in such an introductory period. To standardize its design, the literature from human-computer interaction (HCI) is adjusted to the BCI context. Second, during the user-BCI interaction, the interface can take a large spectrum of forms (2D, 3D, size, color etc.) and modalities (visual, auditory, haptic etc.) without following any design standard or guidelines. Namely, studies that explore perceptual affordance on neural activity show that motor neurons can be triggered by the simple observation of certain objects, and depending on the objects' properties (size, location etc.) neural reactions can vary greatly. Surprisingly, the effects of perceptual affordance have not been investigated in the BCI context. Both inconsistent introductions to BCI and variable interface designs make it difficult to reproduce experiments, predict their outcomes and compare results between them. To address these issues, a protocol design standardization for user training is proposed

    A Multifaceted Approach to Covert Attention Brain-Computer Interfaces

    Over the last years, brain-computer interfaces (BCIs) have shown their value for assistive technology and neurorehabilitation. Recently, a BCI approach for the rehabilitation of hemispatial neglect has been proposed on the basis of covert visuospatial attention (CVSA). CVSA is an internal action which can be described as shifting one's attention to the visual periphery without moving the actual point of gaze. Such attention shifts induce a lateralization in parieto-occipital blood flow and in oscillations in the so-called alpha band (8-14 Hz), which can be detected via electroencephalography (EEG), magnetoencephalography (MEG) or functional magnetic resonance imaging (fMRI). Previous studies have proven the technical feasibility of using CVSA as a control signal for BCIs, but unfortunately, these BCIs could not provide every subject with sufficient control. The aim of this thesis was to investigate the possibility of amplifying the weak lateralization patterns in the alpha band, the main reason behind insufficient CVSA BCI performance. To this end, I explored three different approaches that could lead to better-performing and more inclusive CVSA BCI systems. The first approach illuminated the changes in behavior and brain patterns when closing the loop between subject and system with continuous real-time feedback at the instructed locus of attention. I could observe that even short (20-minute) stretches of real-time feedback have an effect on behavioral correlates of attention, even when the changes observed in the EEG remained less conclusive. The second approach attempted to complement the information extracted from the EEG signal with another sensing modality that could provide additional information about the state of CVSA. To this end, I first combined functional near-infrared spectroscopy (fNIRS) with EEG measurements. The results showed that, while the EEG was able to pick up the expected lateralization in the alpha band, the fNIRS was not able to reliably image changes in blood circulation in the parieto-occipital cortex. Second, I successfully combined data from the EEG with measures of pupil size changes, induced by a high illumination contrast between the covertly attended target regions, which resulted in an improved BCI decoding performance. The third approach examined the option of using noninvasive electrical brain stimulation to boost the power of the alpha band oscillations and therefore render the lateralization pattern in the alpha band more visible against the background activity. However, I could not observe any impact of the stimulation on the ongoing alpha band power, and thus results of the subsequent effect on the lateralization remain inconclusive. Overall, these studies helped to further understand CVSA and lay a useful basis for further exploration of the connection between behavior and alpha power oscillations in CVSA tasks, as well as of potential directions to improve CVSA-based BCIs
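    For readers unfamiliar with the lateralization signal described above, here is a minimal sketch of how a CVSA control signal might be derived: alpha-band (8-14 Hz) power is computed over one left and one right parieto-occipital channel and combined into a normalized asymmetry index. The sampling rate and channel selection are illustrative assumptions; the thesis' actual pipeline is more elaborate.

```python
import numpy as np
from scipy.signal import welch

FS = 250          # assumed EEG sampling rate
ALPHA = (8, 14)   # alpha band used in the thesis (8-14 Hz)

def alpha_power(signal, fs=FS, band=ALPHA):
    """Mean spectral power of one channel inside the alpha band."""
    f, pxx = welch(signal, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return pxx[mask].mean()

def lateralization_index(left_po, right_po):
    """Normalized left-right alpha asymmetry over parieto-occipital sites.

    Covert attention to the left hemifield typically raises alpha power
    over the ipsilateral (left) hemisphere relative to the contralateral
    (right) one, so the sign of this index can serve as a simple CVSA
    control signal.
    """
    pl, pr = alpha_power(left_po), alpha_power(right_po)
    return (pl - pr) / (pl + pr)

# Example with synthetic channels: stronger alpha on the left channel.
t = np.arange(10 * FS) / FS
rng = np.random.default_rng(0)
left = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
print(lateralization_index(left, right))  # > 0: attention to the left
```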

    Effects of Interpretation Error on User Learning in Novel Input Mechanisms

    Novel input mechanisms generate signals that are interpreted as commands in computer systems. Sometimes noise from various sources can cause the system to produce errors when attempting to interpret the signal, causing a misrepresentation of the user's intention. While research has been done on understanding how these interpretation errors affect the performance of users of novel signal-based input mechanisms, such as a brain-computer interface (BCI), there is a lack of knowledge about how user learning is affected. Previous literature on command-based selection tasks has suggested that errors have a negative impact on expertise development; however, the presence of errors could conversely improve a user's learning by demanding more of the user's attention. This thesis begins by studying people's ability to use a novel input mechanism with a noisy input signal: a motor imagery BCI. By converting a user's brain signals into computer commands, a user could complete selection tasks using imagined movement. However, the high rate of interpretation errors caused by noise in the input signals made it difficult to differentiate the user's intent from the noise. As such, the results of the BCI study served as motivation to test the effects of interpretation errors on user learning. Two studies were conducted to determine how user performance and learning were affected by different rates of interpretation error in a novel input mechanism. The results from these two studies showed that interpretation errors led to slower task completion times, lower accuracy in memory recall, greater rates of user errors, and increased frustration. This new knowledge about the effects of interpretation errors can contribute to better design of input mechanisms and training programs for novel input systems
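    The notion of an interpretation error rate can be made concrete with a small simulation, not taken from the thesis: a noisy interpreter returns the intended command most of the time but substitutes a different one with a fixed probability, which is how one might vary error rates across experimental conditions. The command set and error rate below are invented for the example.

```python
import random

COMMANDS = ["left", "right", "select"]  # illustrative command set

def interpret(intended, error_rate):
    """Return the intended command, or a different one with probability
    error_rate, mimicking a noisy signal interpreter."""
    if random.random() < error_rate:
        return random.choice([c for c in COMMANDS if c != intended])
    return intended

# Estimate effective accuracy at a 20% interpretation error rate.
trials = 10_000
hits = sum(interpret("select", 0.2) == "select" for _ in range(trials))
print(f"observed accuracy: {hits / trials:.2%}")  # ~80%
```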

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently, with the European FET Presence Research initiative, there has been for the first time a burst of funded research activity in this area. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but also in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues of the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects and by University College London

    INNOVATING CONTROL AND EMOTIONAL EXPRESSIVE MODALITIES OF USER INTERFACES FOR PEOPLE WITH LOCKED-IN SYNDROME

    Patients with locked-in syndrome (LIS) have lost the ability to control any body part besides their eyes. Current solutions mainly use eye-tracking cameras to track patients' gaze as system input. However, despite the fact that interface design greatly impacts user experience, only a few guidelines have been proposed so far to ensure an easy, quick, fluid and non-tiresome computer system for these patients. On the other hand, the emergence of dedicated computer software has greatly increased patients' capabilities, but there is still a great need for improvement, as existing systems still present low usability and limited capabilities. Most interfaces designed for LIS patients aim at providing internet browsing or communication abilities. State-of-the-art augmentative and alternative communication systems mainly focus on sentence communication without considering the need for emotional expression, which is inextricable from human communication. This thesis aims at exploring new system control and expressive modalities for people with LIS. Firstly, existing gaze-based web-browsing interfaces were investigated. Page analysis and high mental workload appeared as recurring issues with common systems. To address this issue, a novel user interface was designed and evaluated against a commercial system. The results suggested that it is easier to learn and to use, quicker, more satisfying, less frustrating, less tiring and less prone to error. Mental workload was greatly diminished with this system. Other types of system control for LIS patients were then investigated. It was found that galvanic skin response may be used as system input and that stress-related biofeedback helped lower mental workload during stressful tasks. Improving communication, and in particular emotional communication, was one of the main goals of this research. A system including gaze-controlled emotional voice synthesis and a personal emotional avatar was developed for this purpose. Assessment of the proposed system highlighted the enhanced capability to have dialogues more similar to normal ones, and to express and identify emotions. Enabling emotion communication in parallel to sentences was found to help with the conversation. Automatic emotion detection seemed to be the next step toward improving emotional communication. Several studies have established that physiological signals relate to emotions. The ability to use physiological signal sensors with LIS patients, and the sensors' non-invasiveness, made them an ideal candidate for this study. One of the main difficulties of emotion detection is the collection of high-intensity affect-related data. Studies in this field are currently mostly limited to laboratory investigations, using laboratory-induced emotions, and are rarely adapted for real-life applications. A virtual reality emotion elicitation technique based on appraisal theories was proposed here in order to study physiological signals of high-intensity emotions in a real-life-like environment. While this solution successfully elicited positive and negative emotions, it did not elicit the desired emotions for all subjects and was therefore not appropriate for the goals of this research. Collecting emotions in the wild appeared as the best methodology toward emotion detection for real-life applications. The state of the art in the field was therefore reviewed and assessed using a specifically designed method for evaluating datasets collected for emotion recognition in real-life applications.
    The proposed evaluation method provides guidelines for future researchers in the field. Based on the research findings, a mobile application was developed for physiological and emotional data collection in the wild. Based on appraisal theory, this application guides users to provide valuable emotion labelling and helps them differentiate moods from emotions. A sample dataset collected using this application was compared to one collected in a paper-based preliminary study. The dataset collected using the mobile application was found to be more valuable, with data consistent with the literature. This mobile application was used to create an open-source database of affect-related physiological signals. While the path toward emotion detection usable in real-life applications is still long, we hope that the tools provided to the research community will represent a step toward achieving this goal in the future. Automatically detecting emotions could be useful not only for LIS patients to communicate but also for total-LIS patients, who have lost the ability to move their eyes. Indeed, giving family and caregivers the ability to visualize and therefore understand the patients' emotional state could greatly improve their quality of life. This research provided LIS patients and the scientific community with tools to improve augmentative and alternative communication: technologies with better interfaces, emotion expression capabilities and real-life emotion detection. Emotion recognition methods for real-life applications could enhance not only health care but also robotics, domotics and many other fields of study. A complete, fully gaze-controlled system with all the developed solutions for LIS patients was made available open-source. This is expected to enhance their daily lives by improving their communication and by facilitating the development of novel assistive system capabilities
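    As a rough illustration of the galvanic-skin-response input mentioned in the abstract above, the sketch below treats a sustained rise in skin conductance above a resting baseline as a binary switch. The sampling rate, threshold and hold time are invented for the example and are not the thesis' parameters.

```python
import numpy as np

# Hypothetical setup: skin conductance (in microsiemens) sampled at 4 Hz;
# a sustained rise above the resting baseline acts as a binary "switch".
FS = 4

def gsr_switch(conductance, baseline, threshold=0.05, hold_s=1.0, fs=FS):
    """Return True once conductance stays `threshold` uS above `baseline`
    for `hold_s` seconds, debouncing momentary fluctuations."""
    hold_n = int(hold_s * fs)
    above = np.asarray(conductance) > baseline + threshold
    run = 0
    for flag in above:
        run = run + 1 if flag else 0
        if run >= hold_n:
            return True
    return False

# Example: a slow ramp that crosses and holds above the threshold.
signal = np.concatenate([np.full(8, 2.00), np.linspace(2.0, 2.2, 12)])
print(gsr_switch(signal, baseline=2.0))  # -> True
```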

    Applications of realtime fMRI for non-invasive brain computer interface-decoding and neurofeedback

    Non-invasive brain-computer interfaces (BCIs) seek to enable or restore brain function by using neuroimaging, e.g. functional magnetic resonance imaging (fMRI), to engage brain activations without the need for explicit behavioural output or surgical implants. Brain activations are converted into output signals for use in communication interfaces or motor prosthetics, or to directly shape brain function via a feedback loop. The aim of this thesis was to develop cognitive BCIs using realtime fMRI (rt-fMRI), with the potential for use as a communication interface, or for initiating neural plasticity to facilitate neurorehabilitation. Rt-fMRI enables brain activation to be manipulated directly to produce changes in function, such as perception. Univariate and multivariate classification approaches were used to decode brain activations produced by the deployment of covert spatial attention to simple visual stimuli. Primary and higher-order visual areas were examined, as well as potential control regions. The classification platform was then developed to include the use of real-world visual stimuli, exploiting the use of category-specific visual areas and demonstrating real-world applicability as a communications interface. Online univariate classification of spatial attention was successfully achieved, with individual classification accuracies for 4-quadrant spatial attention reaching 70%. Further, a novel implementation of m-sequences enabled the use of the timing of stimulus presentation to enhance signal characterisation. An established rt-fMRI analysis loop was then used for neurofeedback-led manipulation of category-specific visual brain regions, modulating their functioning and, as a result, biasing visual perception during binocular rivalry. These changes were linked with functional and effective connectivity changes in trained regions, as well as in a putative top-down control region. The work presented provides proof of principle for non-invasive BCIs using rt-fMRI, with the potential for translation into the clinical environment. Decoding and neurofeedback applied to non-invasive and implantable BCIs form an evolving continuum of options for enabling and restoring brain function
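    The m-sequences mentioned above are maximum-length binary sequences, commonly generated with a linear feedback shift register (LFSR). The sketch below is a generic example using the primitive polynomial x^5 + x^3 + 1; the register length and taps used in the actual experiments are not specified in the abstract.

```python
def m_sequence(taps=(5, 3), length=None):
    """Generate a binary maximum-length sequence (m-sequence) with a
    Fibonacci-style linear feedback shift register.

    taps: 1-indexed register positions XORed to form the feedback bit;
    (5, 3) corresponds to the primitive polynomial x^5 + x^3 + 1 and
    yields a sequence of period 2**5 - 1 = 31.
    """
    n = max(taps)
    period = 2 ** n - 1
    length = period if length is None else length
    state = [1] * n                      # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])            # output the last register bit
        fb = 0
        for t in taps:
            fb ^= state[t - 1]           # XOR the tapped bits
        state = [fb] + state[:-1]        # shift right, insert feedback
    return out

seq = m_sequence()
print(len(seq), sum(seq))  # period 31, with 16 ones and 15 zeros
```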