16 research outputs found

    BCI and Eye Gaze: Collaboration at the Interface

    The BCI as a Pervasive Technology – A Research Plan

    Accessing Tele-Services using a Hybrid BCI Approach

    Quantifying Brain Activity for Task Engagement

    Ureka Potential

    Interaction Paradigms for Brain-Body Interfaces for Computer Users with Brain Injuries

    In comparison to all other types of injury, injuries to the brain are among the most likely to result in death or permanent disability. Some brain-injured people cannot communicate, recreate, or control their environment because of severe motor impairment, and this group of individuals with severe head injury has received limited help from assistive technology. Brain-Computer Interfaces have opened up a spectrum of assistive technologies that are particularly appropriate for people with traumatic brain injury, especially those with “locked-in” syndrome. The research challenge is to develop novel interaction paradigms suited to brain-injured individuals, who could then use them for everyday communication. Such interaction paradigms should require minimal training, be reconfigurable, and demand minimal effort to use. This thesis reports on the development of novel interaction paradigms for Brain-Body Interfaces that help brain-injured people communicate, recreate and control their environment using computers despite the severity of their brain injury. The investigation was carried out in three phases. Phase one was an exploratory study in which a first novel interaction paradigm was developed and evaluated with able-bodied and disabled participants; the results fed into the next phase of the investigation. Phase two was carried out with able-bodied participants, who acted as the development group for the second novel interaction paradigm. This second paradigm was evaluated with non-verbal participants with severe brain injury in phase three. An iterative design research methodology was used to develop the interaction paradigms, and a non-invasive assistive technology device named Cyberlink™ was chosen as the Brain-Body Interface. This research improved on previous work in the area by developing the new interaction paradigms of personalised tiling and discrete acceleration for Brain-Body Interfaces. The research hypothesis of the study, that the performance of the Brain-Body Interface can be improved by the use of novel interaction paradigms, was successfully demonstrated.
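
    As an illustration only (the abstract does not spell out the paradigms it names), the sketch below shows one plausible form a switch-driven tiled selection interface can take: a single thresholded Cyberlink-style signal steps through a grid of tiles, and the dwell time shortens in discrete increments after each pass, loosely in the spirit of "discrete acceleration". The tile set, threshold and timings are hypothetical and are not taken from the thesis.

        # Hypothetical sketch of a switch-driven tiled selection loop for a
        # Brain-Body Interface. Tile layout, threshold and timing values are
        # illustrative assumptions, not the paradigms developed in the thesis.
        import random
        import time

        TILES = ["yes", "no", "help", "music", "TV", "lights"]  # assumed tile set
        DWELL_STEPS = [2.0, 1.5, 1.0, 0.75]   # dwell times (s), shortened in discrete steps
        SELECT_THRESHOLD = 0.6                # assumed threshold on the normalised signal

        def read_biosignal() -> float:
            """Placeholder for a normalised Cyberlink-style forehead signal in [0, 1].

            Simulated with random noise here so the sketch runs on its own.
            """
            return random.random() * 0.7

        def scan_and_select() -> str:
            """Highlight tiles one by one; a supra-threshold signal selects a tile."""
            step = 0
            while True:
                for tile in TILES:
                    print(f"highlighting: {tile}")
                    if read_biosignal() > SELECT_THRESHOLD:
                        return tile
                    time.sleep(DWELL_STEPS[step])
                # After a full pass with no selection, speed up by one discrete step.
                step = min(step + 1, len(DWELL_STEPS) - 1)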

    BRAIN-COMPUTER MUSIC INTERFACING: DESIGNING PRACTICAL SYSTEMS FOR CREATIVE APPLICATIONS

    Brain-computer music interfacing (BCMI) presents a novel approach to music making, as it requires only the brainwaves of a user to control musical parameters. This offers immediate benefits for users with motor disabilities that may otherwise prevent them from engaging in traditional musical activities such as composition, performance or collaboration with other musicians. BCMI systems with active control, where a user makes cognitive choices that are detected within brain signals, provide a platform for developing new approaches to these activities. BCMI systems with passive control present an interesting alternative, in which control over music is accomplished by harnessing brainwave patterns associated with subconscious mental states. Recent developments in brainwave measuring technologies, in particular electroencephalography (EEG), have made brainwave interaction with computer systems more affordable and accessible, and the time is ripe to research the potential such technologies offer for creative applications for users of all abilities. This thesis presents an account of BCMI development that investigates active, passive and hybrid (combined) methods of control, including control over electronic music, acoustic instrumental music and multi-brain systems. In practice there are many obstacles to detecting useful brainwave signals, particularly when systems designed for medical studies are scaled for use outside laboratory settings. Two key areas are addressed throughout this thesis: first, improving the accuracy of meaningful brain signal detection in BCMI, and second, exploring the creativity available in user control through the ways brainwaves can be mapped to musical features. Six BCMIs are presented, each with the objective of exploring a unique aspect of user control. Four of these systems are designed for live BCMI concert performance, one evaluates a proof-of-concept through end-user testing, and one is designed as a musical composition tool. The thesis begins by exploring the field of brainwave detection and control and identifies the steady-state visually evoked potential (SSVEP) method of eliciting brainwave control as a suitable technique for use in BCMI. In an attempt to improve the signal accuracy of the SSVEP technique, a new modular hardware unit is presented that provides accurate SSVEP stimuli suitable for live music performance. Experimental data confirm the performance of the unit in tests across three different EEG hardware platforms: results across 11 users indicate that a mean accuracy of 96% and an average response time of 3.88 seconds are attainable with the system. These results contribute to the development of the BCMI for Activating Memory, a multi-user system. Once a stable SSVEP platform is developed, control is extended through the integration of two further brainwave control techniques: affective (emotional) state detection and motor imagery response. To ascertain the suitability of the former, a pilot study confirms the accuracy of EEG in measuring affective states in response to music. This thesis demonstrates how a range of brainwave detection methods can be used for creative control in musical applications. Video and audio excerpts of BCMI pieces are included in the Appendices.
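
    The abstract names SSVEP as the control technique but does not describe how a selection is read from the EEG. The sketch below shows one common frequency-power approach as a minimal illustration; the sampling rate, electrode choice, stimulus frequencies and band widths are assumptions, not parameters taken from the thesis.

        # Hypothetical sketch: deciding which SSVEP stimulus a user attends to by
        # comparing EEG power at each flicker frequency (and its second harmonic).
        # Sampling rate, frequencies and band widths are illustrative assumptions.
        import numpy as np
        from scipy.signal import welch

        FS = 250                        # EEG sampling rate in Hz (assumed)
        STIM_FREQS = [7.0, 9.0, 11.0]   # flicker frequencies of the stimuli (assumed)

        def ssvep_decision(eeg_window: np.ndarray) -> int:
            """Return the index of the stimulus whose frequency shows the most power.

            eeg_window: 1-D array of samples from an occipital channel (e.g. Oz).
            """
            freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
            scores = []
            for f in STIM_FREQS:
                # Sum power in narrow bands around the stimulus frequency and its
                # second harmonic, a common way to make the decision more robust.
                band = (
                    ((freqs > f - 0.25) & (freqs < f + 0.25))
                    | ((freqs > 2 * f - 0.25) & (freqs < 2 * f + 0.25))
                )
                scores.append(psd[band].sum())
            return int(np.argmax(scores))

        if __name__ == "__main__":
            # Example: a 4-second window of synthetic EEG dominated by 9 Hz.
            t = np.arange(0, 4, 1 / FS)
            fake_eeg = np.sin(2 * np.pi * 9.0 * t) + 0.5 * np.random.randn(t.size)
            print("Selected stimulus index:", ssvep_decision(fake_eeg))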

    Enhancement and optimization of a multi-command-based brain-computer interface

    Brain-computer interfaces (BCIs) allow disabled people to control many appliances without any physical interaction (e.g., pressing a button). The steady-state visually evoked potential (SSVEP) is brain activity elicited by a visual stimulation paradigm. This dissertation addresses problems that limit the usability of SSVEP-based BCI systems by optimizing and enhancing their performance through particular design choices. The main contribution of this work is improving the brain's response to stimulation through focal approaches.
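
    The abstract does not state which classifier the dissertation uses for its multi-command system. One widely used approach for multi-command SSVEP BCIs, sketched below purely as background, is canonical correlation analysis (CCA) against sinusoidal reference signals; the command frequencies, harmonic count and channel layout here are assumptions.

        # Hypothetical sketch: multi-command SSVEP classification with canonical
        # correlation analysis (CCA). A standard technique in the SSVEP literature,
        # not necessarily the design used in this dissertation.
        import numpy as np
        from sklearn.cross_decomposition import CCA

        FS = 250                                 # sampling rate in Hz (assumed)
        COMMAND_FREQS = [6.0, 7.5, 10.0, 12.0]   # one flicker frequency per command (assumed)
        N_HARMONICS = 2

        def reference_signals(freq: float, n_samples: int) -> np.ndarray:
            """Sine/cosine references at the stimulus frequency and its harmonics."""
            t = np.arange(n_samples) / FS
            refs = []
            for h in range(1, N_HARMONICS + 1):
                refs.append(np.sin(2 * np.pi * h * freq * t))
                refs.append(np.cos(2 * np.pi * h * freq * t))
            return np.column_stack(refs)         # shape: (n_samples, 2 * N_HARMONICS)

        def classify_command(eeg: np.ndarray) -> int:
            """Pick the command whose references correlate best with the EEG.

            eeg: array of shape (n_samples, n_channels) from occipital electrodes.
            """
            correlations = []
            for f in COMMAND_FREQS:
                refs = reference_signals(f, eeg.shape[0])
                cca = CCA(n_components=1)
                cca.fit(eeg, refs)
                u, v = cca.transform(eeg, refs)
                correlations.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
            return int(np.argmax(correlations))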

    Robotic Embodiment: Developing a System for and Applications with Full Body Ownership of a Humanoid Robot

    It has been shown that, with appropriate multisensory stimulation, an illusion of owning an artificial object as part of one's own body can be induced in people. Such body ownership illusions have been shown to occur with artificial limbs, such as rubber hands, and even with entire artificial or virtual bodies. Although extensive research has been carried out regarding full body ownership illusions with mannequins and virtual bodies, few studies apply this concept to humanoid robots. On the other hand, extensive research has been carried out with robots in terms of telepresence and remote manipulation of the robot, known as teleoperation. Combining these concepts gives rise to a highly immersive, embodied experience in a humanoid robot located at a remote physical location, which holds great potential for real-world applications. In this thesis, we aim to apply the phenomenon of full body ownership illusions in the context of humanoid robots, and to develop real-world applications where this technology could be beneficial. More specifically, relying on knowledge gained from previous studies of body ownership illusions, we investigated whether it is possible to elicit this illusion with a humanoid robot. In addition, we developed a system in the context of telepresence robots, in which the participant is embodied in a humanoid robot that is present in a different physical location and can use this robotic body to interact with the remote environment. To test the functionality of the system and to gain an understanding of body ownership illusions with robots, we carried out two experimental studies and one case study in which the system was demonstrated as a real-world application. In the Brain-Computer Interface versus Eye Tracker study, we used our system to investigate whether it was possible to induce a full body ownership illusion over a humanoid robot with a highly ‘robotic’ appearance. In addition, we compared two different abstract methods of control, a Steady-State Visually Evoked Potential (SSVEP) based Brain-Computer Interface and eye tracking, in an immersive environment to drive the robot. This was done mainly as motivation for developing a prototype of a system that could be used by disabled patients. Our results showed that a feeling of body ownership illusion and agency can be induced even though the postures of the participant and the embodied robot were incongruent (the participant was sitting, while the robot was standing). Additionally, both BCI and eye tracking were reported to be suitable methods of control, although the degree of body ownership illusion was influenced by the control method, with higher ownership scores reported for the BCI condition. In the Tele-Immersive Journalism case study, we used the same system as above, but with the added capability of letting the participant control the robot body by moving their own body. Since in this case we provided synchronous visuomotor correlations with the robotic body, we expected an even higher level of body ownership illusion. By making the robot body the source of the associated sensations, we simulate a type of virtual teleportation. We applied this system successfully to the context of journalism, where a journalist could be embodied in a humanoid robot located at a remote destination and carry out interviews through their robotic body.
We provide a case study in which the system was used by several journalists to report news about the system itself as well as to report other stories. In the Multi-Destination Beaming study, we extended the functionality of the system to include three destinations. The aim of the study was to investigate whether participants could cope with being in three places at the same time, embodied in three different surrogate bodies. We had two physical destinations with one robot in each, and a third virtual destination where the participant was embodied in a virtual body. The results indicate that the system was physically and psychologically comfortable and was rated highly by participants in terms of usability in the real world. Additionally, high levels of body ownership illusion and agency were reported, which were not influenced by the robot type. This provides clues regarding the body ownership illusion with humanoid robots of different dimensions, along with insight into self-localisation and multilocation. Overall, our results show that it is possible to elicit a full body ownership illusion over humanoid robotic bodies. The studies presented here advance the current theoretical framework of body representation, agency and self-perception by providing information about various factors that may affect the illusion of body ownership, such as a highly robotic appearance of the artificial body, indirect methods of control, or even being simultaneously embodied in three different bodies. Additionally, the setup described can be used to great effect for highly immersive remote robotic embodiment applications, such as the one demonstrated here in the field of journalism.
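
    For readers unfamiliar with how an abstract control method such as an SSVEP BCI or an eye tracker can drive a remote robot, the fragment below sketches one generic way to map a small set of discrete selections onto motion commands. It is a hypothetical illustration: the command set, velocity values and send_velocity transport are assumptions, not the system described in the thesis.

        # Hypothetical sketch: dispatching discrete BCI / eye-gaze selections to a
        # remote humanoid robot as coarse motion commands. Command names, velocity
        # values and the send_velocity() transport are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class MotionCommand:
            linear: float   # forward speed in m/s
            angular: float  # turn rate in rad/s

        # One entry per selectable target (e.g. per SSVEP stimulus or gaze region).
        COMMANDS = {
            0: MotionCommand(0.2, 0.0),    # walk forward
            1: MotionCommand(0.0, 0.4),    # turn left
            2: MotionCommand(0.0, -0.4),   # turn right
            3: MotionCommand(0.0, 0.0),    # stop
        }

        def send_velocity(cmd: MotionCommand) -> None:
            """Placeholder for the network call that forwards the command to the robot."""
            print(f"sending linear={cmd.linear} angular={cmd.angular}")

        def on_selection(selection_index: int) -> None:
            """Called whenever the BCI or eye tracker reports a new selection."""
            send_velocity(COMMANDS.get(selection_index, COMMANDS[3]))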