
    Actual and Imagined Movement in BCI Gaming

    Most research on Brain-Computer Interfaces (BCI) focuses on developing ways of expression for disabled people who are not able to communicate through other means. Recently it has been shown that BCI can also be used in games to give users a richer experience and new ways to interact with a computer or game console. This paper describes research conducted to find out what the differences are between using actual and imagined movement as modalities in a BCI game. Results show that there are significant differences in user experience and that actual movement is a more robust way of communicating through a BCI.

    Study and experimentation of cognitive decline measurements in a virtual reality environment

    At a time when digital technology has become an integral part of our daily lives, we can ask ourselves how our well-being is evolving. Highly immersive virtual reality allows the development of environments that promote relaxation and can improve the cognitive abilities and quality of life of many people. The first aim of this study is to reduce the negative emotions and improve the cognitive abilities of people suffering from subjective cognitive decline (SCD). To this end, we developed a virtual reality environment called Savannah VR, where participants followed an avatar across a savannah. We recruited nineteen people with SCD to participate in the virtual savannah experience. The Emotiv Epoc headset captured their emotions for the entire virtual experience. The results show that immersion in the virtual savannah reduced the negative emotions of the participants and that the positive effects continued afterward. Participants also improved their cognitive performance. Confusion often occurs during learning when students do not understand new material. It is a state that is also very present in people with dementia because of the decline in their cognitive abilities. Detecting and overcoming confusion could thus improve the well-being and cognitive performance of people with cognitive impairment. The second objective of this work is therefore to develop a tool to detect confusion. We conducted two experiments and obtained a machine learning model based on brain signals that recognizes four levels of confusion (90% accuracy). In addition, we created another model to recognize the cognitive function related to confusion (82% accuracy).
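
    The abstract reports a brain-signal model that distinguishes four levels of confusion with about 90% accuracy but gives no implementation detail. The minimal Python sketch below shows one generic way such a classifier could be built (band-power features from fixed EEG windows fed to a cross-validated SVM); the window shape, feature choice, sampling rate and classifier are assumptions for illustration, not the pipeline reported in the thesis.

```python
# Hedged sketch: classifying four levels of confusion from EEG windows.
# The band-power features, window handling and SVM classifier are generic
# assumptions for illustration, not the pipeline reported in the thesis.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # assumed sampling rate in Hz (Emotiv EPOC-class headset)

def band_power(window, fs=FS, bands=((4, 8), (8, 13), (13, 30))):
    """Mean theta, alpha and beta band power for each EEG channel.
    `window` is assumed to have shape (n_channels, n_samples)."""
    freqs, psd = welch(window, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]
    return np.concatenate(feats)

def evaluate(eeg_windows, labels):
    """5-fold cross-validated accuracy for windows labelled 0-3 (confusion level)."""
    X = np.array([band_power(w) for w in eeg_windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return cross_val_score(clf, X, labels, cv=5).mean()
```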

    Incorporating Cognitive Neuroscience Techniques to Enhance User Experience Research Practices

    User Experience (UX) involves every interaction that customers have with products, and it plays a crucial role in determining the success of a product in the market. While there are numerous methods available in the literature for assessing UX, they often overlook the emotional aspect of the user's experience. As a result, cognitive neuroscience methods are gaining popularity, but they have certain limitations such as difficulty in collecting neurophysiological data, potential for errors, and lengthy procedures. This article aims to examine the most effective research practices using cognitive neuroscience techniques and develop a standardized procedure for conducting UX research. To achieve this objective, the study conducts a comprehensive review of UX research employing cognitive neuroscience methods published between 2017 and 2022.

    Framework of controlling 3d virtual human emotional walking using BCI

    A Brain-Computer Interface (BCI) is a device that can read and acquire brain activity. The human body is controlled by brain signals, which act as its main controller: emotions and thoughts are translated by the brain into brain signals and expressed as mood. These signals are the key component of the electroencephalogram (EEG), and through signal processing the features representing mood (behaviour) can be extracted, with emotion as the major feature. This paper proposes a new framework for recognizing a person's inner emotions from EEG signals acquired with a BCI device. The framework goes through five steps: reading the brain signal, classifying it to obtain the emotion, mapping the emotion, synchronizing the animation of the 3D virtual human, and testing and evaluating the work. To the best of our knowledge, there is no existing framework for controlling 3D virtual humans in this way. Implementing the framework should benefit games by making the emotional walking of 3D virtual humans more controllable and realistic. Commercial games and Augmented Reality systems are possible beneficiaries of this technique. © 2015 Penerbit UTM Press. All rights reserved.
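
    The abstract lists the framework's five steps (read the brain signal, classify it to obtain an emotion, map the emotion, synchronise the 3D virtual human's animation, then test and evaluate) without implementation detail. The sketch below is a hedged illustration of that control loop; the reader, classifier and avatar objects, the emotion labels and the emotion-to-gait mapping are hypothetical placeholders, not components of the proposed framework.

```python
# Illustrative sketch of the five-step loop described in the abstract:
# acquire EEG -> classify emotion -> map emotion -> drive the avatar's walk.
# The reader, classifier and avatar objects passed in are hypothetical
# placeholders, not classes from the paper.
from dataclasses import dataclass

EMOTION_TO_GAIT = {          # assumed mapping from emotion to walking style
    "happy":   {"speed": 1.3, "posture": "upright"},
    "sad":     {"speed": 0.7, "posture": "slumped"},
    "neutral": {"speed": 1.0, "posture": "relaxed"},
}

@dataclass
class GaitParams:
    speed: float
    posture: str

def control_step(eeg_reader, classifier, avatar):
    """One iteration of the emotion-driven walking controller."""
    window = eeg_reader.read_window()            # step 1: read the brain signal
    emotion = classifier.predict(window)         # step 2: classify to get the emotion
    params = EMOTION_TO_GAIT.get(emotion, EMOTION_TO_GAIT["neutral"])
    gait = GaitParams(**params)                  # step 3: map the emotion
    avatar.set_walk_animation(gait.speed, gait.posture)  # step 4: synchronise animation
    return emotion, gait                         # step 5: log for testing and evaluation
```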

    Recent and upcoming BCI progress: overview, analysis, and recommendations

    Brain–computer interfaces (BCIs) are finally moving out of the laboratory and beginning to gain acceptance in real-world situations. As BCIs gain attention with broader groups of users, including persons with different disabilities and healthy users, numerous practical questions gain importance. What are the most practical ways to detect and analyze brain activity in field settings? Which devices and applications are most useful for different people? How can we make BCIs more natural and sensitive, and how can BCI technologies improve usability? What are some general trends and issues, such as combining different BCIs or assessing and comparing performance? This book chapter provides an overview of the different sections of this book, summarizing how the authors address these and other questions. We also present some predictions and recommendations that ensue from our experience discussing these and other issues with our authors and other researchers and developers within the BCI community. We conclude that, although some directions are hard to predict, the field is definitely growing and changing rapidly, and will continue doing so in the next several years.

    PhysioVR: a novel mobile virtual reality framework for physiological computing

    Virtual Reality (VR) is morphing into a ubiquitous technology by leveraging smartphones and screenless cases to provide highly immersive experiences at a low price point. The result of this shift in paradigm is now known as mobile VR (mVR). Although mVR offers numerous advantages over conventional immersive VR methods, one of its biggest limitations concerns the interaction pathways available for mVR experiences. Using physiological computing principles, we created the PhysioVR framework, an open-source software tool developed to facilitate the integration of physiological signals measured through wearable devices in mVR applications. PhysioVR includes heart rate (HR) signals from Android wearables, electroencephalography (EEG) signals from a low-cost brain-computer interface, and electromyography (EMG) signals from a wireless armband. The physiological sensors are connected to a smartphone via Bluetooth, and PhysioVR streams the data using the UDP communication protocol, allowing multicast transmission to a third-party application such as the Unity3D game engine. Furthermore, the framework provides bidirectional communication with the VR content, allowing external event triggering through real-time control as well as data recording options. We developed a demo game project called EmoCat Rescue, which encourages players to modulate HR levels in order to successfully complete the in-game mission. EmoCat Rescue is included in the PhysioVR project, which can be freely downloaded. This framework simplifies the acquisition, streaming and recording of multiple physiological signals and parameters from wearable consumer devices, providing a single and efficient interface to create novel physiologically-responsive mVR applications.
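
    The abstract describes streaming wearable signals over UDP with multicast so that a third-party application such as Unity3D can consume them. The Python sketch below illustrates that general pattern with a heart-rate sender and a matching receiver; the multicast group, port and JSON message format are assumptions, not the PhysioVR wire protocol.

```python
# Generic sketch of multicast UDP streaming of a physiological signal,
# loosely modelled on the pattern described in the PhysioVR abstract.
# The group address, port and JSON payload are assumptions, not the
# framework's actual protocol.
import json
import socket
import time

GROUP, PORT = "239.0.0.1", 5005   # hypothetical multicast group and port

def send_heart_rate(samples):
    """Broadcast heart-rate samples (beats per minute) as JSON datagrams."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for bpm in samples:
        msg = json.dumps({"signal": "HR", "value": bpm, "t": time.time()})
        sock.sendto(msg.encode(), (GROUP, PORT))
        time.sleep(1.0)           # one sample per second

def receive():
    """Consume the stream, e.g. on the game-engine side."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(1024)
        print(json.loads(data))
```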

    Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems

    Contemporary Data Capture and Interactive Systems (DCIS) involve various technical complexities such as multimodal data types, diverse hardware and software components, time synchronisation issues and distributed deployment configurations. Building these systems is inherently difficult and requires addressing these complexities before the intended and purposeful functionalities can be attained. The technical issues are often common and similar across diverse applications. This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework, a generic solution to address the technical complexities in building DCISs. The proposed solution is an abstract software framework that can be extended and customised to any application requirements. UbiITS includes all fundamental software components, techniques, system-level layer abstractions and a reference architecture as a collection to enable the systematic construction of complex DCISs. This work details four case studies to showcase the versatility and extensibility of the UbiITS framework's functionalities and to demonstrate how it was employed to successfully solve a range of technical requirements. In each case UbiITS operated as the core element of the application. Additionally, these case studies are novel systems by themselves in each of their domains. Longstanding technical issues, such as flexibly integrating and interoperating multimodal tools and precise time synchronisation, were resolved in each application by employing UbiITS. The framework enabled establishing a functional system infrastructure in these cases, essentially opening up new lines of research in each discipline that would not have been possible without the infrastructure provided by the framework. The thesis further presents a sample implementation of the framework in device firmware, exhibiting its capability to be implemented directly on a hardware platform. Summary metrics are also produced to establish the complexity, reusability, extendibility, implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants EP/F02553X/1, 114433 and 11394.
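
    The abstract names precise time synchronisation of multimodal streams as one of the core problems the framework addresses, without detailing an algorithm here. The sketch below shows one common, generic approach (aligning two time-stamped streams by nearest-timestamp matching within a tolerance); it assumes each stream arrives as time-ordered (timestamp, value) pairs and illustrates the problem rather than the mechanism used inside UbiITS.

```python
# Generic nearest-timestamp alignment of two multimodal streams, assuming
# each stream is a time-ordered list of (timestamp_seconds, value) pairs.
# This illustrates the synchronisation problem; it is not UbiITS's method.
import bisect

def align(reference, other, tolerance=0.05):
    """Pair each sample in `reference` with the nearest-in-time sample in
    `other`, dropping pairs further apart than `tolerance` seconds."""
    other_times = [t for t, _ in other]
    pairs = []
    for t_ref, v_ref in reference:
        i = bisect.bisect_left(other_times, t_ref)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(other_times[k] - t_ref))
        if abs(other_times[j] - t_ref) <= tolerance:
            pairs.append((t_ref, v_ref, other[j][1]))
    return pairs
```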

    Using brain-computer interaction and multimodal virtual-reality for augmenting stroke neurorehabilitation

    Every year millions of people suffer from stroke, resulting in initial paralysis, slow motor recovery and chronic conditions that require continuous rehabilitation and therapy. The increasing socio-economic and psychological impact of stroke makes it necessary to find new approaches to minimize its sequelae, as well as novel tools for effective, low-cost and personalized rehabilitation. The integration of current ICT approaches and Virtual Reality (VR) training (based on exercise therapies) has shown significant improvements. Moreover, recent studies have shown that mental practice and neurofeedback improve task performance. To date, detailed information on which neurofeedback strategies lead to successful functional recovery is not available, and very little is known about how to optimally utilize neurofeedback paradigms in stroke rehabilitation. Based on these limitations, the target of this project is to investigate and develop a novel upper-limb rehabilitation system using novel ICT technologies, including Brain-Computer Interfaces (BCIs) and VR systems. Here, through a set of studies, we illustrate the design of the RehabNet framework and its focus on integrative motor and cognitive therapy based on VR scenarios. Moreover, we broadened the inclusion criteria to low-mobility patients through the development of neurofeedback tools based on Brain-Computer Interfaces, while investigating the effects of a brain-to-VR interaction.