
    Improving the performance of GIS/spatial analysts through novel applications of the Emotiv EPOC EEG headset

    Geospatial information systems are used to analyze spatial data and provide decision makers with relevant, up-to-date information. The processing time required to produce this information is a critical component of response time. Despite advances in algorithms and processing power, many "human-in-the-loop" factors remain. Given the limited number of geospatial professionals, it is important that analysts use their time effectively. Automating common tasks and speeding up human-computer interaction without disrupting analysts' workflow or attention is therefore highly desirable. The following research describes a novel approach to increasing productivity by incorporating a wireless, wearable electroencephalograph (EEG) headset into the geospatial workflow.

    Error related negativity in observing interactive tasks

    Error Related Negativity (ERN) is triggered when a user makes a mistake or the application behaves differently from their expectation. It can also appear when observing another user make a mistake. This paper investigates ERN in collaborative settings, where observing another user (the executer) perform a task is typical, and then explores its applicability to HCI. We first show that ERN can be detected in signals captured by commodity EEG headsets, such as an Emotiv headset, when observing another person perform a typical multiple-choice reaction-time task. We then investigate anticipation effects by detecting ERN in the interval when the executer is reaching towards an answer. We show that this signal can be detected with both a clinical EEG device and an Emotiv headset. Our results show that online single-trial detection is possible with both headsets during tasks typical of collaborative interactive applications; however, there is a trade-off between detection speed and the quality/price of the headsets. Based on these results, we discuss and present several HCI scenarios for the use of ERN in observation tasks and collaborative settings.
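    As a concrete illustration of the single-trial detection step described above, the sketch below classifies response-locked epochs using windowed mean amplitudes and an LDA classifier. It is only a hedged sketch on synthetic data: the channel count, sampling rate, time windows and choice of classifier are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic response-locked epochs: 200 trials, 14 channels (Emotiv-like),
# 128 samples at an assumed 128 Hz (0-1 s after the observed response).
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 14, 128))
is_error = rng.integers(0, 2, size=200)          # 1 = observed error trial

def windowed_means(x, sfreq=128, windows=((0.05, 0.15), (0.15, 0.30), (0.30, 0.50))):
    """Mean amplitude per channel in a few post-response time windows."""
    feats = [x[:, :, int(a * sfreq):int(b * sfreq)].mean(axis=2) for a, b in windows]
    return np.concatenate(feats, axis=1)          # shape: (trials, channels * windows)

X = windowed_means(epochs)
clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, is_error, cv=5).mean())   # ~0.5 (chance) on random data
```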

    Towards Reliable Brain-Computer Interface: Achieving Perfect Accuracy by Sacrificing Time

    A brain-computer interface (BCI) is a computer system for extracting the brain's electrical neural signals and using them to control computer applications. To operate a BCI, the user must concentrate on some mental task. Besides measuring the signals, the BCI converts the raw electrical signal to a digital representation and maps the data to computer commands. Unfortunately, the probability of predicting the right command is below 100%, and therefore the reliability of these systems is relatively low. Low reliability is a major problem for BCI, since these systems will not be widely trusted and used while the prediction accuracy is low. Existing solutions usually try to improve the prediction accuracy of the BCI without paying much attention to the time required for a single concentration attempt by the user: they apply different prediction models and signal processing techniques in order to raise the accuracy of a single prediction. Our solution goes the opposite way: it tries to discover how many concentration attempts should be made in a row (i.e., how long it takes) to guarantee a prediction accuracy of 99%. The solution described in the thesis is based on Condorcet's jury theorem [1]. It states that if there are two options and the chance of picking the correct one is larger than 50%, then, if several attempts are made in a row, the probability of picking the correct option by majority vote rises with the number of attempts. In this work we apply this Condorcet principle in a BCI setting. First we develop a system that reaches a single-attempt prediction accuracy of more than 50%, and then we use multiple concentration attempts in a row to improve the overall accuracy. We expect that, given enough attempts, we can reach 99% classification accuracy. We compare the empirical results with the theoretical estimates and discuss them. BCI technology is a relatively young field; to fully integrate it into our ordinary life, contributions from scientists and engineers are required to turn BCI into a reliable system. The following work contributes to the reliability of BCI systems.
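    The majority-vote argument above is easy to make concrete. The sketch below (a minimal illustration, not the thesis implementation) computes the probability from Condorcet's jury theorem that a majority of n independent attempts is correct and searches for the smallest odd n that reaches a target accuracy; the single-attempt accuracy of 0.8 in the example is an assumed value.

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Condorcet: probability that a strict majority of n independent
    binary attempts is correct, given single-attempt accuracy p (odd n)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

def attempts_for_target(p: float, target: float = 0.99, max_n: int = 999) -> int:
    """Smallest odd number of attempts whose majority vote reaches the target."""
    for n in range(1, max_n + 1, 2):
        if majority_vote_accuracy(p, n) >= target:
            return n
    raise ValueError("target not reached within max_n attempts")

# Assumed single-attempt accuracy of 80%: 13 attempts suffice for 99%.
print(attempts_for_target(0.8))                    # -> 13
print(round(majority_vote_accuracy(0.8, 13), 4))   # -> ~0.993
```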

    Past, Present, and Future of EEG-Based BCI Applications

    An electroencephalography (EEG)-based brain–computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating patients' return to normal life. Beyond this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance by making them more efficient and collaborative and by supporting self-development. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 to 2019. The systematic literature review was prepared using three databases: PubMed, Web of Science and Scopus, and was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of the research between the medical and non-medical domains has been analyzed and further categorized into fields of research within each domain. The equipment used for gathering EEG data and the signal processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future are analyzed.

    Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm

    Robotics has been successfully applied in the design of collaborative robots for assisting people with motor disabilities. However, man-machine interaction is difficult for those who suffer severe motor disabilities. The aim of this study was to test the feasibility of a low-cost robotic arm control system with an EEG-based brain-computer interface (BCI). The BCI system relies on the Steady-State Visually Evoked Potentials (SSVEP) paradigm. A cross-platform application was implemented in C++. This C++ platform, together with the open-source software Openvibe, was used to control a Staubli TX60 robot arm. Communication between Openvibe and the robot was carried out through the Virtual Reality Peripheral Network (VRPN) protocol. EEG signals were acquired with the 8-channel Enobio amplifier from Neuroelectrics. For the processing of the EEG signals, Common Spatial Pattern (CSP) filters and a Linear Discriminant Analysis (LDA) classifier were used. Five healthy subjects tried the BCI. This work enabled the communication and integration of a well-known BCI development platform, Openvibe, with the control software of a specific robot arm, the Staubli TX60, using the VRPN protocol. It can be concluded from this study that it is possible to control the robotic arm with an SSVEP-based BCI using a reduced number of dry electrodes, which facilitates the use of the system. Funding for open access charge: Universitat Politecnica de Valencia.
    Quiles Cucarella, E.; Dadone, J.; Chio, N.; García Moreno, E. (2022). Cross-Platform Implementation of an SSVEP-Based BCI for the Control of a 6-DOF Robotic Arm. Sensors, 22(13):1-26. https://doi.org/10.3390/s22135000
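    As a hedged illustration of the CSP + LDA processing chain mentioned above, the sketch below assembles an equivalent pipeline with the MNE and scikit-learn libraries on synthetic epochs. The epoch shape, number of CSP components and cross-validation setup are assumptions for illustration only, not the authors' exact configuration.

```python
import numpy as np
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic epochs standing in for real SSVEP recordings: 80 trials,
# 8 channels (Enobio-like montage), 500 samples; two stimulation classes.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 8, 500))
y = rng.integers(0, 2, size=80)

# CSP learns spatial filters and outputs log-variance features,
# which the LDA classifier then separates into the two classes.
clf = Pipeline([
    ("csp", CSP(n_components=4, log=True)),
    ("lda", LinearDiscriminantAnalysis()),
])
print(cross_val_score(clf, X, y, cv=5).mean())   # ~0.5 (chance) on random data
```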

    Low-cost methodologies and devices applied to measure, model and self-regulate emotions for Human-Computer Interaction

    This thesis explores the different methodologies for analyzing user experience (UX) from a user-centered perspective. These classical, well-founded methodologies only allow the extraction of cognitive data, that is, the data that users are able to communicate consciously. The objective of the thesis is to propose a model based on the extraction of biometric data to complement the aforementioned cognitive information with emotional (and formal) data. The thesis is not only theoretical: alongside the proposed model (and its evolution), it presents the different tests, validations and investigations in which the model has been applied, often successfully in conjunction with research groups from other areas.

    Study and experimentation of cognitive decline measurements in a virtual reality environment

    At a time when digital technology has become an integral part of our daily lives, we can ask how our well-being is evolving. Highly immersive virtual reality allows the development of environments that promote relaxation and can improve the cognitive abilities and quality of life of many people. The first aim of this study is to reduce negative emotions and improve the cognitive abilities of people suffering from subjective cognitive decline (SCD). To this end, we developed a virtual reality environment called Savannah VR, in which participants followed an avatar across a savannah. We recruited nineteen people with SCD to take part in the virtual savannah experience, and the Emotiv Epoc headset captured their emotions throughout. The results show that immersion in the virtual savannah reduced the participants' negative emotions and that the positive effects continued afterward. Participants also improved their cognitive performance. Confusion often occurs during learning when students do not understand new material; it is also very common in people with dementia because of the decline in their cognitive abilities. Detecting and overcoming confusion could thus improve the well-being and cognitive performance of people with cognitive impairment. The second objective of this thesis is therefore to develop a tool to detect confusion. We conducted two experiments and obtained a machine learning model based on brain signals that recognizes four levels of confusion (90% accuracy). In addition, we created another model that recognizes the cognitive function related to confusion (82% accuracy).
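    As a hedged sketch of how such a confusion classifier might be built, the example below extracts band-power features from synthetic EEG segments and trains a four-class classifier. The frequency bands, segment length and choice of classifier are illustrative assumptions, not the pipeline reported in the thesis.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic EEG segments: 400 windows, 14 channels, 2 s at an assumed 128 Hz;
# labels are four confusion levels (0-3). Everything here is illustrative.
rng = np.random.default_rng(0)
segments = rng.standard_normal((400, 14, 256))
levels = rng.integers(0, 4, size=400)

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, sfreq=128):
    """Mean spectral power per channel in a few classical EEG bands."""
    freqs, psd = welch(x, fs=sfreq, nperseg=128, axis=-1)
    feats = [psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=1)          # shape: (windows, channels * bands)

X = band_powers(segments)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, levels, cv=5).mean())   # ~0.25 (chance) on random data
```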

    Co-adaptive control strategies in assistive Brain-Machine Interfaces

    A large number of people with severe motor disabilities cannot access any of the available control inputs of current assistive products, which typically rely on residual motor functions. These patients are therefore unable to fully benefit from existing assistive technologies, including communication interfaces and assistive robotics. In this context, electroencephalography-based Brain-Machine Interfaces (BMIs) offer a potential non-invasive solution for exploiting a non-muscular channel for communication and control of assistive robotic devices, such as a wheelchair, a telepresence robot, or a neuroprosthesis. Still, non-invasive BMIs currently suffer from limitations, such as lack of precision, robustness and comfort, which prevent their practical use in assistive technologies. The goal of this PhD research is to produce scientific and technical developments that advance the state of the art of assistive interfaces and service robotics based on BMI paradigms. Two main research paths toward the design of effective control strategies were considered in this project. The first is the design of hybrid systems that combine the BMI with gaze control, a motor function that is preserved for a long time in many paralyzed patients; this approach increases the degrees of freedom available for control. The second is the inclusion of adaptive techniques in the BMI design, which makes it possible to transform robotic tools and devices into active assistants that co-evolve with the user and learn new rules of behavior to solve tasks, rather than passively executing external commands. Following these strategies, the contributions of this work can be categorized by the type of mental signal exploited for control: 1) the use of active signals for the development and implementation of hybrid eye-tracking and BMI control policies, for both communication and control of robotic systems; 2) the exploitation of passive mental processes to increase the adaptability of an autonomous controller to the user's intention and psychophysiological state, within a reinforcement learning framework; 3) the integration of active and passive brain control signals, to achieve adaptation within the BMI architecture at the level of feature extraction and classification.
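    To make contribution 2) concrete, the sketch below shows one way a passive mental signal could serve as the reward in a reinforcement learning loop: a hypothetical decoder maps a post-action EEG epoch to a scalar reward that drives a tabular Q-learning update. The environment, the state and action spaces, and the decode_passive_feedback stub are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Tiny tabular Q-learning loop in which the reward comes from a decoder of
# passive brain signals rather than from the environment. All components
# (state/action spaces, transition, decoder stub) are hypothetical.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def decode_passive_feedback(eeg_epoch: np.ndarray) -> float:
    """Stub for a trained classifier mapping a post-action EEG epoch to a
    reward: +1 if no error-related response is detected, -1 otherwise."""
    return 1.0 if rng.random() > 0.3 else -1.0

def step(state: int, action: int):
    """Hypothetical device transition; returns the next state and the EEG
    epoch recorded while the user observes the device's action."""
    return int(rng.integers(0, n_states)), rng.standard_normal(64)

state = 0
for _ in range(1000):
    greedy = int(Q[state].argmax())
    action = int(rng.integers(0, n_actions)) if rng.random() < epsilon else greedy
    next_state, eeg_epoch = step(state, action)
    reward = decode_passive_feedback(eeg_epoch)   # the user's implicit evaluation
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```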