
    Development of a practical and mobile brain-computer communication device for profoundly paralyzed individuals

    Thesis (Ph.D.)--Boston University

    Brain-computer interface (BCI) technology has seen tremendous growth over the past several decades, with numerous groundbreaking research studies demonstrating technical viability (Sellers et al., 2010; Silvoni et al., 2011). Despite this progress, BCIs have remained primarily in controlled laboratory settings. This dissertation proposes a blueprint for translating research-grade BCI systems into real-world applications that are noninvasive and fully portable, and that employ intelligent user interfaces for communication. The proposed architecture is designed to be used by severely motor-impaired individuals, such as those with locked-in syndrome, while reducing the effort and cognitive load needed to communicate. Such a system requires the merging of two primary research fields: 1) electroencephalography (EEG)-based BCIs and 2) intelligent user interface design.

    The EEG-based BCI portion of this dissertation provides a history of the field, details of our software and hardware implementation, and results from an experimental study aimed at verifying the utility of a BCI based on the steady-state visual evoked potential (SSVEP), a robust brain response to visual stimulation at controlled frequencies. The visual stimulation, feature extraction, and classification algorithms for the BCI were specially designed to achieve real-time performance on a laptop computer. The BCI was developed in Python, an open-source programming language that combines ease of programming with effective handling of hardware and software requirements. The result of this work was The Unlock Project app software for BCI development. Using it, a four-choice SSVEP BCI was implemented and tested with five severely motor-impaired and fourteen control participants. The system showed a wide range of usability across participants, with classification rates ranging from 25% to 95%.

    The second portion of the dissertation discusses the viability of intelligent user interface design as a method for obtaining a more user-focused vocal output communication aid tailored to motor-impaired individuals. A blueprint of this communication "app" was developed in this dissertation. It would make use of readily available laptop sensors to perform facial recognition, speech-to-text decoding, and geolocation. The ultimate goal is to couple sensor information with natural language processing to construct an intelligent user interface that shapes communication in a practical SSVEP-based BCI.
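    As an illustration of the frequency-tagged SSVEP approach described above, the following Python sketch scores a short EEG window against each of four stimulation frequencies and picks the strongest response. It is a minimal, generic example, not the Unlock Project implementation; the sampling rate, target frequencies, and single-channel input are assumptions.

        # Minimal SSVEP classification sketch: compare narrow-band spectral power at each
        # stimulation frequency and select the largest. Not the Unlock Project code;
        # sampling rate and flicker frequencies below are illustrative assumptions.
        import numpy as np

        FS = 256.0                             # sampling rate in Hz (assumed)
        STIM_FREQS = [12.0, 13.0, 14.0, 15.0]  # four flicker frequencies (assumed)

        def band_power(signal, fs, freq, bandwidth=0.5):
            """Spectral power of `signal` in a narrow band around `freq`."""
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            mask = (freqs >= freq - bandwidth) & (freqs <= freq + bandwidth)
            return spectrum[mask].sum()

        def classify_ssvep(eeg_window):
            """Return the index (0-3) of the most likely attended stimulus.

            `eeg_window` is a 1-D array of samples from an occipital channel such as Oz.
            """
            powers = [band_power(eeg_window, FS, f) for f in STIM_FREQS]
            return int(np.argmax(powers))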

    More playful user interfaces: interfaces that invite social and physical interaction


    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an electromyography (EMG)-based device to recognize five eye gestures and classify them for hands-free interaction with different applications. The eye gestures proposed in this work are Long Blinks, Rapid Blinks, Wink Right, Wink Left, and Squints (frowns). The MUSE headband, originally a brain-computer interface (BCI) device that measures electroencephalography (EEG) signals, is used in our study to record EMG signals from behind the earlobes via two smart rubber sensors and from the forehead via two additional electrodes. The signals are treated as EMG because they arise from physical muscular activity, which other studies regard as artifacts in EEG brain signals. The experiment was conducted with 15 participants (12 males and 3 females) recruited without targeting any specific group, and each session was video-recorded for re-evaluation. The experiment starts with a calibration phase that records each gesture three times per participant, guided by a voice-narration program developed to unify test conditions and time intervals across all subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, and to provide more flexibility in classifying gestures regardless of how their duration varies from one user to another. Additionally, a thresholding algorithm is used to extract features from all gestures. Rapid Blinks and Squints achieved high F1 scores of 80.77% and 85.71% with the trained thresholds, and 87.18% and 82.12% with the default, manually adjusted thresholds. The accuracies of Long Blinks, Rapid Blinks, and Wink Left were relatively higher with the manually adjusted thresholds, while Squints and Wink Right performed better with the trained thresholds. Further improvements were proposed and some were tested, especially after reviewing the participants' actions in the video recordings to enhance the classifier. Most of the common irregularities encountered are discussed in this study in order to pave the road for similar future studies to address them before conducting experiments. Several applications need minimal physical or manual interaction; this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal, and depth cameras integrated on or embedded in an augmented reality device designed to increase firefighters' visual capabilities in the field.
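    The sliding-window and thresholding idea described above can be sketched as follows in Python. The window length, hop size, and use of peak-to-peak amplitude are illustrative assumptions rather than the study's actual parameters; grouping consecutive active windows into specific gestures is left schematic.

        # Sliding-window thresholding over an EMG stream (illustrative sketch only).
        import numpy as np

        WINDOW = 64   # samples per analysis window (assumed)
        STEP = 16     # hop between consecutive windows (assumed)

        def sliding_windows(samples, window=WINDOW, step=STEP):
            """Yield successive, overlapping segments of the signal."""
            for start in range(0, len(samples) - window + 1, step):
                yield samples[start:start + window]

        def detect_active_windows(samples, threshold):
            """Flag windows whose peak-to-peak amplitude exceeds `threshold`.

            Runs of consecutive active windows can then be grouped into gestures,
            e.g. distinguishing a long blink from rapid blinks by duration and count.
            """
            samples = np.asarray(samples, dtype=float)
            return [float(np.ptp(w)) > threshold for w in sliding_windows(samples)]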

    Adaptive Cognitive Interaction Systems

    Adaptive cognitive interaction systems observe and model the state of their user and adapt the system's behavior accordingly. Such a system consists of three components: the empirical cognitive model, the computational cognitive model, and the adaptive interaction manager. This thesis contains numerous contributions to the development of these components as well as to their combination. The results are validated in numerous user studies.
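    A minimal sketch of the three-component loop described above (empirical cognitive model, computational cognitive model, adaptive interaction manager) might look as follows in Python; all class names, state variables, and the adaptation rule are illustrative assumptions rather than the thesis' actual design.

        # Illustrative closed loop: estimate the user's state, predict its evolution,
        # and adapt the interaction accordingly. Names and heuristics are assumed.
        class EmpiricalCognitiveModel:
            def estimate_state(self, sensor_features):
                # e.g. map EEG/physiological features to a workload estimate in [0, 1]
                return {"workload": min(1.0, max(0.0, sensor_features.get("eeg_beta", 0.0)))}

        class ComputationalCognitiveModel:
            def predict(self, state):
                # e.g. simulate how workload develops under the ongoing task
                return state

        class AdaptiveInteractionManager:
            def adapt(self, predicted_state):
                # e.g. slow the dialogue pace when predicted workload is high
                return "slow_pace" if predicted_state["workload"] > 0.5 else "normal_pace"

        def interaction_step(sensor_features, empirical, computational, manager):
            return manager.adapt(computational.predict(empirical.estimate_state(sensor_features)))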

    Design of Industrial Workplaces to relieve Workers when Interacting with Joint-Arm-Robots

    A comprehensive understanding of users' needs is required to design adequate workplace systems in general, but especially in highly digitised industrial settings where operators interact with autonomously operating machines. Little is known about design criteria that would enable professionals to develop adequate system designs for Human-Machine Interaction, e.g. Human-Robot Collaboration, with regard to the effects of design decisions on all three levels of Human Factors, i.e. physiological, cognitive, and organisational limitations. Moreover, little is known about objective measurement procedures for evaluating whether the operator subjectively perceives the workplace system design as assistance and improvement. The research presented here belongs to the scientific discipline of Human Factors Engineering and focuses on the evaluation of Human Factors issues within the digitised industry. Based on broad theoretical and empirical investigations, the results of this research extend our knowledge of adequate Human-Centred Design by providing reliable, powerful design criteria for workplaces where operators interact with machines and collaborate with robots, as well as an overall technique, the Objective Workload Detection Method, for evaluating the effectiveness of design interventions aimed at relieving cognitive stress. By applying this method in a controlled experiment, the derived design criteria were validated. The study shows how cognitive workload can be relieved by an assisting environment. This work also gives a best-practice design example of a self-adapting workplace system for hybrid Human-Robot Teams. Following the Human-Centred Design method, the concept of an Assisting Industrial Workplace System for Human-Robot Collaboration was successfully developed as a flexible hybrid unit design. The prototype relates to a real-world scenario from the aerospace industry, and the demonstrator was implemented in a laboratory set-up. This work draws on techniques from interdisciplinary fields of science, e.g. Engineering, Neuroscience, Gestalt theory, and Design. Equally, the design criteria and the evaluation method will support professionals from varied disciplines in creating future system designs by giving a clear indication of future Human-Centred Design research.

    Attention, concentration, and distraction measure using EEG and eye tracking in virtual reality

    Attention is important in learning, attention-deficit/hyperactivity disorder, driving, and many other fields. Hence, intelligent tutoring systems, attention-deficit/hyperactivity disorder diagnosis systems, and driver distraction detection systems should be able to monitor the attention levels of individuals in real time in order to estimate their attentional state. We study the feasibility of detecting distraction and concentration by monitoring participants' attention levels while they complete cognitive tasks, using electroencephalography (EEG) and eye tracking in a virtual reality environment. Furthermore, we investigate the possibility of improving participants' concentration using relaxation in virtual reality. We developed an indicator that estimates attention levels as a real value from EEG data. The participant-independent indicator based on EEG data that we used to assess participants' concentration levels correctly predicts the concentration state with an F1 score of 73%. Furthermore, the participant-independent distraction model based on eye-tracking data correctly predicted participants' distraction state with an F1 score of 89% in a participant-independent validation setting.
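    One common way to obtain a real-valued attention indicator from EEG, as mentioned above, is a band-power ratio such as beta / (alpha + theta). The Python sketch below illustrates that generic heuristic; it is not necessarily the indicator developed in this thesis, and the band limits and sampling rate are assumptions.

        # Generic EEG attention index: beta power relative to alpha + theta power.
        import numpy as np

        FS = 256.0  # sampling rate in Hz (assumed)
        BANDS = {"theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

        def band_power(signal, fs, lo, hi):
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            return spectrum[(freqs >= lo) & (freqs < hi)].sum()

        def attention_index(eeg_window):
            """Return a real value; higher values suggest stronger attention/engagement."""
            p = {name: band_power(eeg_window, FS, lo, hi) for name, (lo, hi) in BANDS.items()}
            return p["beta"] / (p["alpha"] + p["theta"] + 1e-12)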

    Improving human-computer interaction and social presence with physiological computing

    This thesis explores how physiological computing can contribute to human-computer interaction (HCI) and foster new communication channels among the general public. We investigated how physiological sensors, such as electroencephalography (EEG), could be employed to assess the mental state of users and how they relate to other evaluation methods. We created the first brain-computer interface that could sense visual comfort during the viewing of stereoscopic images, and shaped a framework that could help to assess the overall user experience by monitoring workload, attention, and error recognition. To lower the barrier between end users and physiological sensors, we participated in the software integration of a low-cost, open-hardware EEG device; used off-the-shelf webcams to measure heart rate remotely; and crafted wearables that users can quickly put on so that electrocardiography, electrodermal activity, or EEG may be measured during public exhibitions. We envisioned new usages for our sensors that would increase social presence. In a study about human-agent interaction, participants tended to prefer virtual avatars that mirrored their own internal state. A follow-up study focused on interactions between users to describe how physiological monitoring could alter our relationships. Advances in HCI enabled us to seamlessly integrate biofeedback into the physical world. We developed Teegi, a puppet that lets novices discover their own brain activity by themselves. Finally, with Tobe, a toolkit that encompasses more sensors and gives more freedom in how they are visualized, we explored how such a proxy shifts our representations of ourselves as well as of others.
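    The remote heart-rate measurement mentioned above (webcam-based photoplethysmography) can be sketched roughly as follows: average the green channel over a face region in each frame, then take the dominant frequency in a plausible heart-rate band. This is a generic illustration, not the authors' implementation; face detection is assumed to be done elsewhere, and the frame rate and band limits are assumptions.

        # Rough rPPG sketch: green-channel trace over a face region -> dominant frequency.
        import numpy as np

        FPS = 30.0  # webcam frame rate (assumed)

        def green_trace(frames, face_box):
            """Mean green intensity of the face region for each RGB frame (H x W x 3)."""
            x, y, w, h = face_box
            return np.array([f[y:y + h, x:x + w, 1].mean() for f in frames])

        def estimate_heart_rate_bpm(trace, fps=FPS, lo_bpm=45.0, hi_bpm=180.0):
            trace = np.asarray(trace, dtype=float) - np.mean(trace)
            spectrum = np.abs(np.fft.rfft(trace)) ** 2
            freqs_bpm = np.fft.rfftfreq(len(trace), d=1.0 / fps) * 60.0
            mask = (freqs_bpm >= lo_bpm) & (freqs_bpm <= hi_bpm)
            return float(freqs_bpm[mask][np.argmax(spectrum[mask])])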

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.
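    As a concrete, hedged example of the subject-independent evaluation concepts listed above, the following Python sketch trains a classifier on physiological features and evaluates it leave-one-subject-out with scikit-learn. The feature matrix, labels, and subject grouping are random placeholders standing in for a real dataset.

        # Subject-independent emotion/stress classification sketch (placeholder data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 8))             # e.g. ECG/EDA/EEG features per trial (placeholder)
        y = rng.integers(0, 2, size=120)          # e.g. low vs. high stress labels (placeholder)
        subjects = np.repeat(np.arange(10), 12)   # 10 subjects x 12 trials each (placeholder)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
        print("held-out-subject accuracy per fold:", scores.round(2))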

    Motion-based Interaction for Head-Mounted Displays

    Recent advances in affordable sensing technologies have enabled motion-based interaction (MbI) for head-mounted displays (HMDs). Unlike traditional input devices such as the mouse and keyboard, which often offer comparatively limited interaction possibilities (e.g., single-touch interaction), MbI does not have these constraints and is more natural because it reflects more closely how people do things in real life. However, several issues exist in MbI for HMDs due to the technical limitations of sensing and tracking devices, the higher degrees of freedom afforded to users, and limited research in the area owing to the rapid advancement of HMDs and tracking technologies. This thesis first outlines four core challenges in the design space of MbI for HMDs: (1) boundary awareness for hand-based interaction, (2) efficient hands-free head-based interfaces for HMDs, (3) efficient and feasible full-body interaction for general tasks with HMDs, and (4) accessible full-body interaction for applications in HMDs. Then, this thesis presents investigations addressing these challenges in MbI for HMDs. The first challenge is addressed by providing visual feedback during interaction tailored to such technologies. The second challenge is addressed by using a circular layout with a go-and-hit selection style for head-based interaction, with text entry as the scenario. In addition, this thesis explores further interaction mechanisms that leverage the affordances of these techniques, and in doing so we propose directional full-body motions as an interaction approach for performing general tasks with HMDs as an example that addresses the third challenge. The last challenge is addressed by (1) exploring the differences between performing full-body interaction for HMDs and for common displays (i.e., a TV) and (2) providing a set of design guidelines that are specific to current and future HMDs. The results of this thesis show that: (1) visual methods for boundary awareness can help with mid-air hand-based interaction in HMDs; (2) head-based interaction and interfaces that take advantage of MbI, such as a circular interface, can be a very efficient, low-error hands-free input method for HMDs; (3) directional full-body interaction can be a feasible and efficient interaction approach for general tasks involving HMDs; and (4) full-body interaction for applications in HMDs should be designed differently than for traditional displays. In addition to these results, this thesis provides a set of design recommendations and takeaway messages for MbI for HMDs.
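    The circular, go-and-hit head-based selection mentioned above can be illustrated geometrically: head deflection beyond an activation radius selects whichever of N items lies in the corresponding direction. The item count and angular threshold in the sketch below are assumptions, not the thesis' actual parameters.

        # Geometric sketch of circular go-and-hit selection from head yaw/pitch (degrees).
        import math

        N_ITEMS = 8                # items arranged evenly on a circle (assumed)
        ACTIVATION_RADIUS = 10.0   # head deflection, in degrees, needed to hit an item (assumed)

        def select_item(yaw_deg, pitch_deg):
            """Return the index of the hit item, or None while the head is near centre."""
            if math.hypot(yaw_deg, pitch_deg) < ACTIVATION_RADIUS:
                return None
            angle = math.degrees(math.atan2(pitch_deg, yaw_deg)) % 360.0
            return int(angle // (360.0 / N_ITEMS))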