93 research outputs found

    Exploring Machine Learning Approaches for Classifying Mental Workload using fNIRS Data from HCI Tasks

    Functional Near-Infrared Spectroscopy (fNIRS) has shown promise as potentially more suitable than, for example, EEG for brain-based Human-Computer Interaction (HCI). While some machine learning approaches have been used in prior HCI work, this paper explores different approaches and configurations for classifying Mental Workload (MWL) from a continuous HCI task, to identify and understand potential limitations and data-processing decisions. In particular, we investigate three overall approaches: logistic regression, a supervised shallow method (SVM), and a supervised deep learning method (CNN). We examine personalised and generalised models, and consider different features and ways of labelling the data. Our initial explorations show that generalised models can perform as well as personalised ones and that deep learning can be a suitable approach for medium-sized datasets. To provide practical advice for future brain-computer interaction systems, we conclude by discussing the limitations and data-preparation needs of different machine learning approaches. We also recommend the avenues of future work that are most promising for machine learning on fNIRS data.
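    As a concrete illustration of the generalised-versus-personalised comparison, the sketch below trains two of the named classifiers with leave-one-subject-out cross-validation, so each model is tested on a participant it never saw during training. The feature matrix, labels, and subject grouping are placeholders, not the paper's data or pipeline.

        # Hypothetical setup: X holds windowed fNIRS features, y holds
        # low/high workload labels, subjects identifies the participant.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 32))           # placeholder feature windows
        y = rng.integers(0, 2, size=600)         # low (0) vs high (1) MWL
        subjects = np.repeat(np.arange(10), 60)  # 10 participants

        # Generalised models: every fold holds out one whole participant.
        for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                          ("SVM", SVC(kernel="rbf"))]:
            scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
            print(name, scores.mean())

    A personalised variant would instead cross-validate within a single participant's data, which is the comparison the abstract reports.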

    Framework for Electroencephalography-based Evaluation of User Experience

    Measuring brain activity with electroencephalography (EEG) is mature enough to assess mental states. Combined with existing methods, such a tool can be used to strengthen the understanding of user experience. We contribute a set of methods to continuously estimate the user's mental workload, attention, and recognition of interaction errors during different interaction tasks. We validate these measures in a controlled virtual environment and show how they can be used to compare different interaction techniques or devices, here a keyboard and a touch-based interface. Thanks to such a framework, EEG becomes a promising method for improving the overall usability of complex computer systems. Published at ACM CHI '16, the SIGCHI Conference on Human Factors in Computing Systems, May 2016, San Jose, United States.
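    For flavour, one common way to turn raw EEG into a continuous workload estimate (a generic band-power approach, not necessarily this framework's exact method) is to track the ratio of frontal theta power to parietal alpha power per epoch:

        # Generic band-power workload proxy; sampling rate and epoch are assumed.
        import numpy as np
        from scipy.signal import welch

        fs = 256                               # Hz, assumed sampling rate
        epoch = np.random.randn(2 * fs)        # 2 s of one EEG channel (placeholder)

        def band_power(x, fs, lo, hi):
            f, pxx = welch(x, fs=fs, nperseg=fs)
            return pxx[(f >= lo) & (f < hi)].mean()

        theta = band_power(epoch, fs, 4, 8)    # frontal theta tends to rise with load
        alpha = band_power(epoch, fs, 8, 13)   # parietal alpha tends to fall with load
        print("workload index:", theta / alpha)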

    Machine Learning Methods for functional Near Infrared Spectroscopy

    Identification of user state is of interest in a wide range of disciplines that fall under the umbrella of human-machine interaction. Functional Near-Infrared Spectroscopy (fNIRS) is a relatively new device that enables inference of brain activity by non-invasively pulsing infra-red light into the brain. fNIRS is particularly useful because it has better spatial resolution than electroencephalography (EEG), the device most commonly used in Human-Computer Interaction studies under ecologically valid settings. But this key advantage of fNIRS is underutilised in the current fNIRS literature. We propose machine learning methods that capture the spatial nature of human brain activity using a novel preprocessing method based on 'Region of Interest' feature extraction. Experiments show that this method outperforms the F1 score previously achieved in classifying 'low' vs 'high' valence states of a user. We extend the analysis by applying a Convolutional Neural Network (CNN) to the fNIRS data, preserving the spatial structure of the data and treating it as a series of images to be classified. Going further, we use a combination of CNN and Long Short-Term Memory (LSTM) to capture the spatial and temporal behaviour of the fNIRS data, treating it like a video classification problem. We show that this method improves on the accuracy previously obtained by valence classification methods using EEG or fNIRS devices. Finally, we apply the above model to classifying combined task load and performance in an across-subject, across-task scenario of a Human Machine Teaming environment, in order to achieve optimal productivity of the system.
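    The CNN+LSTM idea, treating the channel grid at each time step as an image frame, can be sketched as follows; the grid size, window length, and layer sizes here are illustrative assumptions, not the thesis's configuration.

        # Per-frame CNN feeding an LSTM over time (video-style classification).
        import tensorflow as tf
        from tensorflow.keras import layers, models

        T, H, W = 20, 8, 8                    # time steps and channel-grid shape (assumed)
        model = models.Sequential([
            layers.Input(shape=(T, H, W, 1)),
            layers.TimeDistributed(layers.Conv2D(16, 3, padding="same", activation="relu")),
            layers.TimeDistributed(layers.MaxPooling2D()),
            layers.TimeDistributed(layers.Flatten()),
            layers.LSTM(32),                  # integrates the spatial summaries over time
            layers.Dense(1, activation="sigmoid"),  # 'low' vs 'high' valence
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.summary()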

    Moving from brain-computer interfaces to personal cognitive informatics

    Consumer neurotechnology is arriving en masse, even while algorithms for user state estimation are still being actively defined and developed. Indeed, many consumer wearables are now available that try to estimate cognitive changes from wrist data or body movement. But does this data help people? It is a critical time to ask how users could be informed by wearable neurotechnology in a way that is relevant to their needs and serves their personal well-being. The aim of this SIG is to bring together the key HCI communities needed to address this: personal informatics, digital health and wellbeing, neuroergonomics, and neuroethics.

    Exploring the use of brain-sensing technologies for natural interactions

    Recent technical innovation in the field of Brain-Computer Interfaces (BCIs) has increased the opportunity for including physical, brain-sensing devices as part of our day-to-day lives. The potential for obtaining a time-correlated, direct, brain-based measure of a participant's mental activity is an alluring and important development for HCI researchers. In this work, we investigate the application of BCI hardware to answering HCI-centred research questions, fusing the two disciplines into an approach we name Brain-based Human-Computer Interaction (BHCI). We investigate the possibility of using BHCI to provide natural interaction: an ideal form of HCI in which communication between human and machine is indistinguishable from everyday forms of interaction such as speaking and gesturing. We present the development, execution, and output of three user studies investigating the application of BHCI. We evaluate two technologies, fNIRS and EEG, and investigate their suitability for supporting BHCI-based interactions. Through our initial studies, we identify that the lightweight and portable attributes of EEG make it preferable for developing natural interactions. Building upon this, we develop an EEG-based cinematic experience exploring natural forms of interaction through the mind of the viewer. In studying viewers' responses to this experience, we develop a taxonomy of control based on how viewers discovered and exerted control over the experience.

    Adaptive Cognitive Interaction Systems

    Adaptive cognitive interaction systems observe and model the state of their user and adapt the system's behaviour accordingly. Such a system consists of three components: the empirical cognitive model, the computational cognitive model, and the adaptive interaction manager. This thesis contains numerous contributions to the development of these components as well as to their combination. The results are validated in numerous user studies.
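    Read as an architecture, the three components form a sense-model-adapt loop. The skeleton below is an invented illustration of that loop; none of the class or method names come from the thesis.

        # Invented interfaces illustrating the three-component architecture.
        class EmpiricalCognitiveModel:
            """Estimates the user's current state from measurements."""
            def estimate_state(self, sensor_data):
                return {"workload": sensor_data.get("workload_estimate", 0.5)}

        class ComputationalCognitiveModel:
            """Predicts user behaviour given an estimated state and a task."""
            def predict(self, state, task):
                return {"error_rate": 0.3 if state["workload"] > 0.8 else 0.1}

        class AdaptiveInteractionManager:
            """Chooses a system behaviour based on the two models."""
            def __init__(self, empirical, computational):
                self.empirical, self.computational = empirical, computational

            def step(self, sensor_data, task):
                state = self.empirical.estimate_state(sensor_data)
                forecast = self.computational.predict(state, task)
                # Adapt: simplify the interface when predicted errors are high.
                return "simplified_ui" if forecast["error_rate"] > 0.2 else "full_ui"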

    Multi-Label/Multi-Class Deep Learning Classification of Spatiotemporal Data

    Human senses allow for the detection of simultaneous changes in our environments. An unobstructed field of view allows us to notice concurrent variations in different parts of what we are looking at: for example, when playing a video game, a player often needs to be aware of what is happening in the entire scene. Likewise, our hearing makes us aware of various simultaneous sounds occurring around us. Human perception can be affected by the cognitive ability of the brain and the acuity of the senses. This is not a factor with machines: as long as a system is given a signal and instructed how to analyse it and extract useful information, it will complete this task repeatedly given enough processing power. Automated and simultaneous detection of activity in machine learning requires the use of multi-labels. In order to detect concurrent occurrences spatially, the labels should represent the regions of interest for a particular application. In this thesis, the regions of interest are either different quadrants of a parking lot as captured on surveillance videos, four auscultation sites on patients' lungs, or the two sides of the brain's motor cortex (left and right). Since the labels within the multi-labels represent not only spatial locations but also different levels or types of occurrences, a multi-class/multi-level schema is necessary. In the first study, each label is assigned one of three levels of activity within a specific quadrant. In the second study, each label is assigned one of four types of respiratory sounds. In the third study, each label is assigned one of three finger-tapping frequencies. This novel multi-labelling/multi-class schema is one part of detecting useful information in the data. The other part lies in the machine learning algorithm, the network model. To capture the spatiotemporal characteristics of the data, Convolutional Neural Network and Long Short-Term Memory Network-based algorithms are a fitting basis for the network. The following classifications are described in this thesis:
    1. In the first study, one of three motion densities is identified simultaneously in four quadrants of two sets of surveillance videos. Publicly available video recordings are the spatiotemporal data.
    2. In the second study, one of four types of breathing sounds is classified simultaneously at four auscultation sites. The spatiotemporal data are publicly available respiratory sound recordings.
    3. In the third study, one of three finger-tapping rates is detected simultaneously in two regions of interest, the right and left sides of the brain's motor cortex. The spatiotemporal data are fNIRS channel readings gathered during an index-finger-tapping experiment.
    Classification results are based on testing data that is not part of model training and validation. Success is measured by Hamming Loss and Subset Accuracy as well as Accuracy, F-Score, Sensitivity, and Specificity. In the last study, model explanation is performed using Shapley Additive Explanation (SHAP) values, plotted on an image-like background representing the fNIRS channel layout used as data input. Overall, promising findings support the use of this approach for classifying spatiotemporal data with the aim of detecting different levels or types of occurrences simultaneously in several regions of interest.
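    The two headline metrics behave quite differently on multi-label, multi-class outputs; the toy example below (invented data, two regions with three classes each) computes both directly with NumPy.

        # Hamming loss vs subset accuracy on toy multi-label/multi-class output.
        import numpy as np

        # 4 samples x 2 regions (e.g. left/right motor cortex), classes 0-2.
        y_true = np.array([[0, 2], [1, 1], [2, 0], [1, 2]])
        y_pred = np.array([[0, 2], [1, 0], [2, 0], [0, 2]])

        correct = (y_true == y_pred)
        hamming = 1.0 - correct.mean()            # wrong labels / all labels -> 0.25
        subset_acc = correct.all(axis=1).mean()   # rows entirely correct    -> 0.5
        print(hamming, subset_acc)

    Hamming loss credits partially correct samples, while subset accuracy demands that every region's class be right at once, which is why both are reported.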