33 research outputs found

    Machine learning methods for the study of cybersickness: a systematic review

    This systematic review offers a world-first critical analysis of machine learning methods and systems, along with future directions, for the study of cybersickness induced by virtual reality (VR). VR is becoming increasingly popular and is an important part of current advances in human training, therapies, entertainment, and access to the metaverse. Usage of this technology is limited by cybersickness, a common debilitating condition experienced upon VR immersion. Cybersickness is accompanied by a mix of symptoms including nausea, dizziness, fatigue and oculomotor disturbances. Machine learning can be used to identify cybersickness and is a step towards overcoming these physiological limitations. Practical implementation is possible with optimised data collection from wearable devices and appropriate algorithms that incorporate advanced machine learning approaches. The present systematic review focuses on 26 selected studies concerning machine learning of biometric and neuro-physiological signals obtained from wearable devices for the automatic identification of cybersickness. The methods, data processing and machine learning architectures are examined, and suggestions for future work on the detection and prediction of cybersickness are offered. A wide range of immersion environments, participant activities, features and machine learning architectures were identified. Although models for cybersickness detection have been developed, the literature still lacks a model for the prediction of first-instance events. Future research is directed towards goal-oriented data selection and labelling, as well as the use of brain-inspired spiking neural network models, to achieve better accuracy and understanding of the complex spatio-temporal brain processes related to cybersickness.
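
    As context for the kind of pipeline the reviewed studies describe, the sketch below classifies fixed-length windows of wearable physiological features with a generic scikit-learn model; the feature names, windowing and synthetic data are illustrative assumptions, not drawn from any of the 26 reviewed studies.

```python
# Minimal sketch (not from the review): classifying windows of wearable
# physiological features into "cybersick" vs "comfortable".
# All feature choices and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_windows = 500
# Columns: mean heart rate, heart-rate variability, electrodermal activity,
# head-motion energy -- one row per assumed 10-second window.
X = rng.normal(size=(n_windows, 4))
y = rng.integers(0, 2, size=n_windows)          # 1 = cybersickness reported

clf = GradientBoostingClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```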

    Varieties of Attractiveness and their Brain Responses

    Science of Facial Attractiveness

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters that present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant to tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.
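
    As a toy illustration of two of the themes above (feature-level sensor fusion and subject-independent evaluation), the sketch below concatenates features from assumed EEG, ECG and electrodermal-activity channels and scores a classifier with leave-one-subject-out cross-validation on synthetic data; nothing here is taken from the book's chapters.

```python
# Illustrative sketch only: early (feature-level) fusion of biosignal
# modalities and subject-independent evaluation. Synthetic data and
# feature counts are assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples, n_subjects = 300, 10
eeg = rng.normal(size=(n_samples, 8))    # e.g. band-power features
ecg = rng.normal(size=(n_samples, 4))    # e.g. heart-rate-variability features
eda = rng.normal(size=(n_samples, 3))    # e.g. skin-conductance features
X = np.hstack([eeg, ecg, eda])           # feature-level fusion by concatenation
y = rng.integers(0, 3, size=n_samples)   # e.g. neutral / stressed / amused
subjects = rng.integers(0, n_subjects, size=n_samples)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print(f"Leave-one-subject-out accuracy: {scores.mean():.2f}")
```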

    ChallengeDetect: Investigating the Potential of Detecting In-Game Challenge Experience from Physiological Measures

    Challenge is the core element of digital games. The wide spectrum of physical, cognitive, and emotional challenge experiences provided by modern digital games can be evaluated subjectively using a questionnaire, the CORGIS, which allows for a post hoc evaluation of the overall experience that occurred during game play. Measuring this experience dynamically and objectively, however, would allow for a more holistic view of the moment-to-moment experiences of players. This study, therefore, explored the potential of detecting perceived challenge from physiological signals. For this, we collected physiological responses from 32 players who engaged in three typical game scenarios. Using perceived challenge ratings from players and extracted physiological features, we applied multiple machine learning methods and metrics to detect challenge experiences. Results show that most methods achieved a detection accuracy of around 80%. We discuss in-game challenge perception, challenge-related physiological indicators and AI-supported challenge detection to inform future work on challenge evaluation.
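
    A hedged sketch of the general approach described, not the authors' pipeline: per-segment physiological features, challenge ratings binarised by a median split, and a standard classifier scored on a held-out set. The features, labelling rule and data below are assumptions for illustration.

```python
# Hypothetical challenge-detection sketch: high vs low perceived challenge
# from per-window physiological features. All values are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
features = rng.normal(size=(n, 6))       # e.g. EDA peaks, HR, HRV, EMG, respiration, temperature
ratings = rng.uniform(0, 7, size=n)      # per-segment challenge ratings
labels = (ratings > np.median(ratings)).astype(int)   # median split: high vs low challenge

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", round(accuracy_score(y_test, pred), 2),
      "F1:", round(f1_score(y_test, pred), 2))
```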

    A framework to measure human behaviour whilst reading

    The brain is the most complex object in the known universe; it gives humans a sense of being and characterises human behaviour. Building models of brain functions is perhaps the most fascinating scientific challenge of the 21st century. Reading is a significant cognitive process in the human brain that plays a critical role in learning and in performing many daily activities. The study of human behaviour during reading has long been an area of interest for researchers in different fields of science. This thesis provides a novel framework, called ARSAT (Assisting Researchers in the Selection of Appropriate Technologies), for measuring the behaviour of humans while reading text. The ARSAT framework aims to assist researchers in the selection and application of appropriate technologies to measure the behaviour of a person who is reading text, particularly researchers who investigate the reading process and find it difficult to select appropriate theories, metrics, data collection methods and data analytics techniques. It enhances the ability of its users to select appropriate metrics indicating the factors that characterise different aspects of human behaviour during the reading process; as will be shown in this research study, human behaviour is characterised by a complicated interplay of action, cognition and emotion. The ARSAT framework also facilitates selecting appropriate sensory technologies that can be used to monitor and collect data for those metrics. Moreover, this research study introduces BehaveNet, a novel Deep Learning modelling approach for training Deep Learning models of human behaviour from the collected sensory data. In this thesis, a comprehensive literature study is presented that was conducted to acquire adequate knowledge for designing the ARSAT framework. In order to identify the contributing factors that affect the reading process, an overview of some existing theories of reading is provided. Furthermore, a number of sensory technologies and techniques that can be applied to monitor changes in the metrics indicating those factors are demonstrated; only technologies that are commercially available are recommended by the ARSAT framework. A variety of Machine Learning techniques were also investigated when designing BehaveNet, which takes advantage of the complementarity of Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks. The design of a Human Behaviour Monitoring System (HBMS), built by utilising the ARSAT framework to recognise three attention-related activities, is also presented in this research study. Reading printed text, speaking out loud and watching a television programme were chosen as the activities, reflecting how a person may unintentionally shift their attention away from reading towards distractions. Among the sensory devices recommended by the ARSAT framework, the Muse headband, a wearable Electroencephalography (EEG) and head-motion-sensing device, was selected to track forehead EEG and head movements. EEG and 3-axis accelerometer data were recorded from eight participants while they read printed text and while they performed the two other activities.
An imbalanced dataset consisting of over 1.2 million rows of noisy data was created and used to build a model of the activities (60% training and 20% validation data) and to evaluate the model (the remaining 20% of the data). The efficiency of the framework is demonstrated by comparing the performance of models built with BehaveNet against a number of competing Deep Learning models for raw EEG and accelerometer data that have attained state-of-the-art performance. The classification results are evaluated using several metrics, including classification accuracy, F1 score, confusion matrix, Receiver Operating Characteristic curve and Area Under the Curve (AUC) score. The results show that BehaveNet contributes to the body of knowledge as an approach for measuring human behaviour with sensory devices. Compared with the other models, the models built with BehaveNet attained better performance when classifying data from two EEG channels (Accuracy = 95%; AUC = 0.99; F1 = 0.95), data from a single EEG channel (Accuracy = 85%; AUC = 0.96; F1 = 0.83), accelerometer data (Accuracy = 81%; AUC = 0.9; F1 = 0.76) and all of the data in the dataset (Accuracy = 97%; AUC = 0.99; F1 = 0.96). The dataset and the source code of this project are published online for the research community. The Muse headband is also shown to be an economical, standard wearable device that can be successfully used in behavioural research.
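
    The thesis describes BehaveNet only as a combination of Convolutional Neural Networks, Long Short-Term Memory networks and Deep Neural Networks applied to raw EEG and accelerometer data. The sketch below is a minimal illustration of that kind of hybrid, assuming a Keras implementation; layer sizes, window length and the synthetic data are assumptions for demonstration, not the thesis's hyperparameters.

```python
# Hypothetical CNN + LSTM + dense stack over fixed-length windows of raw
# signals (2 EEG channels + 3 accelerometer axes = 5 channels), classifying
# three activities (reading, speaking, watching TV). Not the actual BehaveNet.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

window_len, n_channels, n_classes = 256, 5, 3

model = keras.Sequential([
    layers.Input(shape=(window_len, n_channels)),
    layers.Conv1D(32, kernel_size=7, activation="relu"),   # local temporal patterns
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                        # longer-range dynamics
    layers.Dense(64, activation="relu"),                    # dense classification head
    layers.Dropout(0.3),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in for windowed EEG + accelerometer data; the 60/20/20
# split mirrors the thesis's train/validation/test proportions.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, window_len, n_channels)).astype("float32")
y = rng.integers(0, n_classes, size=1000)
model.fit(X[:600], y[:600], validation_data=(X[600:800], y[600:800]),
          epochs=2, batch_size=64, verbose=0)
print("held-out accuracy:", model.evaluate(X[800:], y[800:], verbose=0)[1])
```

    In a stack of this shape, the convolutions capture short-range waveform structure, the LSTM aggregates longer-range temporal context, and the dense head maps the resulting representation to the activity classes.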

    Wearable Biosensors to Understand Construction Workers' Mental and Physical Stress

    Occupational stress is defined as harmful physical and mental responses that occur when job requirements exceed a worker's capacity. Construction is one of the most stressful occupations because it involves physiologically and psychologically demanding tasks performed in a hazardous environment; this stress can jeopardize construction safety, health, and productivity. Various instruments, such as surveys and interviews, have been used for measuring workers' perceived mental and physical stress. However valuable, such instruments are limited by their invasiveness, which prevents them from being used for continuous stress monitoring. The recent advancement of wearable biosensors has opened a new door toward the non-invasive collection of a field worker's physiological signals that can be used to assess their mental and physical status. Despite these advancements, challenges remain: physiological signals acquired from wearable biosensors can easily be contaminated by diverse sources of signal noise, and the potential of these devices to assess field workers' mental and physical status has not been examined in the naturalistic work environment. To address these issues, this research aims to propose and validate a comprehensive and efficient stress-measurement framework that recognizes workers' mental and physical stress in a naturalistic environment. The focus of this research is on two wearable biosensors. The first is a wearable EEG headset, which provides a direct measurement of brain waves with minimal time lag but is highly vulnerable to various artifacts. The second is a convenient wristband-type biosensor, which can be used to assess both mental and physical stress, but with a time lag between when subjects are exposed to stressors and when their physiological signals change. To achieve this goal, five interrelated and interdisciplinary studies were performed to: 1) acquire high-quality EEG signals from the job site; 2) assess construction workers' emotions by measuring valence and arousal levels from the patterns of their brainwaves; 3) recognize mental stress in the field from brain activity by applying supervised-learning algorithms; 4) recognize real-time mental stress by applying Online Multi-Task Learning (OMTL) algorithms; and 5) assess workers' mental and physical stress using signals collected from a wristband biosensor. To examine the performance of the proposed framework, we collected physiological signals from 21 workers at five job sites. Results yielded a mental stress-recognition accuracy of up to 80.13% using an EEG headset and a physical stress-recognition accuracy of up to 90.00% using a wristband sensor. These results are promising given that stress recognition with wired physiological devices within a controlled lab setting in the clinical domain has, at best, a similar level of accuracy. The proposed wearable biosensor-based stress-recognition framework is expected to help us better understand workplace stressors and improve worker safety, health, and productivity through early detection and mitigation of stress at human-centered, smart and connected construction sites.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    https://deepblue.lib.umich.edu/bitstream/2027.42/149965/1/hjebelli_1.pd
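
    As a loose illustration of the wristband-based part of such a framework (not the dissertation's implementation), the sketch below computes simple per-window statistics from an assumed electrodermal-activity signal and trains a supervised classifier to flag stressed windows; the sampling rate, window length, features and data are all assumptions.

```python
# Hypothetical wristband stress-recognition sketch: per-window statistics
# from a synthetic electrodermal-activity signal, then supervised
# classification of "stressed" vs "not stressed" windows.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
fs, window_s, n_windows = 4, 30, 300          # assumed 4 Hz signal, 30 s windows
signal = rng.normal(size=(n_windows, fs * window_s))
labels = rng.integers(0, 2, size=n_windows)   # 1 = self-reported stress

# Per-window features: mean level, variability, and a crude "peak count".
feats = np.column_stack([
    signal.mean(axis=1),
    signal.std(axis=1),
    (np.diff(signal, axis=1) > 1.0).sum(axis=1),
])

clf = LogisticRegression(max_iter=1000)
print("5-fold accuracy:", cross_val_score(clf, feats, labels, cv=5).mean().round(2))
```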