32 research outputs found

    A LightGBM-Based EEG Analysis Method for Driver Mental States Classification

    Get PDF
    Fatigued driving can easily lead to road traffic accidents and bring great harm to individuals and families. Recently, electroencephalography (EEG)-based physiological and brain-activity measures for fatigue detection have been increasingly investigated. However, finding an effective method or model to detect drivers' mental states in a timely and efficient way remains a challenge. In this paper, we combine common spatial pattern (CSP) features with a proposed lightweight classifier, LightFD, based on a gradient-boosting framework, for EEG mental-state identification. Comparisons with traditional classifiers, such as support vector machine (SVM), convolutional neural network (CNN), gated recurrent unit (GRU), and large margin nearest neighbor (LMNN), show that the proposed model achieves better classification performance as well as better decision efficiency. Furthermore, we test and validate that LightFD has better transfer-learning performance in EEG classification of driver mental states. In summary, our proposed LightFD classifier performs better in real-time EEG mental-state prediction and is expected to have broad application prospects in practical brain-computer interaction (BCI).
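
    Since neither LightFD nor the paper's data are provided here, the following is a minimal sketch of the general CSP-plus-gradient-boosting pipeline the abstract describes, using MNE's CSP implementation and an off-the-shelf LGBMClassifier as a stand-in for LightFD, with random arrays in place of real alert/fatigued EEG epochs.

```python
# Minimal sketch of the CSP + gradient-boosting pipeline described above.
# LightFD itself is not public here; LGBMClassifier stands in for it, and
# random arrays stand in for real alert/fatigued EEG epochs.
import numpy as np
from mne.decoding import CSP                 # common spatial pattern filters
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 200, 32, 256
X = rng.standard_normal((n_epochs, n_channels, n_times))  # EEG epochs
y = rng.integers(0, 2, n_epochs)                          # 0 = alert, 1 = fatigued

# CSP projects each epoch onto spatial filters and returns log-variance features.
csp = CSP(n_components=6, log=True)
X_feat = csp.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.3, random_state=0)
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```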

    Learning multimodal representations for drowsiness detection

    Get PDF

    Sensors and Systems for Monitoring Mental Fatigue: A systematic review

    Full text link
    Mental fatigue is a leading cause of motor vehicle accidents, medical errors, loss of workplace productivity, and student disengagement in e-learning environments. Development of sensors and systems that can reliably track mental fatigue can prevent accidents, reduce errors, and help increase workplace productivity. This review provides a critical summary of theoretical models of mental fatigue, a description of key enabling sensor technologies, and a systematic review of recent studies using biosensor-based systems for tracking mental fatigue in humans. We conducted a systematic search and review of recent literature focused on the detection and tracking of mental fatigue in humans. The search yielded 57 studies (N=1082), the majority of which used electroencephalography (EEG)-based sensors for tracking mental fatigue. We found that EEG-based sensors can provide moderate to good sensitivity for fatigue detection. Notably, we found no incremental benefit of using high-density EEG sensors for mental fatigue detection. Given these findings, we provide a critical discussion on the integration of wearable EEG and ambient sensors in the context of achieving real-world monitoring. Future work required to advance and adapt these technologies toward widespread deployment of wearable sensors and systems for fatigue monitoring in semi-autonomous and autonomous industries is also examined.

    InstanceEasyTL: an improved transfer-learning method for EEG-based cross-subject fatigue detection

    Get PDF
    Electroencephalogram (EEG) signals are an effective indicator for the detection of driver fatigue. Due to significant differences in EEG signals across subjects, and the difficulty of collecting sufficient EEG samples for analysis during driving, detecting fatigue across subjects using EEG signals remains a challenge. EasyTL is a transfer-learning model that has demonstrated good performance in the field of image recognition but has not yet been applied in cross-subject EEG-based applications. In this paper, we propose an improved EasyTL-based classifier, InstanceEasyTL, to perform EEG-based analysis for cross-subject fatigue mental-state detection. Experimental results show that InstanceEasyTL not only requires less EEG data, but also obtains better accuracy and robustness than EasyTL, as well as existing machine-learning models such as Support Vector Machine (SVM), Transfer Component Analysis (TCA), Geodesic Flow Kernel (GFK), and Domain-Adversarial Neural Networks (DANN).
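
    InstanceEasyTL's own code is not given in this abstract; purely as an illustration of the EasyTL-style idea (align source-subject features to the target subject's distribution, then classify with a simple distance rule), the sketch below combines CORAL covariance alignment with nearest-class-prototype assignment. Every name and threshold here is hypothetical, not the authors' method.

```python
# Illustrative sketch only: EasyTL couples intra-domain alignment with an
# intra-domain programming step; here CORAL whitening/recoloring plus a
# nearest-class-prototype rule stands in for that pipeline.
import numpy as np
from scipy import linalg

def coral_align(Xs, Xt, eps=1e-3):
    """Recolor source features Xs so their covariance matches target Xt."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    # whiten the source, then recolor it with the target covariance
    return Xs @ linalg.inv(linalg.sqrtm(Cs)) @ linalg.sqrtm(Ct)

def prototype_predict(Xs, ys, Xt):
    """Label each target sample by its nearest source-class mean."""
    classes = np.unique(ys)
    protos = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    d = ((Xt[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
Xs = rng.standard_normal((100, 8)) + 0.5      # source-subject features
ys = rng.integers(0, 2, 100)                  # 0 = alert, 1 = fatigued
Xt = 1.5 * rng.standard_normal((50, 8))       # distribution-shifted target subject

Xs_aligned = np.real(coral_align(Xs, Xt))     # sqrtm may return tiny imaginary parts
pred = prototype_predict(Xs_aligned, ys, Xt)
print(pred[:10])
```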

    Implementation of the LightGBM Method for Classifying Abnormal Conditions of Motorcycle Riders Based on Smartphone Sensors

    Get PDF
    Traffic accidents are one of the most significant contributors to the rising number of deaths around the world. Given Indonesia's demographics, motorcyclists dominate road traffic and therefore face a higher probability of being caught in a traffic accident. Existing Vehicle Activity Detection Systems (VADS) mainly focus on car drivers, and their main problem is that the system's computational time is too high for real-time use. To solve this problem, in this research a classification system for abnormal driving behavior of motorcycle riders is created using the Light Gradient Boosting Machine (LightGBM) model. The system is designed to be lightweight in computation and very fast in responding to changes in high-velocity activities. To train the LightGBM model, data from the accelerometer and gyroscope sensors integrated into a smartphone are used to detect the rider's movements. The proposed model reaches an accuracy of 82% on the test dataset and shows a promising result of around 70% in the real-time detection process. With a computational time of around 10 ms, the proposed system is able to work five times faster than the existing system.
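
    The paper's exact feature set is not specified in the abstract; the sketch below assumes simple per-window statistics over three accelerometer and three gyroscope axes, trains a small LGBMClassifier, and times a single-window prediction to illustrate the kind of millisecond-scale latency the abstract reports.

```python
# Sketch of the windowed-IMU-to-LightGBM pipeline; the 6 columns stand in for
# 3 accelerometer + 3 gyroscope axes, and the window statistics are assumptions,
# not the paper's exact feature set.
import time
import numpy as np
from lightgbm import LGBMClassifier

rng = np.random.default_rng(2)
n_windows, win_len, n_axes = 500, 50, 6       # e.g. 50 samples per window
raw = rng.standard_normal((n_windows, win_len, n_axes))
y = rng.integers(0, 2, n_windows)             # 0 = normal, 1 = abnormal riding

# per-axis mean and standard deviation per window -> 12 features
feats = np.concatenate([raw.mean(axis=1), raw.std(axis=1)], axis=1)

clf = LGBMClassifier(n_estimators=100, num_leaves=15)  # small, fast model
clf.fit(feats, y)

t0 = time.perf_counter()
clf.predict(feats[:1])                        # single-window inference
print(f"latency: {(time.perf_counter() - t0) * 1e3:.1f} ms")
```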

    Using Eye-tracking Data to Predict Situation Awareness in Real Time during Takeover Transitions in Conditionally Automated Driving

    Get PDF
    Situation awareness (SA) is critical to improving takeover performance during the transition from automated driving to manual driving. Although many studies have measured SA during or after the driving task, few have attempted to predict SA in real time in automated driving. In this work, we propose to predict SA during the takeover transition period in conditionally automated driving using eye-tracking and self-reported data. First, a tree-ensemble machine learning model, LightGBM (Light Gradient Boosting Machine), was used to predict SA. Second, in order to understand which factors influenced SA and how, SHAP (SHapley Additive exPlanations) values of the individual predictor variables in the LightGBM model were calculated. These SHAP values explained the prediction model by identifying the most important factors and their effects on SA, and further improved the model performance of LightGBM through feature selection. We standardized SA between 0 and 1 by aggregating three performance measures (i.e., placement, distance, and speed estimation of vehicles with regard to the ego-vehicle) of SA in recreating simulated driving scenarios, after 33 participants viewed 32 videos with six lengths between 1 and 20 s. Using only eye-tracking data, our proposed model outperformed other selected machine learning models, with a root-mean-squared error (RMSE) of 0.121, a mean absolute error (MAE) of 0.096, and a 0.719 correlation coefficient between the predicted SA and the ground truth. The code is available at https://github.com/refengchou/Situation-awareness-prediction. Our proposed model provides important implications for monitoring and predicting SA in real time in automated driving using eye-tracking data.
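
    The authors' code is linked above; independent of it, a minimal sketch of the general LightGBM-plus-SHAP workflow the abstract describes might look as follows, with random arrays standing in for the eye-tracking features and the 0-to-1 SA scores.

```python
# Sketch of the LightGBM + SHAP workflow described above, with random arrays
# standing in for the paper's eye-tracking features and 0-1 SA scores.
import numpy as np
import shap
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 10))            # eye-tracking features
y = rng.random(300)                           # SA standardized to [0, 1]

model = LGBMRegressor(n_estimators=200).fit(X, y)
pred = model.predict(X)
print("RMSE:", mean_squared_error(y, pred) ** 0.5)
print("MAE :", mean_absolute_error(y, pred))
print("r   :", np.corrcoef(y, pred)[0, 1])

# SHAP values attribute each prediction to individual features; ranking their
# mean absolute values identifies the most influential predictors of SA.
explainer = shap.TreeExplainer(model)
shap_vals = explainer.shap_values(X)          # shape: (n_samples, n_features)
importance = np.abs(shap_vals).mean(axis=0)
print("top features:", np.argsort(importance)[::-1][:3])
```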

    Past, Present, and Future of EEG-Based BCI Applications

    Get PDF
    An electroencephalography (EEG)-based brain–computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. Beyond this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance, by making them more efficient and collaborative and by helping people develop themselves. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 to 2019. The systematic literature review was prepared based on three databases: PubMed, Web of Science, and Scopus. The review was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of research between the medical and non-medical domains has been analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal-processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future have been analyzed.

    Assessment of Human Behavior in Virtual Reality by Eye Tracking

    Get PDF
    Virtual reality (VR) is not a new technology but has been in development for decades, driven by advances in computer technology such as computer graphics, simulation, visualization, hardware and software, and human-computer interaction. Currently, VR technology is increasingly being used in applications to enable immersive, yet controlled research settings. Education and entertainment are two important application areas, where VR has been considered a key enabler of immersive experiences and their further advancement. At the same time, the study of human behavior in such innovative environments is expected to contribute to a better design of VR applications. Therefore, modern VR devices are consistently equipped with eye-tracking technology, thus enabling further studies of human behavior through the collection of process data. In particular, eye-tracking technology in combination with machine learning techniques and explainable models can provide new insights for a deeper understanding of human behavior during immersion in virtual environments. In this work, a systematic computational framework based on eye-tracking and behavioral user data and state-of-the-art machine learning approaches is proposed to understand human behavior and individual differences in VR contexts. This computational framework is then employed in three user studies across two different domains, namely education and entertainment. In the educational domain, the exploration of human behavior during educational activities is a timely and challenging question that can only be addressed in an interdisciplinary setting, to which educational VR platforms such as immersive VR classrooms can contribute. To this end, two different immersive VR classrooms were created in which students can learn computational thinking skills and teachers can train in classroom management. Students' and teachers' visual perception and cognitive processing behaviors are investigated using eye-tracking data and machine learning techniques in combination with explainable models. Results show that eye movements reveal different human behaviors as well as individual differences during immersion in VR, providing important insights for immersive and effective VR classroom design. In terms of VR entertainment, eye movements open a new avenue for evaluating VR locomotion techniques from the perspective of user cognitive load and user experience using machine learning methods. Research in these two domains demonstrates the effectiveness of eye movements as a proxy for evaluating human behavior in educational and entertainment VR contexts. In summary, this work paves the way for assessing human behavior in VR scenarios and provides profound insights into designing, evaluating, and improving interactive VR systems. In particular, more effective and customizable virtual environments can be created to provide users with tailored experiences.
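
    The dissertation's framework is not reproduced here; as one concrete example of turning raw gaze samples into the behavioral features such a framework consumes, the sketch below implements a standard dispersion-threshold (I-DT) fixation detector. The thresholds and the synthetic gaze trace are assumptions for illustration.

```python
# Illustrative I-DT (dispersion threshold) fixation detector: one standard way
# to turn raw VR gaze samples into fixation features for downstream ML models.
# The thresholds and synthetic gaze trace are assumptions, not from the work.
import numpy as np

def idt_fixations(x, y, t, max_disp=1.0, min_dur=0.1):
    """Return (start_time, duration) for windows whose gaze dispersion
    (x-range + y-range, e.g. in degrees) stays under max_disp."""
    fixations, i = [], 0
    while i < len(t):
        j = i
        # grow the window while adding the next sample keeps dispersion low
        while j + 1 < len(t) and (np.ptp(x[i:j + 2]) + np.ptp(y[i:j + 2])) <= max_disp:
            j += 1
        if t[j] - t[i] >= min_dur:
            fixations.append((t[i], t[j] - t[i]))
        i = j + 1
    return fixations

rng = np.random.default_rng(4)
t = np.arange(0, 2, 0.01)                     # 100 Hz gaze samples, 2 s
x = np.where(t < 1, 0.0, 8.0) + 0.05 * rng.standard_normal(t.size)
y = 0.05 * rng.standard_normal(t.size)        # two noisy fixation clusters
for start, dur in idt_fixations(x, y, t):
    print(f"fixation at {start:.2f}s, duration {dur:.2f}s")
```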