
    Detecting emotional stress from facial expressions for driving safety

    Monitoring the attentive and emotional status of the driver is critical for the safety and comfort of driving. In this work, a real-time, non-intrusive monitoring system is developed, which detects the emotional states of the driver by analyzing facial expressions. The system considers two negative basic emotions, anger and disgust, as stress-related emotions. We detect an individual emotion in each video frame, and the decision on the stress level is made at the sequence level. Experimental results show that the developed system operates very well on simulated data even with generic models. An additional pose normalization step reduces the impact of pose mismatch due to camera setup and pose variation, and hence further improves the detection accuracy.
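    The abstract describes per-frame emotion detection followed by a sequence-level stress decision based on anger and disgust. The snippet below is a minimal sketch of such an aggregation step, assuming one emotion label per frame; the threshold and the ratio-based rule are assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' code): sequence-level stress decision
# from per-frame emotion labels, treating anger and disgust as stress-related.
from collections import Counter

STRESS_EMOTIONS = {"anger", "disgust"}  # stress-related emotions named in the abstract

def sequence_stress_level(frame_emotions, threshold=0.3):
    """Flag a video sequence as stressed when the fraction of frames labelled
    with a stress-related emotion exceeds a threshold (threshold is assumed)."""
    counts = Counter(frame_emotions)
    stress_frames = sum(counts[e] for e in STRESS_EMOTIONS)
    ratio = stress_frames / max(len(frame_emotions), 1)
    return ratio >= threshold, ratio

# Example: one predicted label per analysed frame
stressed, ratio = sequence_stress_level(
    ["neutral", "anger", "anger", "disgust", "neutral"])
print(stressed, round(ratio, 2))  # True 0.6
```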

    Psychophysiological responses to takeover requests in conditionally automated driving

    In SAE Level 3 automated driving, taking over control from automation raises significant safety concerns because drivers out of the vehicle control loop have difficulty negotiating takeover transitions. Existing studies on takeover transitions have focused on drivers' behavioral responses to takeover requests (TORs). As a complement, this exploratory study aimed to examine drivers' psychophysiological responses to TORs as a result of varying non-driving-related tasks (NDRTs), traffic density and TOR lead time. A total of 102 drivers were recruited, and each of them experienced 8 takeover events in a high-fidelity fixed-base driving simulator. Drivers' gaze behaviors, heart rate (HR) activities, galvanic skin responses (GSRs), and facial expressions were recorded and analyzed during two stages. First, during the automated driving stage, we found that drivers had lower heart rate variability, narrower horizontal gaze dispersion, and shorter eyes-on-road time when they had a high level of cognitive load relative to a low level of cognitive load. Second, during the takeover transition stage, a 4 s lead time led to inhibited blinking and larger maximum and mean GSR phasic activation compared with a 7 s lead time, whilst heavy traffic density resulted in more pronounced HR acceleration patterns than light traffic density. Our results showed that psychophysiological measures can indicate specific internal states of drivers, including their workload, emotions, attention, and situation awareness, in a continuous, non-invasive and real-time manner. The findings provide additional support for the value of using psychophysiological measures in automated driving and for future applications in driver monitoring systems and adaptive alert systems.
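    Two of the psychophysiological measures mentioned above, heart rate variability and GSR phasic activation, can be illustrated with a short sketch. This is not the study's processing pipeline: the RR intervals, the sampling rate, and the moving-average detrending used to approximate the phasic component are all assumptions.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms),
    a common time-domain heart rate variability index."""
    diffs = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def gsr_phasic(gsr, fs=4.0, window_s=10.0):
    """Crude phasic component: raw skin conductance minus a moving-average
    estimate of the tonic level (window length is an assumption)."""
    gsr = np.asarray(gsr, dtype=float)
    win = max(int(fs * window_s), 1)
    tonic = np.convolve(gsr, np.ones(win) / win, mode="same")
    return gsr - tonic

rr = [812, 798, 805, 790, 820, 801]                 # hypothetical RR intervals in ms
phasic = gsr_phasic(np.sin(np.linspace(0, 6, 120)) + 0.1)
print(round(rmssd(rr), 1), round(float(phasic.max()), 3))
```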

    A multimodal system for stress detection

    Stress is the physiological or psychological response to internal or external factors, and it can occur in the short or long term. Prolonged stress can be harmful since it affects the body negatively in several ways, contributing to mental and physical health problems. Although stress is not simple to identify properly, several studies support the existence of a correlation between stress and perceivable human features. Several approaches can be taken to detect stress; however, the task is more difficult in uncontrolled environments and where non-invasive methods are required. Heart rate variability (HRV), facial expressions, eye blinks, pupil diameter and PERCLOS (percentage of eye closure) are non-invasive approaches that have proved capable of accurately identifying mental stress. For this project, the users' physiological signals were collected by an external video-based application in a non-invasive way, and data from a brief questionnaire was used to complement the physiological data. After the proposed solution was implemented and tested, it was concluded that the best algorithm for stress detection was the random forest classifier, which obtained a final result of 84.04% accuracy, with 94.89% recall and 87.88% F1 score. This solution uses HRV data, facial expressions, PERCLOS and some personal characteristics of the user.
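    As a rough illustration of the kind of model the abstract reports (a random forest over HRV, facial-expression, PERCLOS and personal features), the following sketch uses scikit-learn; the feature layout and the data are placeholders, not the project's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((500, 6))        # e.g. [rmssd, sdnn, mean_hr, expression_score, perclos, age]
y = rng.integers(0, 2, 500)     # 0 = not stressed, 1 = stressed (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), recall_score(y_te, pred), f1_score(y_te, pred))
```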

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    Employing consumer electronic devices in physiological and emotional evaluation of common driving activities

    It is important to equip future vehicles with an on-board system capable of tracking and analysing driver state in real time in order to mitigate the risk of human error in manual or semi-autonomous driving. This study aims to provide supporting evidence for the adoption of consumer-grade electronic devices in driver state monitoring. The study adopted a repeated-measures design and was performed in a high-fidelity driving simulator. A total of 39 participants of mixed age and gender took part in the user trials. A mobile application was developed to demonstrate how a mobile device can act as a host for a driver state monitoring system and support connectivity, synchronisation, and storage of driver-state-related measures from multiple devices. The results of this study showed that multiple physiological measures, sourced from consumer-grade electronic devices, can be used to successfully distinguish task complexities across common driving activities. For instance, galvanic skin response and some heart rate derivatives were found to be correlated with overall subjective workload ratings. Furthermore, emotions were captured and shown to be affected by extreme driving situations.
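    The reported link between galvanic skin response, heart-rate derivatives and subjective workload ratings is, at its core, a correlation analysis. The sketch below shows one plausible form of it; the measure, the rating scale and all values are hypothetical.

```python
from scipy.stats import spearmanr

# Hypothetical per-task aggregates: mean GSR (microsiemens) and overall
# subjective workload ratings (e.g. NASA-TLX style scores).
mean_gsr_per_task = [0.8, 1.1, 1.6, 2.0, 2.4]
workload_rating   = [22, 35, 48, 61, 70]

rho, p = spearmanr(mean_gsr_per_task, workload_rating)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```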

    Methods and techniques for analyzing human factors facets on drivers

    With millions of cars moving daily, driving is the most performed activity worldwide. Unfortunately, according to the World Health Organization (WHO), around 1.35 million people worldwide die every year from road traffic accidents and between 20 and 50 million more are injured, placing road traffic accidents as the second leading cause of death among people between the ages of 5 and 29. According to the WHO, human errors such as speeding, driving under the influence of drugs, fatigue, or distractions at the wheel are the underlying cause of most road accidents. Global reports on road safety, such as "Road safety in the European Union. Trends, statistics, and main challenges" prepared by the European Commission in 2018, presented a statistical analysis relating road accident mortality rates to periods segmented by hours and days of the week. This report revealed that the highest incidence of mortality regularly occurs in the afternoons of working days, coinciding with the period when the volume of traffic increases and when any human error is much more likely to cause a traffic accident. Accordingly, mitigating human errors in driving is a challenge, and there is currently a growing trend towards technological solutions intended to integrate driver information into advanced driving systems to improve driver performance and ergonomics.
    The study of human factors in the field of driving is a multidisciplinary field in which several areas of knowledge converge, among which stand out psychology, physiology, instrumentation, signal processing, machine learning, the integration of information and communication technologies (ICTs), and the design of human-machine communication interfaces. The main objective of this thesis is to exploit knowledge related to the different facets of human factors in the field of driving. Specific objectives include identifying tasks related to driving, detecting unfavorable cognitive states in the driver, such as stress, and, transversally, proposing an architecture for the integration and coordination of driver monitoring systems with other active safety systems. These specific objectives address the critical aspects of each of the issues considered.
    Identifying driving-related tasks is one of the primary aspects of the conceptual framework of driver modeling. Identifying the maneuvers that a driver performs requires training a model beforehand with examples of each maneuver to be identified. To this end, a methodology was established to build a data set that relates the handling of the driving controls (steering wheel, pedals, gear lever, and turn indicators) to a series of adequately identified maneuvers. This methodology consisted of designing different driving scenarios in a realistic driving simulator for each type of maneuver, including stops, overtaking, turns, and specific maneuvers such as the U-turn and the three-point turn.
    From the perspective of detecting unfavorable cognitive states in the driver, stress can damage cognitive faculties, causing failures in the decision-making process. Physiological signals such as measurements derived from the heart rhythm or changes in the electrical properties of the skin are reliable indicators when assessing whether a person is going through an episode of acute stress. However, the detection of stress patterns is still an open problem. Despite advances in sensor design for the non-invasive collection of physiological signals, certain factors prevent reaching models capable of detecting stress patterns in any subject. This thesis addresses two aspects of stress detection: the collection of physiological values during stress elicitation, both through laboratory techniques such as the Stroop effect and through driving tests; and the detection of stress by designing a process flow based on unsupervised learning techniques, delving into the problems associated with the intra- and inter-individual variability of physiological measures that prevents the achievement of generalist models.
    Finally, in addition to developing models that address the different aspects of monitoring, the orchestration of monitoring systems and active safety systems is a transversal and essential aspect of improving safety, ergonomics, and the driving experience. Both from the perspective of integration into test platforms and of integration into final systems, the problem of deploying multiple active safety systems lies in the adoption of monolithic models where the system-specific functionality runs in isolation, without considering aspects such as cooperation and interoperability with other safety systems. This thesis addresses the development of more complex systems in which monitoring systems condition the operability of multiple active safety systems. To this end, a mediation architecture is proposed to coordinate the reception and delivery of the data flows generated by the various systems involved, including external sensors (lasers, external cameras), cabin sensors (cameras, smartwatches), detection models, deliberative models, delivery systems and human-machine communication interfaces. Ontology-based data modeling plays a crucial role in structuring all this information and consolidating the semantic representation of the driving scene, thus allowing the development of models based on data fusion.
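    The abstract above mentions an unsupervised process flow for stress detection and the difficulty posed by intra- and inter-individual variability of physiological measures. The following is a minimal sketch, not the thesis pipeline: per-subject normalisation followed by clustering, where the feature names, array shapes and two-cluster assumption are all illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def per_subject_zscore(X, subject_ids):
    """Z-score each feature within each subject to remove individual baselines,
    one simple way to reduce inter-individual variability."""
    X = np.asarray(X, dtype=float).copy()
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        m = subject_ids == s
        X[m] = (X[m] - X[m].mean(axis=0)) / (X[m].std(axis=0) + 1e-9)
    return X

rng = np.random.default_rng(1)
X = rng.random((200, 3))                     # e.g. heart rate, HRV index, skin conductance
subjects = np.repeat(np.arange(10), 20)      # 10 hypothetical subjects, 20 samples each
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    per_subject_zscore(X, subjects))         # tentative rest vs. stress clusters
print(np.bincount(labels))
```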

    Sistema de reconhecimento de expressões faciais para deteção de stress (Facial expression recognition system for stress detection)

    Stress is the body's natural reaction to external and internal stimuli. Despite being something natural, prolonged exposure to stressors can contribute to serious health problems. These reactions are reflected not only physiologically but also psychologically, translating into emotions and facial expressions. Once this relationship between the experience of stressful situations and the demonstration of certain emotions in response was understood, it was decided to develop a system capable of classifying facial expressions and thereby create a stress detector. The proposed solution consists of two main blocks: a convolutional neural network capable of classifying facial expressions, and an application that uses this model to classify real-time images of the user's face and thereby verify whether or not the user shows signs of stress. The application captures real-time images from the webcam, extracts the user's face, classifies the facial expression, and uses these classifications to assess whether the user shows signs of stress in a given time interval. As soon as the application determines the presence of signs of stress, it notifies the user. The classification model was created using transfer learning together with fine-tuning, taking advantage of the pre-trained networks VGG16, VGG19, and Inception-ResNet V2 to solve the problem at hand. Two classifier architectures were also tried for the transfer-learning process. After several experiments, it was determined that VGG16, together with a classifier made up of a convolutional layer, was the candidate with the best performance at classifying stress-related emotions, presenting an MCC of 0.8969 on the test images of the KDEF dataset, 0.5551 on the Net Images dataset, and 0.4250 on CK+.
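    The abstract describes transfer learning from a pre-trained VGG16 with a convolutional classifier head, followed by fine-tuning. The sketch below shows one way such a setup can look in Keras; the input size, head layout and seven-class output are assumptions rather than the thesis configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained VGG16 base without its classification top; weights from ImageNet.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: keep the convolutional base frozen at first

model = models.Sequential([
    base,
    layers.Conv2D(128, 3, activation="relu"),   # small convolutional classifier head
    layers.GlobalAveragePooling2D(),
    layers.Dense(7, activation="softmax"),      # e.g. 7 facial-expression classes (assumed)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# For fine-tuning, the top VGG16 blocks can later be unfrozen and the model
# recompiled with a lower learning rate before continuing training.
```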

    On driver behavior recognition for increased safety: A roadmap

    Advanced Driver-Assistance Systems (ADASs) are used for increasing safety in the automotive domain, yet current ADASs notably operate without taking into account drivers’ states, e.g., whether she/he is emotionally apt to drive. In this paper, we first review the state of the art of emotional and cognitive analysis for ADAS: we consider psychological models, the sensors needed for capturing physiological signals, and the typical algorithms used for human emotion classification. Our investigation highlights a lack of advanced Driver Monitoring Systems (DMSs) for ADASs, which could increase driving quality and security for both drivers and passengers. We then provide our view on a novel perception architecture for driver monitoring, built around the concept of Driver Complex State (DCS). DCS relies on multiple non-obtrusive sensors and Artificial Intelligence (AI) for uncovering the driver state and uses it to implement innovative Human–Machine Interface (HMI) functionalities. This concept will be implemented and validated in the recently EU-funded NextPerception project, which is briefly introduced.