    Drivers’ behaviour modelling for virtual worlds

    In this paper we present a study that models drivers’ behaviour with a view to contributing to the problem of road rage. The approach we adopt is based on agent technology, particularly multi-agent systems. Each driver is represented by a software agent. A virtual environment is used to simulate drivers’ behaviour, enabling us to observe the conditions leading to road rage. The simulated model is then used to suggest possible ways of alleviating this societal problem. Our agents are equipped with an emotional module that makes their behaviour more human-like. To this end, we propose a computational emotion model based on the OCC model and probabilistic cognitive maps. The key influencing factors included in the model are personality, emotions and some social/personal attributes.
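    The appraisal step of an OCC-style emotion model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`DriverAgent`, `appraise`), the appraisal thresholds, and the way the irritability trait scales anger intensity are all assumptions.

```python
# Minimal sketch of an OCC-style appraisal step for a driver agent.
# All names and thresholds are illustrative, not taken from the paper.

def appraise(goal_congruence: float, blame_other: bool) -> str:
    """Map a simple OCC-style appraisal to a discrete emotion.

    goal_congruence: -1.0 (event blocks the agent's goals) .. 1.0 (event helps them)
    blame_other: whether another agent is judged responsible for the event
    """
    if goal_congruence >= 0.3:
        return "joy"
    if goal_congruence <= -0.3:
        # OCC distinguishes agent-blaming anger from agentless distress.
        return "anger" if blame_other else "distress"
    return "neutral"

class DriverAgent:
    def __init__(self, irritability: float):
        self.irritability = irritability  # personality trait in [0, 1] (assumed)
        self.anger = 0.0

    def perceive(self, goal_congruence: float, blame_other: bool) -> str:
        emotion = appraise(goal_congruence, blame_other)
        if emotion == "anger":
            # Personality modulates emotion intensity, echoing the paper's
            # combination of personality, emotions and social attributes.
            self.anger = min(1.0, self.anger + 0.5 * (1 + self.irritability))
        return emotion

agent = DriverAgent(irritability=0.8)
print(agent.perceive(goal_congruence=-0.7, blame_other=True))  # anger
```

    Repeated anger-inducing events (being cut off, blocked lanes) would drive `anger` toward 1.0, which is one way a simulation could operationalise the onset of road rage.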

    CogEmoNet: A cognitive-feature-augmented driver emotion recognition model for smart cockpit

    Driver emotion recognition is vital to improving driving safety, comfort, and acceptance of intelligent vehicles. This article presents a cognitive-feature-augmented driver emotion detection method based on emotional cognitive process theory and deep networks. Unlike traditional methods, both the driver's facial expression and cognitive process characteristics (age, gender, and driving age) are used as inputs to the proposed model. Convolutional techniques were adopted to construct a model for driver emotion detection that simultaneously considers the driver's facial expression and cognitive process characteristics. A driver emotion data collection was carried out to validate the performance of the proposed method. The collected dataset consists of 40 drivers' frontal facial videos, their cognitive process characteristics, and self-reported assessments of driver emotions. Two other deep networks were also used to compare recognition performance. The results show that the proposed method achieves good detection results on different databases for the discrete emotion model and the dimensional emotion model, respectively.
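    The core idea of augmenting a facial-expression embedding with cognitive features can be sketched as a simple concatenation before a classification head. This is an illustrative sketch, not CogEmoNet itself: the embedding size, the scaling of the cognitive features, and the linear head are all assumptions.

```python
import numpy as np

# Illustrative sketch (not the authors' code): fusing a facial-expression
# embedding with cognitive-process features (age, gender, driving age)
# before a final classification layer, as the abstract describes.

rng = np.random.default_rng(0)

def fuse_features(face_embedding, age, gender, driving_age):
    # Scale the scalar cognitive features to roughly [0, 1] before fusion
    # (assumed preprocessing; the paper does not specify the scaling).
    cognitive = np.array([age / 100.0, float(gender), driving_age / 50.0])
    return np.concatenate([face_embedding, cognitive])

face = rng.normal(size=128)          # stand-in for a CNN facial embedding
fused = fuse_features(face, age=35, gender=1, driving_age=10)

# A toy linear classification head over the fused vector (softmax over
# five hypothetical emotion classes).
W = rng.normal(size=(131, 5)) * 0.1
logits = fused @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

    In the real model the linear head would be a trained deep network, but the fusion point, concatenating cognitive scalars onto the visual embedding, is the part the abstract emphasises.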

    Adaptive 3D facial action intensity estimation and emotion recognition

    Automatic recognition of facial emotion has been widely studied for various computer vision tasks (e.g. health monitoring, driver state surveillance and personalized learning). Most existing facial emotion recognition systems, however, either have not fully considered subject-independent dynamic features or were limited to 2D models, and thus are not robust enough for real-life recognition tasks with subject variation, head movement and illumination change. Moreover, there is also a lack of systematic research on the effective detection of newly arrived novel emotion classes. To address these challenges, we present a real-time 3D facial Action Unit (AU) intensity estimation and emotion recognition system. It automatically selects 16 motion-based facial feature sets using minimal-redundancy–maximal-relevance criterion based optimization and estimates the intensities of 16 diagnostic AUs using feedforward Neural Networks and Support Vector Regressors. We also propose a set of six novel adaptive ensemble classifiers for robust classification of the six basic emotions and the detection of newly arrived unseen novel emotion classes (emotions that are not included in the training set). A distance-based clustering and uncertainty measures of the base classifiers within each ensemble model are used to inform the novel class detection. Evaluated with the Bosphorus 3D database, the system has achieved the best performance of 0.071 overall Mean Squared Error (MSE) for AU intensity estimation using Support Vector Regressors, and 92.2% average accuracy for the recognition of the six basic emotions using the proposed ensemble classifiers. In comparison with related work, our system outperforms other state-of-the-art research on 3D facial emotion recognition for the Bosphorus database. Moreover, in online real-time evaluation with real human subjects, the proposed system also shows superior real-time performance, with 84% recognition accuracy and great flexibility and adaptation for the detection of newly arrived novel emotions (e.g. ‘contempt’, which is not among the six basic emotions).
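    The distance-based part of the novel-class detection can be sketched as a nearest-centroid classifier with a rejection threshold: a sample far from every known class centroid is flagged as a novel emotion. This is a hedged sketch under assumed 2D features and an assumed threshold; the paper additionally combines this with uncertainty measures of the base classifiers, which are omitted here.

```python
import numpy as np

# Sketch of distance-based novel-emotion detection: if a sample lies
# beyond a distance threshold from every known class centroid, flag it
# as a novel class. Centroids, features and threshold are illustrative.

def detect_novel(sample, centroids, threshold):
    dists = {label: np.linalg.norm(sample - c) for label, c in centroids.items()}
    nearest = min(dists, key=dists.get)
    if dists[nearest] > threshold:
        return "novel"           # too far from all known emotions
    return nearest

centroids = {
    "happiness": np.array([1.0, 0.0]),
    "anger": np.array([-1.0, 0.0]),
}
print(detect_novel(np.array([0.9, 0.1]), centroids, threshold=0.5))  # happiness
print(detect_novel(np.array([0.0, 3.0]), centroids, threshold=0.5))  # novel
```

    In the full system, a sample rejected here (e.g. a 'contempt' expression) would seed a new cluster rather than being forced into one of the six basic emotions.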

    Examining the Determinants of Mobile Location-based Services’ Continuance

    The continuance of use is an important topic of IS research. However, in the past, many researchers have focused on adoption rather than IS continuance. Studying continuance is of equal importance, because if use does not persist, this may limit the revenues of the provider. This is particularly true for consumer-oriented services, which rely on advertising- or subscription-based revenue models. In this paper, we investigate the determinants of location-based services (LBS) continuance as a relevant case study for the examination of IS continuance generally. A research model is developed and empirically tested through a survey of a representative sample in Germany. The proposed model builds on and extends the Limayem et al. model of IS continuance. Our analysis highlights the importance of habit and emotion in LBS continuance. The results indicate that habit has stronger predictive power than continuance intention for LBS continuance and that emotions are an important driver of user satisfaction with LBS.
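    The claim that habit predicts continuance better than intention is the kind of comparison made via standardized regression weights. The following is a toy illustration on synthetic data, not the paper's survey data or its structural model; the coefficients and noise levels are invented to mimic the reported pattern.

```python
import numpy as np

# Toy illustration (synthetic data, NOT the paper's survey) of comparing
# the predictive power of habit vs. continuance intention through
# standardized regression weights.

rng = np.random.default_rng(1)
n = 500
habit = rng.normal(size=n)
intention = 0.4 * habit + rng.normal(size=n)          # habit also feeds intention
continuance = 0.6 * habit + 0.2 * intention + rng.normal(scale=0.5, size=n)

X = np.column_stack([habit, intention])
X = (X - X.mean(axis=0)) / X.std(axis=0)              # standardize predictors
y = (continuance - continuance.mean()) / continuance.std()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[0] > beta[1])  # habit carries the larger standardized weight
```

    Standardizing both predictors and outcome makes the two weights directly comparable, which is what allows a statement like "habit has stronger predictive power than intention".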

    A novel driver emotion recognition system based on deep ensemble classification

    Driver emotion classification is an important topic that can raise awareness of driving habits, because many drivers are overconfident and unaware of their bad driving habits. Drivers will gain insight into their poor driving behaviours and be better able to avoid future accidents if their behaviour is automatically identified. In this paper, we use different models, such as convolutional neural networks, recurrent neural networks, and multi-layer perceptron classifiers, to construct an ensemble convolutional neural network-based enhanced driver facial expression recognition model. First, the drivers' faces are detected using the faster region-based convolutional neural network (R-CNN) model, which can recognize faces in real-time and offline video reliably and effectively. A feature-fusion technique is utilized to integrate the features extracted from three CNN models, and the fused features are then used to train the proposed ensemble classification model. To increase the accuracy and efficiency of face detection, a new convolutional neural network block (InceptionV3) replaces the improved Faster R-CNN feature-learning block. Evaluating the proposed face detection and driver facial expression recognition (DFER) approach, we achieved accuracies of 98.01%, 99.53%, 99.27%, 96.81%, and 99.90% on the JAFFE, CK+, FER-2013, AffectNet, and custom-developed datasets, respectively. The custom-developed dataset yielded the best results among all datasets under the simulation environment.
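    The simplest way to combine the predictions of several base models (e.g. CNN, RNN, and MLP heads) is a majority vote. This sketch illustrates that ensemble step only; the paper trains a classifier over fused features rather than voting, so the function below is an assumption used for illustration.

```python
from collections import Counter

# Illustrative ensemble step: majority vote over the discrete predictions
# of several base models. The paper instead trains a classifier on fused
# features; this voting rule is a simplified stand-in.

def ensemble_predict(predictions):
    """Majority vote; ties are broken by the first-encountered label."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

print(ensemble_predict(["happy", "happy", "neutral"]))  # happy
```

    Feature fusion, as used in the paper, lets the ensemble weigh evidence from each model continuously instead of the hard one-model-one-vote rule shown here.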

    Examining the effects of emotional valence and arousal on takeover performance in conditionally automated driving

    In conditionally automated driving, drivers have difficulty in takeover transitions as they become increasingly decoupled from the operational level of driving. Factors influencing takeover performance, such as takeover lead time and engagement in non-driving-related tasks, have been studied in the past. However, despite the important role emotions play in human-machine interaction and in manual driving, little is known about how emotions influence drivers’ takeover performance. This study therefore examined the effects of emotional valence and arousal on drivers’ takeover timeliness and quality in conditionally automated driving. We conducted a driving simulation experiment with 32 participants. Movie clips were played for emotion induction. Participants with different levels of emotional valence and arousal were required to take over control from automated driving, and their takeover time and quality were analyzed. Results indicate that positive valence led to better takeover quality in the form of a smaller maximum resulting acceleration and a smaller maximum resulting jerk. However, high arousal did not yield an advantage in takeover time. This study contributes to the literature by demonstrating how emotional valence and arousal affect takeover performance. The benefits of positive emotions carry over from manual driving to conditionally automated driving, while the benefits of arousal do not.
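    The two takeover-quality metrics named above can be computed directly from acceleration traces: the resulting acceleration is the magnitude of the combined longitudinal and lateral components, and jerk is its time derivative. The traces, sampling rate, and function name below are illustrative, not the study's data pipeline.

```python
import numpy as np

# Sketch of the takeover-quality metrics in the abstract: maximum resulting
# acceleration and maximum resulting jerk, computed from longitudinal (ax)
# and lateral (ay) acceleration traces. Data and sampling rate are toy values.

def max_resulting_accel_and_jerk(ax, ay, dt):
    a = np.hypot(ax, ay)          # resulting (combined) acceleration, m/s^2
    jerk = np.gradient(a, dt)     # time derivative of acceleration, m/s^3
    return a.max(), np.abs(jerk).max()

t = np.linspace(0.0, 2.0, 201)    # 2 s takeover window sampled at 100 Hz
dt = t[1] - t[0]
ax = 2.0 * np.sin(np.pi * t)      # toy longitudinal acceleration trace
ay = 0.5 * np.cos(np.pi * t)      # toy lateral acceleration trace
a_max, j_max = max_resulting_accel_and_jerk(ax, ay, dt)
print(round(a_max, 2))            # 2.0
```

    Smaller values of both maxima indicate a smoother, better-controlled takeover, which is the sense in which positive valence "led to better takeover quality".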