
    Action Units and Their Cross-Correlations for Prediction of Cognitive Load during Driving

    Driving requires the constant coordination of many body systems and the driver's full attention. Cognitive distraction (subsidiary mental load) is an important factor that decreases attention and responsiveness, which may result in human error and accidents. In this paper, we present a study of the facial expressions associated with such diversion of attention. First, we introduce a multi-camera database of 46 people recorded while driving a simulator under two conditions: a baseline and cognitive load induced by a secondary task. Then, we present an automatic system to differentiate between the two conditions, using features extracted from Facial Action Unit (AU) values and their cross-correlations in order to exploit recurring synchronization and causality patterns. Both the recording and detection systems are suitable for integration in a vehicle and in a real-world application, e.g. an early warning system. We show that when the system is trained individually on each subject we achieve a mean accuracy and F-score of ~95%; subject-independent tests yield ~68% accuracy and ~66% F-score, with person-specific normalization to handle subject dependency. Based on the results, we discuss the universality of the facial expressions of such states and possible real-world uses of the system.
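The cross-correlation features mentioned above can be sketched as follows. This is a minimal illustration of how lagged correlations between two AU time series might be turned into a feature vector; the function name `xcorr_features` and the `max_lag` parameter are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def xcorr_features(au_a, au_b, max_lag=10):
    """Normalized cross-correlation of two AU time series at lags
    -max_lag..+max_lag, usable as a feature vector for a classifier."""
    a = (au_a - au_a.mean()) / (au_a.std() + 1e-8)
    b = (au_b - au_b.mean()) / (au_b.std() + 1e-8)
    n = len(a)
    feats = []
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            c = np.dot(a[:lag], b[-lag:]) / n   # b leads a
        elif lag > 0:
            c = np.dot(a[lag:], b[:-lag]) / n   # a leads b
        else:
            c = np.dot(a, b) / n                # simultaneous
        feats.append(c)
    return np.array(feats)
```

The lag at which the correlation peaks captures which AU tends to lead the other, which is one way the "causality patterns" in the abstract could be encoded.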

    Statistical modelling for facial expression dynamics

    One of the most powerful and fastest means of relaying emotions between humans is facial expression. The ability to capture, understand and mimic those emotions and their underlying dynamics in a synthetic counterpart is a challenging task because of the complexity of human emotions, the different ways of conveying them, non-linearities caused by facial feature and head motion, and the ever-critical eye of the viewer. This thesis sets out to address some of the limitations of existing techniques by investigating three components of an expression modelling and parameterisation framework: (1) feature and expression manifold representation, (2) pose estimation, and (3) expression dynamics modelling and its parameterisation for the purpose of driving a synthetic head avatar. First, we introduce a hierarchical representation based on the Point Distribution Model (PDM). Holistic representations imply that non-linearities caused by the motion of facial features, and intra-feature correlations, are implicitly embedded and hence have to be accounted for in the resulting expression space. Such representations also require large training datasets to account for all possible variations. To address those shortcomings, and to provide a basis for learning more subtle, localised variations, our representation consists of a tree-like structure in which a holistic root component is decomposed into leaves containing the jaw outline, each of the eyes and eyebrows, and the mouth. Each of the hierarchical components is modelled according to its intrinsic functionality, rather than the final, holistic expression label. Secondly, we introduce a statistical approach for capturing an underlying low-dimensional expression manifold by utilising components of the previously defined hierarchical representation.
    Since Principal Component Analysis (PCA) based approaches cannot reliably capture variations caused by large facial feature changes because of their linear nature, the underlying dynamics manifold for each of the hierarchical components is modelled using a Hierarchical Latent Variable Model (HLVM) approach. Whilst retaining PCA properties, such a model introduces a probability density model which can deal with missing or incomplete data and allows discovery of internal within-cluster structures. All of the model parameters and the underlying density model are automatically estimated during the training stage. We investigate the usefulness of such a model on larger, unseen datasets. Thirdly, we extend the HLVM concept to pose estimation, to address the non-linear shape deformations and the definition of the plausible pose space caused by large head motion. Since our heads rarely stay still, and their movements are intrinsically connected with the way we perceive and understand expressions, pose information is an integral part of their dynamics. The proposed approach integrates into our existing hierarchical representation model. It is learned using a sparse, discretely sampled training dataset, and generalises to a larger, continuous view-sphere. Finally, we introduce a framework that models and extracts expression dynamics. In existing frameworks, explicit definition of expression intensity and pose information is often overlooked, although it is usually implicitly embedded in the underlying representation. We investigate modelling of expression dynamics based on static information only, and focus on its sufficiency for the task at hand. We compare a rule-based method that utilises the existing latent structure and provides a fusion of the different components with holistic and Bayesian Network (BN) approaches. An Active Appearance Model (AAM) based tracker is used to extract relevant information from input sequences.
    Such information is subsequently used to define the parametric structure of the underlying expression dynamics. We demonstrate that such information can be utilised to animate a synthetic head avatar.
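The key advantage the abstract claims over plain PCA is an explicit probability density over shapes. A numpy-only sketch of that idea, using the probabilistic-PCA density (Tipping & Bishop) on which such latent variable models build, is shown below; the toy "shape" data, the function names, and the two-factor setup are illustrative assumptions, not the thesis's HLVM.

```python
import numpy as np

def fit_ppca(X, q):
    """Fit a probabilistic PCA model with q latent dimensions.
    Returns the data mean and the model covariance C = W W^T + sigma^2 I."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / len(X)
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]    # sort descending
    sigma2 = evals[q:].mean()                     # noise = discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    C = W @ W.T + sigma2 * np.eye(X.shape[1])
    return mu, C

def log_density(x, mu, C):
    """Gaussian log-likelihood of a shape vector under the PPCA model,
    so incomplete or unusual shapes can be scored explicitly."""
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (d * np.log(2 * np.pi) + logdet
                   + diff @ np.linalg.solve(C, diff))
```

Unlike plain PCA projection, the density lets the model assign low likelihood to implausible facial-component shapes, which is what makes handling missing data and within-cluster structure possible.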

    Investigating Tafheet as a Unique Driving Style Behaviour

    Road safety has become a major concern due to the increased rate of deaths caused by road accidents. For this purpose, intelligent transportation systems are being developed to reduce the number of fatalities on the road. A plethora of work has been undertaken on the detection of different styles of driver behaviour, such as fatigue and drunken driving; however, owing to the complexity of human behaviour, much has yet to be explored in this field to assess different styles of abnormal behaviour and make roads safer for travelling. This research focuses on the detection of a set of complex driver behaviours, namely tafheet, reckless and aggressive driving, by proposing and building a driver behaviour detection model as a context-aware system in the VANET environment. Tafheet is a complex behaviour exhibited by young drivers in the Middle East, Japan and the USA. It is characterised by driving at dangerously high speeds (beyond those commonly seen in aggressive behaviour) coupled with drifting and angular movements of the wheels of the vehicle, making it both aggressive and reckless. A dynamic Bayesian network (DBN) framework was therefore applied to perform reasoning about the uncertainty associated with driver behaviour and to deduce the possible combinations of behaviour based on the information gathered by the system about the foregoing factors. Based on the concept of context-awareness, a novel tafheet driver behaviour detection architecture was built in this thesis, separated into three phases: a sensing phase, a processing and thinking phase, and an acting phase. The proposed system elaborates the interactions of the various components of the architecture with each other in order to detect the required outcomes. The implementation of the proposed system was executed using the GeNIe 2.0 software, resulting in the construction of a DBN model.
    The DBN model was evaluated using an experimental dataset in order to substantiate its functionality and accuracy in detecting tafheet, reckless and aggressive behaviours in real time. It was shown that the proposed system was able to detect the selected abnormal driver behaviours based on the contextual data collected. The novelty of this system is that it can detect reckless, aggressive and tafheet behaviour sequentially, based on the intensity of the driver's behaviour itself. In contrast to previous detection models, this research proposes an On-Board Unit architecture for the arrangement of sensors and for the data processing and decision making of the proposed system, which can be used to pre-infer complex behaviours like tafheet. It thus has the potential to prevent road accidents caused by tafheet behaviour.
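The sequential escalation the abstract describes (normal, then aggressive, then reckless, then tafheet) is exactly what DBN filtering captures: at each time step the belief over behaviour states is propagated through a transition model and reweighted by the sensed evidence. The sketch below is a minimal discrete filtering step; the state set, transition matrix `T`, observation model `E`, and the discretized sensor reading are all hypothetical numbers for illustration, not the thesis's GeNIe model.

```python
import numpy as np

states = ["normal", "aggressive", "reckless", "tafheet"]

# Hypothetical transition model P(state_t | state_{t-1}):
# behaviour tends to persist, and escalation is gradual.
T = np.array([
    [0.90, 0.08, 0.015, 0.005],
    [0.10, 0.80, 0.08,  0.02 ],
    [0.05, 0.10, 0.75,  0.10 ],
    [0.02, 0.05, 0.13,  0.80 ],
])

# Hypothetical observation model P(obs | state) for a discretized reading:
# 0 = normal speed, 1 = high speed, 2 = high speed with drifting.
E = np.array([
    [0.85, 0.13, 0.02],
    [0.30, 0.60, 0.10],
    [0.10, 0.60, 0.30],
    [0.02, 0.18, 0.80],
])

def filter_step(belief, obs):
    """One DBN filtering step: predict with T, weight by the evidence
    likelihood for the observation, and renormalize."""
    predicted = belief @ T
    updated = predicted * E[:, obs]
    return updated / updated.sum()

belief = np.array([1.0, 0.0, 0.0, 0.0])   # start in "normal"
for obs in [2, 2, 2, 2]:                  # sustained high speed with drifting
    belief = filter_step(belief, obs)
```

Under sustained high-speed drifting evidence, the belief mass shifts step by step toward the tafheet state, mirroring the intensity-based sequential detection the thesis describes.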

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, the second section (Chapters 8 to 11) presents research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. The last section of the book (Chapters 17 to 22) presents applications related to affective computing.