
    Discrimination of moderate and acute drowsiness based on spontaneous facial expressions

    It is important for drowsiness detection systems to identify different levels of drowsiness and respond appropriately at each level. This study explores how to discriminate moderate from acute drowsiness by applying computer vision techniques to the human face. In our previous study, spontaneous facial expressions measured through computer vision techniques were used as an indicator to discriminate alert from acutely drowsy episodes. In this study we explore which facial muscle movements are predictive of moderate and acute drowsiness. The effect of the temporal dynamics of action units on prediction performance is explored by capturing those dynamics with an overcomplete representation of temporal Gabor filters. In the final system we perform feature selection to build a classifier that discriminates moderately drowsy from acutely drowsy episodes. The system achieves a classification rate of 0.96 A′ in discriminating moderately drowsy versus acutely drowsy episodes. Moreover, the study reveals new information about facial behavior occurring during different stages of drowsiness.
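    The abstract's core feature-extraction step, capturing the temporal dynamics of action-unit outputs with a bank of temporal Gabor filters, can be illustrated with a minimal sketch. The filter widths, frequencies, and the energy summary below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def temporal_gabor(t, freq, sigma):
    """1-D temporal Gabor filter: a sinusoid under a Gaussian envelope."""
    envelope = np.exp(-t**2 / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * t)

def gabor_features(signal, freqs, sigmas, half_width=32):
    """Convolve an action-unit intensity time series with a bank of
    temporal Gabor filters; summarize each response by its energy."""
    t = np.arange(-half_width, half_width + 1)
    feats = []
    for f in freqs:
        for s in sigmas:
            kernel = temporal_gabor(t, f, s)
            response = np.convolve(signal, kernel, mode="same")
            feats.append(np.sum(response**2))  # energy of this filter's response
    return np.array(feats)

# Synthetic stand-in for an AU time series sampled at 30 Hz
rng = np.random.default_rng(0)
au_signal = np.sin(2 * np.pi * 0.5 * np.arange(300) / 30) \
    + 0.1 * rng.standard_normal(300)
features = gabor_features(au_signal, freqs=[0.01, 0.05, 0.1], sigmas=[4, 8, 16])
print(features.shape)  # one energy value per (frequency, sigma) pair: (9,)
```

    An overcomplete representation simply means many overlapping (frequency, bandwidth) pairs per channel; feature selection then prunes the redundant responses before classification.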

    Video based detection of driver fatigue

    This thesis addresses the problem of drowsy driver detection using computer vision techniques applied to the human face. Specifically, we explore the possibility of discriminating drowsy from alert video segments using facial expressions automatically extracted from video. Several approaches were previously proposed for the detection and prediction of drowsiness. There has recently been increasing interest in computer vision approaches, which are promising because of their non-invasive nature. Previous vision-based studies detect driver drowsiness primarily by making prior assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to explore, understand and exploit actual human behavior during drowsiness episodes. We have collected two datasets including facial and head movement measures. Head motion is collected through an accelerometer for the first dataset (UYAN-1) and an automatic video-based head pose detector for the second dataset (UYAN-2). We use outputs of automatic classifiers of the facial action coding system (FACS) for detecting drowsiness. These facial actions include blinking and yawn motions, as well as a number of other facial movements. These measures are passed to a learning-based classifier based on multinomial logistic regression. In UYAN-1 the system is able to predict sleep and crash episodes during a driving computer game with an area of 0.98 under the receiver operating characteristic curve in across-subject tests. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis reveals new information about human facial behavior during drowsy driving. In UYAN-2 fine discrimination of drowsy states is also explored on a separate dataset. The degree to which individual facial action units can predict the difference between moderately drowsy and acutely drowsy is studied.
    Signal processing techniques and machine learning methods are employed to build a person-independent acute drowsiness detection system. Temporal dynamics are captured using a bank of temporal filters. Individual action unit predictive power is explored with an MLR-based classifier. The five best-performing action units were determined for a person-independent system. The system obtains an area of 0.96 under the receiver operating characteristic curve on a more challenging dataset with the combined features of the five best-performing action units. Moreover, the analysis reveals new markers for different levels of drowsiness.
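    The classifier and evaluation metric named in the abstract, logistic regression scored by area under the ROC curve, can be sketched compactly. The sketch below uses a binary case and synthetic data as stand-ins for the thesis's multinomial model and action-unit features; all names and numbers are illustrative assumptions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=500):
    """Binary logistic regression by gradient descent; a minimal stand-in
    for the multinomial logistic (MLR) classifier used in the thesis."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(drowsy)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def auc(scores, y):
    """Area under the ROC curve: the probability that a random positive
    outscores a random negative (ties count half)."""
    pos, neg = scores[y == 1], scores[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Synthetic "alert" vs "drowsy" segments over 5 action-unit channels,
# with the drowsy class shifted upward on two channels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)),
               rng.normal(0, 1, (100, 5)) + [1.5, 0, 1.0, 0, 0]])
y = np.array([0] * 100 + [1] * 100)

w, b = fit_logistic(X, y)
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(round(auc(scores, y), 3))  # well-separated classes give a high AUC
```

    The AUC is threshold-free, which is why it suits across-subject tests where the best decision threshold varies from driver to driver.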

    Extraction and selection of muscle based features for facial expression recognition

    In this study we propose a new set of muscle-activity-based features for facial expression recognition. We extract muscular activities by observing the displacements of facial feature points in an expression video. The facial feature points are initialized on muscular regions of influence in the first frame of the video. These points are tracked through optical flow in sequential frames. Displacements of feature points on the image plane are used to estimate the 3D orientation of a head model and the relative displacements of its vertices. We model the human skin as a linear system of equations. The estimated deformation of the wireframe model produces an over-determined system of equations that can be solved under the constraint of the facial anatomy to obtain muscle activation levels. We apply sequential forward feature selection to choose the most descriptive set of muscles for recognition of basic facial expressions.
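    The central inverse problem here, recovering muscle activation levels from an over-determined linear system of vertex displacements, can be sketched in a few lines. The influence matrix, dimensions, and the clipping used as a stand-in for the anatomical constraint are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical influence matrix A: each column maps one muscle's unit
# activation to displacements of the wireframe vertices (rows). With more
# displacement equations than muscles, the system is over-determined.
n_displacements, n_muscles = 60, 8
A = rng.normal(0, 1, (n_displacements, n_muscles))

true_activation = np.abs(rng.normal(0, 1, n_muscles))  # anatomy: levels >= 0
d = A @ true_activation + 0.01 * rng.standard_normal(n_displacements)

# Least-squares solve of A x = d; clipping negatives to zero is a crude
# stand-in for the anatomical non-negativity constraint in the abstract.
est, *_ = np.linalg.lstsq(A, d, rcond=None)
est = np.clip(est, 0.0, None)
print(est.shape)  # one recovered activation level per muscle
```

    With low measurement noise, the least-squares estimate recovers the activations closely; a proper constrained solver (e.g. non-negative least squares) would enforce the anatomy during the solve rather than after it.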

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus highly relevant to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    Automatic recognition of micro-expressions using local binary patterns on three orthogonal planes and extreme learning machine

    A dissertation submitted in fulfilment of the requirements for the degree of Master of Science to the Faculty of Science, University of the Witwatersrand, Johannesburg, September 2017. Recognition of micro-expressions is a growing research area as a result of its application in revealing the subtle intentions of humans, especially under high-stakes situations. Owing to micro-expressions' short duration and low intensity, efforts to train humans in their recognition have resulted in very low performance. The use of temporal methods (on image sequences) and static methods (on apex frames) was explored for feature extraction. Supervised machine learning algorithms, including Support Vector Machines (SVM) and Extreme Learning Machines (ELM), were used for classification. Extreme Learning Machines, which learn quickly, were compared with SVMs, which acted as the baseline model. For experimentation, samples from the Chinese Academy of Sciences Micro-expression (CASME II) database were used. Results revealed that the use of temporal features outperformed the use of static features for micro-expression recognition with both SVM and ELM models. Static and temporal features gave an average testing accuracy of 94.08% and 97.57% respectively for five classes of micro-expressions using the ELM model. A significance test carried out on these two average means suggested that temporal features outperformed static features using ELM. Comparison of SVM and ELM learning times also revealed that ELM learns faster than SVM. For the five selected micro-expression classes, an average training time of 0.3405 seconds was achieved for SVM, while an average training time of 0.0409 seconds was achieved for ELM. Hence we can suggest that micro-expressions can be recognised successfully by using temporal features and a machine learning algorithm that has a fast learning speed.
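    The speed advantage the abstract reports for ELM comes from its training procedure: hidden-layer weights are random and fixed, so fitting reduces to one pseudoinverse solve for the readout. A minimal sketch follows; the hidden-layer size and the synthetic stand-in for LBP-TOP feature vectors are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Extreme Learning Machine: random hidden weights are fixed and only
    the linear readout is fit, via the Moore-Penrose pseudoinverse. Training
    is a single linear solve, which is why ELM fits faster than SVM."""

    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        self.W = self.rng.normal(0, 1, (X.shape[1], self.n_hidden))
        self.b = self.rng.normal(0, 1, self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random nonlinear feature map
        self.beta = np.linalg.pinv(H) @ Y  # one-shot readout fit
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

rng = np.random.default_rng(1)
# Synthetic stand-in for feature vectors of three micro-expression classes
X = np.vstack([rng.normal(m, 0.5, (50, 20)) for m in (-1, 0, 1)])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]  # one-hot targets for the linear readout

elm = ELM().fit(X, Y)
print((elm.predict(X) == y).mean())  # training accuracy on separable classes
```

    By contrast, SVM training solves an iterative optimization over the training set, which accounts for the 0.3405 s vs 0.0409 s gap reported above.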

    Sensor Technologies to Manage the Physiological Traits of Chronic Pain: A Review

    Non-oncologic chronic pain is a common high-morbidity impairment worldwide, acknowledged as a condition with significant impact on quality of life. Pain intensity is largely perceived as a subjective experience, which makes its objective measurement challenging. However, the physiological traces of pain make it possible to correlate pain with vital signs, such as heart rate variability, skin conductance, and electromyogram, or with health performance metrics derived from daily activity monitoring or facial expressions, which can be acquired with diverse sensor technologies and multisensory approaches. As the assessment and management of pain are essential issues for a wide range of clinical disorders and treatments, this paper reviews different sensor-based approaches applied to the objective evaluation of non-oncological chronic pain. The space of available technologies and resources aimed at pain assessment represents a diversified set of alternatives that can be exploited to address the multidimensional nature of pain. Funding: Ministerio de Economía y Competitividad (Instituto de Salud Carlos III) PI15/00306; Junta de Andalucía PIN-0394-2017; Unión Europea "FRAIL

    Emotional expressions reconsidered: challenges to inferring emotion from human facial movements

    It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another.
    We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.

    Studies of cerebral palsy in the childhood population of Edinburgh

    This thesis is the result of an investigation of the prevalence, clinical findings and aetiology of cerebral palsy in the childhood population of Edinburgh, carried out during 1952 and 1953 whilst the author held a George Guthrie Research Fellowship from the University of Edinburgh. The aims of the investigation were, firstly, to establish the prevalence of cerebral palsy in the childhood population of the city; secondly, to study the clinical features of cerebral palsy and their effects on the patient's way of life; and thirdly, to define some of the important aetiological factors in cerebral palsy in a representative group of children in the community. During the investigation it became increasingly apparent that the currently defined categories included in "Cerebral Palsy" did not allow for an accurate classification of cases by neurological findings. Eventually a new classification on the basis of neurological syndromes was evolved. This classification will be described and compared to previous classifications in Section 3. It was possible to establish figures for the prevalence of cerebral palsy in the childhood population of Edinburgh, though a complete ascertainment of all patients was not made. The clinical features of cerebral palsy in the childhood community were studied and are described in Section 4. During the survey it became increasingly apparent that "Cerebral Palsy" was no clinical entity. Rather, it comprised a number of neurological disorders in which the only common factor appeared to be motor dysfunction due to an abnormality of the brain present in early life. The clinical features varied widely from category to category. The ways in which patients were handicapped, and the extent to which they were prevented from taking part in everyday activities, were very different. A detailed study was made of the clinical findings and handicaps of patients, and they were compared to those described by previous authors.
    Thus some idea of the importance of cerebral palsy in the community was obtained (Section 5). Aetiological factors which were important in one form of cerebral palsy were found to be much less important in others. Many different "causes" of cerebral palsy were found, varying from developmental malformation to traumatic head injury, and from abnormal parturition to the complications of infectious diseases in early life. The multiplicity of aetiological factors in single categories, and even single patients, was impressive. For example, within the category of "Ataxic Diplegia" patients were found whose disorder appeared to be genetically determined, and patients who were suffering from the effects of birth injury, parainfectious encephalomyelitis or meningitis. To take account of the multiplicity of aetiological factors it was necessary to study the heredity and social backgrounds of patients as well as their individual birth and later histories. The current concept of cerebral palsy as being due predominantly to the effects of birth injury is a misleading simplification of the true position. In the same way as there are many different causes of stillbirth and infant death, so there are many causes of cerebral palsy in children who survive. The later sections of this thesis are concerned with demonstrating that the aetiological factors in cerebral palsy are as complex as those involved in infant mortality. Social, genetic, obstetric and many unknown factors play a part. An attempt has been made to define the importance of some of them in Sections 5 and 6.

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected on a wearable, small form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures including the brain, blood vessels, and facial muscles which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.
