1,127 research outputs found

    Automatic detection of a driver’s complex mental states

    Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of a set of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present our real-time annotation tool used for labelling the dataset via an emotion perception experiment and discuss the challenges faced in obtaining the ground truth labels. Finally, we present a methodology for automatic classification of drivers’ mental states. We compare SVM models trained on our dataset with an existing nearest neighbour model pre-trained on a posed dataset, using facial Action Units as input features. We demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community
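    As an illustration only (not the authors’ released code or dataset), the sketch below shows how per-frame facial Action Unit intensities might be summarised over temporal windows and fed to an SVM for the four mental states; the windowing scheme, the temporal statistics and the hyperparameters are assumptions, and each sequence is assumed to contain at least one full window.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    STATES = ["Happy", "Bothered", "Concentrated", "Confused"]

    def window_features(au_frames, win=30):
        # Summarise each non-overlapping window of AU intensities with simple temporal
        # statistics: per-AU mean, standard deviation and mean absolute frame-to-frame change.
        feats = []
        for start in range(0, len(au_frames) - win + 1, win):
            w = au_frames[start:start + win]
            feats.append(np.hstack([w.mean(0), w.std(0), np.abs(np.diff(w, axis=0)).mean(0)]))
        return np.array(feats)

    def train_temporal_svm(sequences, labels, win=30):
        # sequences: arrays shaped (n_frames, n_AUs); labels: one mental state per sequence.
        X, y = [], []
        for seq, label in zip(sequences, labels):
            w = window_features(np.asarray(seq), win)
            X.append(w)
            y.extend([label] * len(w))
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        return clf.fit(np.vstack(X), np.array(y))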

    Speaker-independent emotion recognition exploiting a psychologically-inspired binary cascade classification schema

    In this paper, a psychologically-inspired binary cascade classification schema is proposed for speech emotion recognition. Performance is enhanced because commonly confused pairs of emotions are distinguishable from one another. Extracted features are related to statistics of pitch, formants, and energy contours, as well as spectrum, cepstrum, perceptual and temporal features, autocorrelation, MPEG-7 descriptors, Fujisaki’s model parameters, voice quality, jitter, and shimmer. Selected features are fed as input to a k-nearest neighbor classifier and to support vector machines. Two kernels are tested for the latter: linear and Gaussian radial basis function. The recently proposed speaker-independent experimental protocol is tested on the Berlin emotional speech database for each gender separately. The best emotion recognition accuracy, achieved by support vector machines with a linear kernel, equals 87.7%, outperforming state-of-the-art approaches. Statistical analysis is first carried out with respect to the classifiers’ error rates and then to evaluate the information expressed by the classifiers’ confusion matrices. © Springer Science+Business Media, LLC 2011
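    For illustration, the hedged sketch below implements a generic binary cascade of linear SVMs: each node separates two groups of emotions, so commonly confused pairs are only decided by a dedicated classifier deeper in the tree. The nested grouping shown is a hypothetical example, not the paper’s psychologically-motivated schema, and the acoustic features are assumed to be precomputed.

    import numpy as np
    from sklearn.svm import SVC

    def flatten(node):
        # Collect every emotion label under a node of the binary grouping tree.
        return [node] if isinstance(node, str) else [e for child in node for e in flatten(child)]

    def build_cascade(X, y, groups):
        # groups is a nested pair of groups; a leaf is a single emotion label (string).
        if isinstance(groups, str):
            return groups
        left, right = groups
        keep = np.isin(y, flatten(left) + flatten(right))
        target = np.isin(y[keep], flatten(right)).astype(int)   # 0 = left group, 1 = right group
        clf = SVC(kernel="linear").fit(X[keep], target)
        return clf, build_cascade(X, y, left), build_cascade(X, y, right)

    def cascade_predict(node, x):
        # Descend the cascade until a single emotion label remains.
        if isinstance(node, str):
            return node
        clf, left, right = node
        return cascade_predict(right if clf.predict(x.reshape(1, -1))[0] else left, x)

    # Hypothetical grouping, loosely motivated by arousal then valence (illustration only):
    GROUPS = (("sadness", "boredom"), ("anger", "happiness"))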

    Subjective and objective measures

    One of the greatest challenges in the study of emotions and emotional states is their measurement. The techniques used to measure emotions depend essentially on the authors’ definition of the concept of emotion. Currently, two types of measures are used: subjective and objective. While subjective measures focus on assessing the conscious recognition of one’s own emotions, objective measures allow researchers to quantify and assess both conscious and unconscious emotional processes. Thus, when the aim is to evaluate an individual’s emotional experience of a given event from their subjective point of view, subjective measures such as self-report should be used. When the aim is instead to evaluate the emotional experience at a more unconscious level of processing, such as the physiological response, objective measures should be used. There are no better or worse measures, only measures that give access to the same phenomenon from different points of view. The chapter’s main objective is to survey the measures for evaluating emotions and emotional states that are most relevant in the current scientific landscape.

    Is gender encoded in the smile? A computational framework for the analysis of the smile driven dynamic face for gender recognition

    Automatic gender classification has become a topic of great interest to the visual computing research community in recent times. This is due to the fact that computer-based automatic gender recognition has multiple applications including, but not limited to, face perception, age, ethnicity and identity analysis, video surveillance and smart human-computer interaction. In this paper, we discuss a machine learning approach for efficient identification of gender purely from the dynamics of a person’s smile. Thus, we show that the complex dynamics of a smile on someone’s face bear a strong relation to the person’s gender. To do this, we first formulate a computational framework that captures the dynamic characteristics of a smile. Our dynamic framework measures changes in the face during a smile using a set of spatial features on the overall face, the area of the mouth, the geometric flow around prominent parts of the face and a set of intrinsic features based on the dynamic geometry of the face. This enables us to extract 210 distinct dynamic smile parameters, which form the contributing features for machine learning. For machine classification, we have utilised both the Support Vector Machine and the k-Nearest Neighbour algorithms. To verify the accuracy of our approach, we have tested our algorithms on two databases, namely the CK+ and the MUG, consisting of a total of 109 subjects. As a result, using the k-NN algorithm along with tenfold cross-validation, for example, we achieve a gender classification accuracy of over 85%. Hence, through the methodology we present here, we establish proof of the existence of strong indicators of gender dimorphism purely in the dynamics of a person’s smile
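    The evaluation named in the abstract (k-NN with tenfold cross-validation) could be reproduced roughly as in the sketch below, assuming the 210 dynamic smile parameters have already been extracted per subject; the value of k, the feature scaling and the placeholder data are assumptions, not the authors’ pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def evaluate_knn(smile_features, gender_labels, k=5, folds=10):
        # Tenfold cross-validated accuracy of k-NN on the dynamic smile parameters.
        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
        scores = cross_val_score(model, smile_features, gender_labels, cv=folds)
        return scores.mean(), scores.std()

    # Placeholder data only: 109 subjects x 210 features, random binary labels.
    X = np.random.rand(109, 210)
    y = np.random.randint(0, 2, size=109)
    mean_acc, std_acc = evaluate_knn(X, y)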

    Analyzing musculoskeletal neck pain, measured as present pain and periods of pain, with three different regression models: a cohort study

    Background: In the literature there are discussions on the choice of outcome and the need for more longitudinal studies of musculoskeletal disorders. The general aim of this longitudinal study was to analyze musculoskeletal neck pain in a group of young adults. Specific aims were to determine whether psychosocial factors, computer use, high work/study demands, and lifestyle are long-term or short-term factors for musculoskeletal neck pain, and whether these factors are important for developing or ongoing musculoskeletal neck pain. Methods: Three regression models were used to analyze the different outcomes. Pain at present was analyzed with a marginal logistic model, number of years with pain with a Poisson regression model, and developing and ongoing pain with a logistic model. Results are presented as odds ratios and proportion ratios (logistic models) and rate ratios (Poisson model). The material consisted of web-based questionnaires answered by 1204 Swedish university students from a prospective cohort recruited in 2002. Results: Perceived stress was a risk factor for pain at present (PR = 1.6), for developing pain (PR = 1.7) and for number of years with pain (RR = 1.3). High work/study demands were associated with pain at present (PR = 1.6), and with number of years with pain when the demands negatively affect home life (RR = 1.3). Computer use pattern (number of times per week with a computer session ≥ 4 h without a break) was a risk factor for developing pain (PR = 1.7), and was also associated with pain at present (PR = 1.4) and number of years with pain (RR = 1.2). Among lifestyle factors, smoking (PR = 1.8) was associated with pain at present. The difference between men and women in the prevalence of musculoskeletal pain was confirmed in this study; it was smallest for the outcome ongoing pain (PR = 1.4) compared with pain at present (PR = 2.4) and developing pain (PR = 2.5). Conclusion: By using different regression models, different aspects of the neck pain pattern could be addressed and the impact of the risk factors on the pain pattern identified. Short-term risk factors were perceived stress, high work/study demands and computer use pattern (break pattern); these were also long-term risk factors. For developing pain, perceived stress and computer use pattern were risk factors.
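    For readers unfamiliar with the three model families, the sketch below shows how they could be fitted with statsmodels; the variable names and synthetic data are illustrative only, and a standard logistic fit is shown even though the paper reports proportion ratios rather than odds ratios for some outcomes.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per participant with exposures and the three outcomes.
    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "id": np.arange(n),
        "stress": rng.integers(0, 2, n),
        "demands": rng.integers(0, 2, n),
        "computer_sessions": rng.integers(0, 7, n),
        "pain_now": rng.integers(0, 2, n),
        "years_with_pain": rng.poisson(1.0, n),
        "developed_pain": rng.integers(0, 2, n),
    })
    predictors = "stress + demands + computer_sessions"

    # 1) Pain at present: marginal (population-averaged) logistic model via GEE.
    gee = smf.gee(f"pain_now ~ {predictors}", groups="id", data=df,
                  family=sm.families.Binomial()).fit()

    # 2) Number of years with pain: Poisson regression; exp(coef) gives rate ratios.
    poisson = smf.glm(f"years_with_pain ~ {predictors}", data=df,
                      family=sm.families.Poisson()).fit()

    # 3) Developing / ongoing pain: ordinary logistic regression.
    logit = smf.logit(f"developed_pain ~ {predictors}", data=df).fit()

    for name, res in [("GEE", gee), ("Poisson", poisson), ("Logit", logit)]:
        print(name, np.exp(res.params))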

    Emotional cues enhance the attentional effects on spatial and temporal resolution

    In the present study, we demonstrated that the emotional significance of a spatial cue enhances the effect of covert attention on spatial and temporal resolution (i.e., our ability to discriminate small spatial details and fast temporal flicker). Our results indicated that fearful face cues, as compared with neutral face cues, enhanced the attentional benefits in spatial resolution but also enhanced the attentional deficits in temporal resolution. Furthermore, we observed that the overall magnitudes of individuals’ attentional effects correlated strongly with the magnitude of the emotion × attention interaction effect. Combined, these findings provide strong support for the idea that emotion enhances the strength of a cue’s attentional response

    A fresh look at the evolution and diversification of photochemical reaction centers

    In this review, I reexamine the origin and diversification of photochemical reaction centers based on the known phylogenetic relations of the core subunits, and with the aid of sequence and structural alignments. I show, for example, that the protein folds at the C-terminus of the D1 and D2 subunits of Photosystem II, which are essential for the coordination of the water-oxidizing complex, were already in place in the most ancestral Type II reaction center subunit. I then evaluate the evolution of reaction centers in the context of the rise and expansion of the different groups of bacteria based on recent large-scale phylogenetic analyses. I find that the Heliobacteriaceae family of Firmicutes appears to be the earliest branching of the known groups of phototrophic bacteria; however, the origin of photochemical reaction centers and chlorophyll synthesis cannot be placed in this group. Moreover, it becomes evident that the Acidobacteria and the Proteobacteria shared a more recent common phototrophic ancestor, and this is also likely for the Chloroflexi and the Cyanobacteria. Finally, I argue that the discrepancies among the phylogenies of the reaction center proteins, chlorophyll synthesis enzymes, and the species tree of bacteria are best explained if both types of photochemical reaction centers evolved before the diversification of the known phyla of phototrophic bacteria. The primordial phototrophic ancestor must have had both Type I and Type II reaction centers

    Mood Induction in Depressive Patients: A Comparative Multidimensional Approach

    Anhedonia, reduced positive affect and enhanced negative affect are integral characteristics of major depressive disorder (MDD). Emotion dysregulation, e.g. in terms of different emotion processing deficits, has consistently been reported. The aim of the present study was to investigate mood changes in depressive patients using a multidimensional approach for the measurement of emotional reactivity to mood induction procedures. Experimentally, mood states can be altered using various mood induction procedures. The present study aimed at validating two different positive mood induction procedures in patients with MDD and investigating which procedure is more effective and applicable in detecting dysfunctions in MDD. The first procedure relied on the presentation of happy vs. neutral faces, while the second used funny vs. neutral cartoons. Emotional reactivity was assessed in 16 depressed and 16 healthy subjects using self-report measures, measurements of electrodermal activity and standardized analyses of facial responses. Positive mood induction was successful in both procedures according to subjective ratings in patients and controls. In the cartoon condition, however, a discrepancy between reduced facial activity and concurrently enhanced autonomic reactivity was found in patients. The multidimensional assessment technique provided a more comprehensive estimate of dysfunctions in emotional reactivity in MDD than self-report measures alone, and this was revealed especially by the mood induction procedure relying on cartoons. The divergent facial and autonomic responses in the presence of unaffected subjective reactivity suggest an underlying deficit in the patients' ability to express the felt arousal to funny cartoons. Our results encourage the application of both procedures in functional imaging studies investigating the neural substrates of emotion dysregulation in MDD patients. Mood induction via cartoons appears to be superior to mood induction via faces and autobiographical material in uncovering specific emotional dysfunctions in MDD

    Emotional design and human-robot interaction

    Recent years have shown an increase in the importance of emotions applied to the design field: Emotional Design. In this sense, emotional design aims to elicit certain emotions (e.g., pleasure) or prevent others (e.g., displeasure) during human-product interaction. That is, emotional design regulates the emotional interaction between the individual and the product (e.g., a robot). Robot design has been a growing area in which robots interact directly with humans and emotions are essential to the interaction. Therefore, this paper aims, through a non-systematic literature review, to explore the application of emotional design, particularly to human-robot interaction. Robot design features (e.g., appearance, expression of emotions and spatial distance) that affect emotional design are introduced. The chapter ends with a discussion and a conclusion.