
    Extended Calculations of Spectroscopic Data: Energy Levels, Lifetimes, and Transition Rates for O-like Ions from Cr XVII to Zn XXIII

    Employing two state-of-the-art methods, multiconfiguration Dirac--Hartree--Fock and second-order many-body perturbation theory, the excitation energies and lifetimes for the lowest 200 states of the 2s^2 2p^4, 2s 2p^5, 2p^6, 2s^2 2p^3 3s, 2s^2 2p^3 3p, 2s^2 2p^3 3d, 2s 2p^4 3s, 2s 2p^4 3p, and 2s 2p^4 3d configurations, together with multipole (electric dipole (E1), magnetic dipole (M1), and electric quadrupole (E2)) transition rates, line strengths, and oscillator strengths among these states, are calculated for each O-like ion from Cr XVII to Zn XXIII. Our two data sets are compared with the NIST and CHIANTI compiled values and with previous calculations. The data are accurate enough for the identification and deblending of new emission lines from the Sun and other astrophysical sources. The amount of high-accuracy data is significantly increased for the n = 3 states of several O-like ions of astrophysical interest, for which experimental data are very scarce.
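    The relation between the two tabulated quantities is standard: an excited state's lifetime is the reciprocal of the total decay rate out of that state, summed over all E1, M1, and E2 channels. A minimal sketch with invented placeholder rates (not values from the paper):

    ```python
    # Sketch: converting multipole transition rates into an excited-state lifetime.
    # The rates below are hypothetical placeholder values in s^-1, not paper data.

    def lifetime(transition_rates):
        """Lifetime tau = 1 / (sum of all decay rates A out of a state), in seconds."""
        total_rate = sum(transition_rates)
        return 1.0 / total_rate

    # Hypothetical E1, M1, and E2 decay channels out of a single excited level:
    rates = [3.2e10, 1.5e6, 4.0e4]  # s^-1; E1 typically dominates
    tau = lifetime(rates)
    ```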

    Changes in extreme sea-levels in the Baltic Sea

    In the context of climate change, changes in extreme sea-levels, rather than changes in the mean, are of particular interest from a coastal protection point of view. In this work, extreme sea-levels in the Baltic Sea are investigated based on daily tide gauge records for the period 1916–2005 using the annual block maxima approach. Extreme events are analysed based on the generalised extreme value (GEV) distribution, considering both stationary and time-varying models. The likelihood ratio test is applied to select between stationary and non-stationary models for the maxima, and return values are estimated from the final model. As an independent and complementary approach, quantile regression is applied for comparison with the results from the extreme value approach. The rates of change in the uppermost quantiles are in general consistent and are most pronounced for the northernmost stations.
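    The stationary part of this block-maxima workflow can be sketched with SciPy's GEV implementation. The annual maxima below are synthetic, not the Baltic tide-gauge data, and all parameter values are illustrative:

    ```python
    # Minimal sketch of the annual-block-maxima / GEV workflow described above,
    # on synthetic data (not the Baltic tide-gauge records).
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(0)
    annual_maxima = rng.gumbel(loc=100.0, scale=15.0, size=90)  # 90 "years", in cm

    # Fit a stationary GEV to the block maxima by maximum likelihood
    # (note scipy's shape parameter c corresponds to -xi in the usual convention).
    c, loc, scale = genextreme.fit(annual_maxima)

    # 100-year return level: the level exceeded with probability 1/100 in any year.
    return_level_100 = genextreme.isf(1.0 / 100.0, c, loc, scale)
    ```

    Comparing stationary and time-varying fits, as the abstract describes, would then amount to a likelihood ratio test between this model and one whose location parameter is a linear function of time.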

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can likewise be recognized from a speaker's voice, regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language (an "in-group advantage"). Our findings suggest that the ability to understand vocally expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.
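    The "above chance" claim in such designs is typically checked with a one-sided binomial test against the guessing rate. A sketch with invented counts (the study's actual trial numbers are not given here):

    ```python
    # Sketch of an above-chance check: one-sided binomial test on recognition
    # accuracy against the five-way guessing rate. Counts are invented examples.
    from scipy.stats import binomtest

    n_trials = 60        # hypothetical pseudo-utterance trials in one condition
    n_correct = 25       # hypothetical correct five-way emotion judgements
    chance = 1.0 / 5.0   # five basic emotion categories

    result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
    above_chance = result.pvalue < 0.05
    ```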

    Morphological variants of silent bared-teeth displays have different social interaction outcomes in crested macaques (Macaca nigra)

    Objectives: While it has been demonstrated that even subtle variation in human facial expressions can lead to significant changes in the meaning and function of expressions, relatively few studies have examined primate facial expressions using similarly objective and rigorous analysis. Construction of primate facial expression repertoires may, therefore, be oversimplified, with expressions often arbitrarily pooled and/or split into subjective pigeonholes. Our objective is to assess whether subtle variation in primate facial expressions is linked to variation in function, and hence to inform future attempts to quantify the complexity of facial communication. Materials and Methods: We used the Macaque Facial Action Coding System (MaqFACS), an anatomically based and hence more objective tool, to quantify “silent bared‐teeth” (SBT) expressions produced by wild crested macaques engaging in spontaneous behavior, and used discriminant analysis and bootstrapping to look for morphological differences between SBT produced in four contexts defined by the outcome of the interaction: Affiliation, Copulation, Play, and Submission. Results: We found that SBT produced in these contexts could be distinguished at significantly above‐chance rates, indicating that the expressions produced in these four contexts differ morphologically. We identified the specific facial movements typically used in each context, and found that the variability and intensity of facial movements also varied between contexts. Discussion: These results indicate that nonhuman primate facial expressions share the human characteristic of exhibiting meaningful subtle differences. The complexity of facial communication may not be accurately represented simply by building repertoires of distinct expressions, so further work should attempt to take this subtle variability into account.
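    The discriminant-analysis step can be sketched as follows; everything here is synthetic (invented action-unit features with an artificial class separation), standing in only for the shape of the analysis, not the MaqFACS data:

    ```python
    # Hedged sketch of discriminant analysis over facial action unit (AU) feature
    # vectors, one row per expression, labelled by interaction context.
    # All data are synthetic with an artificial mean shift per context.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    contexts = ["affiliation", "copulation", "play", "submission"]

    # 40 synthetic expressions per context, 12 hypothetical AU intensity features.
    X = np.vstack([rng.normal(loc=i, scale=1.0, size=(40, 12)) for i in range(4)])
    y = np.repeat(contexts, 40)

    clf = LinearDiscriminantAnalysis().fit(X, y)
    accuracy = clf.score(X, y)  # compare against the 0.25 four-way chance rate
    ```

    In the study's actual design, resampling (bootstrapping) would be wrapped around this fit to test whether classification accuracy exceeds chance reliably rather than on a single split.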

    Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action

    We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making, and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode relevant information in the form of self-sustained activation patterns, which are triggered by input from connected populations and evolve continuously in time. The architecture implements a dynamic and flexible context-dependent mapping from observed hand and facial actions of the human onto adequate complementary behaviors of the robot that take into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture was validated in multiple scenarios in which an anthropomorphic robot and a human operator assemble a toy object from its components. The scenarios focus on the robot's capacity to understand the human's actions and emotional states, detect errors, and adapt its behavior accordingly by adjusting its decisions and movements during the execution of the task. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was made possible in part by research grants from the Portuguese Foundation for Science and Technology (grant numbers SFRH/BD/48527/2008, SFRH/BPD/71874/2010, SFRH/BD/81334/2011), and by funding from the FP6-IST2 EU-IP project JAST (project number 003747) and the FP7 Marie Curie ITN Neural Engineering Transformative Technologies NETT (project number 289146).
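    A single building block of such an architecture is a one-dimensional Amari-type dynamic neural field: a population activation u(x, t) driven by lateral excitation/inhibition, a resting level, and external input. The toy simulation below is only a sketch of one field (not the paper's coupled architecture), with all parameters chosen for illustration:

    ```python
    # Toy one-dimensional dynamic neural field (Amari-type), the building block
    # of the coupled-field architecture described above. Parameters illustrative.
    import numpy as np

    n, dt, tau, h = 200, 0.05, 1.0, -2.0   # grid size, time step, time scale, resting level
    x = np.linspace(0.0, 20.0, n)
    dx = x[1] - x[0]

    # Lateral interaction kernel: short-range excitation, longer-range inhibition.
    dist = np.abs(x[:, None] - x[None, :])
    W = 2.0 * np.exp(-dist**2 / 2.0) - 0.5 * np.exp(-dist**2 / 32.0)

    def f(u):
        """Sigmoidal firing-rate function."""
        return 1.0 / (1.0 + np.exp(-4.0 * u))

    u = np.full(n, h)
    stimulus = 4.0 * np.exp(-(x - 10.0)**2 / 0.5)  # localized input at x = 10

    # Euler integration of tau * du/dt = -u + integral(W * f(u)) + input + h.
    for step in range(200):
        u = u + dt * (-u + (W @ f(u)) * dx + stimulus + h) / tau

    peak_location = x[np.argmax(u)]  # an activation peak forms at the input site
    ```

    With suitable kernel and threshold parameters, such a peak can remain self-sustained after the input is removed, which is what lets coupled fields act as working memory and decision variables in the architecture.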

    Lie experts' beliefs about non-verbal indicators of deception

    Beliefs about behavioral clues to deception were investigated in 212 people, comprising prisoners, police detectives, patrol police officers, prison guards, customs officers, and college students. Previous studies, mainly conducted with college students as subjects, showed that people hold some incorrect beliefs about behavioral clues to deception. It was hypothesized that prisoners would have the most accurate notions about clues to deception, because they receive the most adequate feedback about successful deception strategies. The results supported this hypothesis.

    Facial Expression Restoration Based on Improved Graph Convolutional Networks

    Facial expression analysis in the wild is challenging when the facial image has low resolution or partial occlusion. Considering the correlations among different facial local regions under different facial expressions, this paper proposes a novel facial expression restoration method based on a generative adversarial network that integrates an improved graph convolutional network (IGCN) and a region relation modeling block (RRMB). Unlike conventional graph convolutional networks, which take vectors as input features, the IGCN can take tensors of face patches as inputs, which better retains the structural information of the face patches. The proposed RRMB is designed to address facial generative tasks, including inpainting and super-resolution with facial action unit detection, which aim to restore the facial expression to match the ground truth. Extensive experiments conducted on the BP4D and DISFA benchmarks demonstrate the effectiveness of the proposed method through quantitative and qualitative evaluations. Comment: Accepted by MMM202
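    The core idea, graph convolution over patch tensors instead of flattened vectors, can be sketched roughly as below. This is not the paper's IGCN; the fully connected adjacency, channel-mixing weights, and sizes are all assumptions for illustration:

    ```python
    # Rough sketch (not the paper's IGCN) of one graph-convolution step whose
    # node features are image patches (tensors) rather than flat vectors:
    # neighbor patches are aggregated via an adjacency matrix, then mixed
    # channel-wise, so each patch's spatial layout is preserved.
    import numpy as np

    rng = np.random.default_rng(0)

    num_patches, C, H, W = 9, 3, 16, 16      # e.g. a 3x3 grid of face patches
    patches = rng.normal(size=(num_patches, C, H, W))

    # Row-normalized adjacency over the patch graph (here fully connected,
    # including self-loops, as a placeholder for learned region relations).
    A = np.ones((num_patches, num_patches))
    A /= A.sum(axis=1, keepdims=True)

    # Channel-mixing weights play the role of the GCN weight matrix.
    Wc = rng.normal(scale=0.1, size=(C, C))

    # One propagation step: aggregate neighbors, mix channels, apply ReLU.
    agg = np.einsum("ij,jchw->ichw", A, patches)   # spatial structure intact
    out = np.maximum(np.einsum("ichw,cd->idhw", agg, Wc), 0.0)
    ```

    The contrast with a conventional GCN is that here the `H x W` spatial axes pass through the layer untouched; only the graph and channel axes are transformed.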

    Automatic detection of a driver’s complex mental states

    Automatic classification of drivers’ mental states is an important yet relatively unexplored topic. In this paper, we define a taxonomy of a set of complex mental states that are relevant to driving, namely: Happy, Bothered, Concentrated, and Confused. We present our video segmentation and annotation methodology for a spontaneous dataset of natural driving videos from 10 different drivers. We also present the real-time annotation tool used to label the dataset via an emotion perception experiment and discuss the challenges faced in obtaining the ground-truth labels. Finally, we present a methodology for the automatic classification of drivers’ mental states. We compare SVM models trained on our dataset with an existing nearest-neighbour model pre-trained on a posed dataset, using facial Action Units as input features. We demonstrate that our temporal SVM approach yields better results. The dataset’s extracted features and validated emotion labels, together with the annotation tool, will be made available to the research community.
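    The classification step described above, an SVM over Action Unit features with one label per driver state, can be sketched as follows. The features, per-state separations, and feature count are synthetic placeholders, not the paper's data:

    ```python
    # Illustrative sketch of the classification step: an SVM over facial Action
    # Unit features for the four driver states named above. Data are synthetic.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    states = ["happy", "bothered", "concentrated", "confused"]

    # 50 synthetic clips per state, each summarized by 17 hypothetical AU
    # statistics, with an artificial mean shift so the classes are separable.
    X = np.vstack([rng.normal(loc=i * 0.8, size=(50, 17)) for i in range(4)])
    y = np.repeat(states, 50)

    clf = SVC(kernel="rbf", C=1.0)
    scores = cross_val_score(clf, X, y, cv=5)     # stratified 5-fold CV
    mean_accuracy = scores.mean()                  # vs. 0.25 four-way chance
    ```

    A temporal variant, as the abstract describes, would replace the per-clip summary statistics with features computed over sliding windows of the AU time series.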

    The role of emotion in the learning of trustworthiness from eye-gaze: Evidence from facial electromyography

    When perception of gaze direction is congruent with the location of a target, attention is facilitated and responses are faster than when it is incongruent. Faces that consistently gaze congruently are also judged as more trustworthy than faces that consistently gaze incongruently. However, it is unclear how gaze-cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze-cues mediate trust learning. Gaze-cueing effects were equivalent regardless of whether or not participants showed learning of trust in the expected direction; in contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed that the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting.
