A nod in the wrong direction: Does nonverbal feedback affect eyewitness confidence in interviews?
Eyewitnesses can be influenced by an interviewer's behaviour and report information with inflated confidence as a result. Previous research has shown that positive feedback administered verbally can affect the confidence attributed to testimony, but the effect of non-verbal influence in interviews has been given little attention. This study investigated whether positive or negative non-verbal feedback could affect the confidence witnesses attribute to their responses. Participants witnessed staged CCTV footage of a crime scene and answered 20 questions in a structured interview, during which they were given either positive feedback (a head nod), negative feedback (a head shake) or no feedback. Those presented with positive non-verbal feedback reported inflated confidence compared with those presented with negative non-verbal feedback regardless of accuracy, and this effect was most apparent when participants reported awareness of the feedback. These results provide further insight into the effects of interviewer behaviour in investigative interviews.
The effect of relationship status on communicating emotions through touch
Research into emotional communication to date has largely focused on facial and vocal expressions. In contrast, recent studies by Hertenstein, Keltner, App, Bulleit, and Jaskolka (2006) and Hertenstein, Holmes, McCullough, and Keltner (2009) exploring nonverbal communication of emotion discovered that people could identify anger, disgust, fear, gratitude, happiness, love, sadness and sympathy from the experience of being touched on either the arm or body by a stranger, without seeing the touch. These studies also showed that strangers were unable to communicate the self-focused emotions embarrassment, envy and pride, or the universal emotion surprise. Literature relating to touch indicates that the interpretation of a tactile experience is significantly influenced by the relationship between toucher and recipient (Coan, Schaefer, & Davidson, 2006). The present study compared the ability of romantic couples and strangers to communicate emotions solely via touch. Results showed that both strangers and romantic couples were able to communicate universal and prosocial emotions, whereas only romantic couples were able to communicate the self-focused emotions envy and pride.
Distinguishing Posed and Spontaneous Smiles by Facial Dynamics
A smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained with a large number of face images, HOG features outperform this model on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing facial features improved it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles as either 'spontaneous' or 'posed' using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared with other relevant methods.
Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
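The HOG-plus-SVM baseline that the abstract reports as strongest lends itself to a compact illustration. Below is a minimal sketch, assuming clip-level features are obtained by averaging per-frame HOG descriptors; the stand-in data, frame counts and hyperparameters are placeholders, not the authors' implementation.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def clip_descriptor(frames, size=(64, 64)):
    """Average per-frame HOG descriptors into one clip-level feature."""
    feats = [hog(resize(f, size), orientations=9,
                 pixels_per_cell=(8, 8), cells_per_block=(2, 2))
             for f in frames]
    return np.mean(feats, axis=0)

rng = np.random.default_rng(0)
# Stand-in data: 20 "clips" of 5 grayscale frames each; real input would
# be face crops from the UvA-NEMO videos.
clips = [rng.random((5, 64, 64)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)   # 1 = spontaneous, 0 = posed

X = np.array([clip_descriptor(c) for c in clips])
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, X, labels, cv=3).mean())
```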
Extended transition rates and lifetimes in Al I and Al II from systematic multiconfiguration calculations
Multiconfiguration Dirac-Hartree-Fock (MCDHF) and relativistic configuration interaction (RCI) calculations were performed for 28 and 78 states in neutral and singly ionized aluminium, respectively. In Al I, the configurations of interest are the ground configuration together with singly excited valence configurations; in Al II, besides the ground configuration, a range of singly and doubly excited valence configurations is studied. Valence and core-valence electron correlation effects are systematically accounted for through large configuration state function (CSF) expansions. Calculated excitation energies are found to be in excellent agreement with experimental data from the NIST database. Lifetimes and transition data for radiative electric dipole (E1) transitions are given and compared with results from previous calculations and available measurements, for both Al I and Al II. The computed lifetimes of Al I are in very good agreement with lifetimes measured in high-precision laser spectroscopy experiments. The present calculations provide a substantial amount of updated atomic data, including transition data in the infrared region. This is particularly important since the new generation of telescopes is designed for this region. There is a significant improvement in accuracy, in particular for the more complex system of neutral Al I. The complete tables of transition data are available.
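For reference, the standard relations behind the reported quantities (textbook formulas, not quoted from the paper): the radiative lifetime of an upper level follows from the sum of its E1 transition rates, and each rate is tied to the absorption oscillator strength of the line.

```latex
% Lifetime of upper level u from its E1 decay rates, and the standard
% rate / oscillator-strength relation (SI units):
\tau_u = \frac{1}{\sum_{l} A_{ul}},
\qquad
A_{ul} = \frac{2\pi e^{2}}{\epsilon_0 m_e c\, \lambda_{ul}^{2}}
         \,\frac{g_l}{g_u}\, f_{lu}
```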
Spatio-Temporal Sentiment Hotspot Detection Using Geotagged Photos
We perform spatio-temporal analysis of public sentiment using geotagged photo collections. We develop a deep learning-based classifier that predicts the emotion conveyed by an image, which allows us to associate sentiment with place. We perform spatial hotspot detection and show that different emotions have distinct spatial distributions that match expectations. We also perform temporal analysis using the capture time of the photos. Our spatio-temporal hotspot detection correctly identifies emerging concentrations of specific emotions, and year-by-year analyses of select locations show strong temporal correlations between the predicted emotions and known events.
Comment: To appear in ACM SIGSPATIAL 2016
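The abstract does not specify the hotspot statistic, so as an illustration here is a grid-based Getis-Ord Gi* sketch, a common choice for exactly this kind of "where do high sentiment scores cluster" question; the grid construction and planted cluster are synthetic assumptions.

```python
import numpy as np

def gi_star(grid):
    """Getis-Ord Gi* with binary 3x3 weights on a 2D grid of scores.
    Edge cells simply see zero-padded neighbours (kept simple here)."""
    n = grid.size
    xbar, s = grid.mean(), grid.std()
    padded = np.pad(grid, 1)
    box = sum(padded[i:i + grid.shape[0], j:j + grid.shape[1]]
              for i in range(3) for j in range(3))   # 3x3 box sums
    w = 9.0                                          # sum of binary weights
    return (box - xbar * w) / (s * np.sqrt((n * w - w ** 2) / (n - 1)))

rng = np.random.default_rng(1)
scores = rng.normal(size=(50, 50))       # per-cell mean sentiment score
scores[20:25, 30:35] += 3.0              # planted high-sentiment cluster
z = gi_star(scores)
print(np.unravel_index(z.argmax(), z.shape))   # lands in the planted block
```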
Word contexts enhance the neural representation of individual letters in early visual cortex
Visual context facilitates perception, but how this is neurally implemented remains unclear. One example of contextual facilitation is found in reading, where letters are more easily identified when embedded in a word. Bottom-up models explain this word advantage as a post-perceptual decision bias, while top-down models propose that word contexts enhance perception itself. Here, we arbitrate between these accounts by presenting words and nonwords and probing the representational fidelity of individual letters using functional magnetic resonance imaging. In line with top-down models, we find that word contexts enhance letter representations in early visual cortex. Moreover, we observe increased coupling between letter information in visual cortex and brain activity in key areas of the reading network, suggesting these areas may be the source of the enhancement. Our results provide evidence for top-down representational enhancement in word recognition, demonstrating that word contexts can modulate perceptual processing even in the earliest visual regions.
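Purely as an illustration of the logic: one standard way to quantify "representational fidelity" is cross-validated decoding of stimulus identity from voxel patterns. The sketch below uses synthetic data and a linear classifier; the study's actual analysis pipeline may differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_trials, n_voxels, n_letters = 200, 120, 4
shown = rng.integers(0, n_letters, n_trials)       # which letter on each trial
patterns = rng.normal(size=(n_letters, n_voxels))  # idealised per-letter pattern
X = patterns[shown] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Higher decoding accuracy on word trials than on nonword trials would
# indicate context-enhanced letter representations in the probed region.
acc = cross_val_score(LinearSVC(max_iter=5000), X, shown, cv=5).mean()
print(f"letter decoding accuracy: {acc:.2f}")
```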
Inversion improves the recognition of facial expression in thatcherized images
The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts with other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.
Deception and self-awareness
This paper presents a study conducted for the Shades of Grey EPSRC research project (EP/H02302X/1), which aims to develop a suite of interventions for identifying terrorist activities. The study investigated the body movements demonstrated by participants while waiting to be interviewed, in one of two conditions: preparing to lie or preparing to tell the truth. The effect of self-awareness was also investigated, with half of the participants sitting in front of a full-length mirror during the waiting period. The other half faced a blank wall. A significant interaction was found for the duration of hand/arm movements between the deception and self-awareness conditions (F(1, 76) = 4.335, p < .05). Without a mirror, participants expecting to lie spent less time moving their hands than those expecting to tell the truth; the opposite was seen in the presence of a mirror. This finding indicates a new research area worth further investigation.
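The reported result is a 2x2 interaction from an ANOVA. As a sketch of how that test looks in practice, the snippet below fits the same deception-by-mirror model on synthetic data with the crossed pattern the abstract describes; the variable names, sample values and effect sizes are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "deception": np.repeat(["lie", "truth"], 40),            # 80 participants
    "mirror": np.tile(np.repeat(["mirror", "wall"], 20), 2),  # 20 per cell
})
# Crossed pattern from the abstract: liars move less at a blank wall,
# more in front of a mirror (synthetic effect sizes).
shift = {("lie", "wall"): -2, ("lie", "mirror"): 2,
         ("truth", "wall"): 2, ("truth", "mirror"): -2}
df["duration"] = [10 + shift[(d, m)] + rng.normal(scale=3)
                  for d, m in zip(df["deception"], df["mirror"])]

model = smf.ols("duration ~ C(deception) * C(mirror)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # interaction: C(deception):C(mirror)
```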
Cultural-based visual expression: Emotional analysis of human face via Peking Opera Painted Faces (POPF)
Peking Opera, a branch of Chinese traditional culture and art, uses very distinct, colourful facial make-up for all actors in stage performance. Such make-up is stylised into nonverbal symbolic semantics that combine to form painted faces describing and symbolising the background, character and emotional status of specific roles. A study of Peking Opera Painted Faces (POPF) was taken as an example of how information and meaning can be effectively expressed through changes of facial expression, considering both natural and emotional aspects of facial motion. The study found that POPF provides exaggerated features of facial motion through images, and that the symbolic semantics of POPF provide a high-level expression of human facial information. The study presented and validated a creative structure of information analysis and expression based on POPF to improve the understanding of human facial motion and emotion.
Deep Adaptive Attention for Joint Facial Action Unit Detection and Face Alignment
Facial action unit (AU) detection and face alignment are two highly correlated tasks, since facial landmarks can provide precise AU locations that facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works treat face alignment as preprocessing and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned first, and high-level features of face alignment are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module to refine the attention map of each AU adaptively. Finally, the assembled local features are integrated with face alignment features and global features for AU detection. Experiments on the BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms state-of-the-art methods for AU detection.
Comment: This paper has been accepted by ECCV 2018
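The core structural idea, a shared trunk whose alignment features feed the AU head, can be sketched in a few lines. The toy network below is illustrative only: layer sizes are arbitrary and the paper's adaptive attention module is omitted; only the joint-learning wiring reflects the abstract.

```python
import torch
import torch.nn as nn

class JointAUNet(nn.Module):
    """Toy joint model: shared trunk -> alignment head -> AU head."""
    def __init__(self, n_landmarks=68, n_aus=12):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared features
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.align_head = nn.Linear(64 * 8 * 8, n_landmarks * 2)
        # AU head consumes trunk features plus the alignment output,
        # mirroring "high-level features of face alignment are fed into
        # AU detection".
        self.au_head = nn.Linear(64 * 8 * 8 + n_landmarks * 2, n_aus)

    def forward(self, x):
        feats = self.trunk(x).flatten(1)
        landmarks = self.align_head(feats)
        au_logits = self.au_head(torch.cat([feats, landmarks], dim=1))
        return landmarks, au_logits

net = JointAUNet()
lm, au = net(torch.randn(2, 3, 64, 64))
print(lm.shape, au.shape)   # torch.Size([2, 136]) torch.Size([2, 12])
```

Training would then minimise a weighted sum of a landmark regression loss and a multi-label AU classification loss, so that gradients from both tasks shape the shared trunk.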
