Spotting Agreement and Disagreement: A Survey of Nonverbal Audiovisual Cues and Tools
While detecting and interpreting temporal patterns of non-verbal behavioural cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems. Nevertheless, it is an important one to achieve if the goal is to realise naturalistic communication between humans and machines. Machines that are able to sense social attitudes like agreement and disagreement, and to respond to them in a meaningful way, are likely to be welcomed by users thanks to the more natural, efficient and human-centred interaction they are bound to experience. This paper surveys the nonverbal cues that may be present during agreement and disagreement behavioural displays, lists a number of tools that could be useful in detecting them, and describes a few publicly available databases that could be used to train these tools for the analysis of spontaneous, audiovisual instances of agreement and disagreement.
A psycho-ethological approach to social signal processing
The emerging field of social signal processing can benefit from a theoretical framework to guide future research activities. The present article aims at drawing attention to two areas of research that devoted considerable efforts to the understanding of social behaviour: ethology and social psychology. With a long tradition in the study of animal signals, ethology and evolutionary biology have developed theoretical concepts to account for the functional significance of signalling. For example, the consideration of divergent selective pressures responsible for the evolution of signalling and social cognition emphasized the importance of two classes of indicators: informative cues and communicative signals. Social psychology, on the other hand, investigates emotional expression and interpersonal relationships, with a focus on the mechanisms underlying the production and interpretation of social signals and cues. Based on the theoretical considerations developed in these two fields, we propose a model that integrates the processing of perceivable individual features (social signals and cues) with contextual information, and we suggest that the output of computer-based processing systems should be derived in terms of functional significance rather than in terms of absolute conceptual meaning.
The proximate mechanisms and ultimate functions of smiles
Niedenthal et al.'s classification of smiles erroneously conflates psychological mechanisms and adaptive functions. This confusion weakens the rationale behind the types of smiles they chose to individuate, and it obfuscates the distinction between the communicative versus denotative nature of smiles and the role of perceived-gaze direction in emotion recognition.
Conceptual analysis of social signals: the importance of clarifying terminology
As a burgeoning field, Social Signal Processing (SSP) needs a solid grounding in the disciplines that have developed important concepts in the study of communication. However, the number and diversity of terms developed in linguistics, psychology, and the behavioural sciences may seem confusing for scholars who are not versed in the subtleties of conceptual analysis and theoretical developments. Indeed, different disciplines sometimes use the same term to mean different things or, conversely, use different terms to mean the same thing. The goals of this article are to present an overview of the different concepts developed in the various disciplines that studied animal and human communication, and to understand the differences and commonalities between concepts emerging from these disciplines. We conclude that such an understanding will greatly improve the efficiency of pluridisciplinary research projects, for the advancement of SSP requires that we look at the complexity of communication from different angles.
An evolutionary approach to human social behaviour : the case of smiling and laughing
Living in large groups is, for many species, an adaptive solution to survival and reproductive challenges. In primates, and even more so in humans, communication consequently evolved into a complex signalling system that includes language, nonverbal vocalisations such as laughter, and facial expressions. A series of studies was designed to address the function of smiling and laughter through an analysis of context and consequences. First, naturalistic observations were conducted in areas where people could be watched interacting in stable social groups. Focal sampling of men and women allowed the recording of smiling and laughter frequencies, as well as other interpersonal measures such as talking and listening time and body contacts. Smiles were classified into two categories: spontaneous and forced. A test based on predictions derived from three hypotheses (mate choice, social competition, and cooperation) revealed that spontaneous smiling and laughter are likely to be involved in the formation of cooperative relationships. A closer examination of dyadic interactions revealed that smiling was related to talking and listening time, whereas women's vocalised laughter positively affected the partner's speech output. Finally, smiling and laughter rates increased the probability of observing affiliative body contacts between individuals. A second set of studies investigated the possibility that smiling could (1) advertise attributes relevant to the formation of social relationships, and (2) be an honest signal of altruistic dispositions. The assessment of various traits was examined through people's judgments of neutral and smiling photographs. Results showed that smiling faces were perceived as significantly more attractive, more generous, healthier, more agreeable, more extroverted, and more open to experience than their neutral counterparts. Interestingly, men were influenced by smiling to a much larger extent than women, particularly when the smiling faces were female. The rating study also revealed that people who displayed smiles involving an emotional component (Duchenne smiles) received higher scores on extroversion and generosity than people who did not, indicating that people's ratings of sociability and generosity are sensitive to facial movements that are not easy to produce on purpose. A final study investigated the effect of bargaining contexts on smiling and laughter rates between friends. Analysis of videotaped interactions showed that Duchenne smiling and vocalised laughter were displayed at significantly higher rates when people were involved in sharing material resources (as opposed to a control interaction). Moreover, the data confirmed that Duchenne smiling could be a reliable signal of altruism, as its frequency of occurrence in the bargaining interaction was positively related to measures of altruism. Finally, results showed that smiling and laughter could advertise personality traits as well as aspects of the relationship between sender and receiver. All in all, the present thesis indicates that smiling and laughter could be used adaptively to develop social alliances, and that this bonding process would entail the reliable advertisement of evolutionarily relevant attributes. The relevance of smiling to a behavioural style based on cooperation and prosocial activities is also discussed.
Multimodal Integration of Dynamic Audio-Visual Cues in the Communication of Agreement and Disagreement
Recent research has stressed the importance of using multimodal and dynamic features to investigate the role of nonverbal behavior in social perception. This paper examines the influence of low-level visual and auditory cues on the communication of agreement, disagreement, and the ability to convince others. In addition, we investigate whether the judgment of these attitudes depends on ratings of socio-emotional dimensions such as dominance, arousal, and valence. The material we used consisted of audio-video excerpts that represent statements of agreement and disagreement, as well as neutral utterances taken from political discussions. Each excerpt was rated on a number of dimensions: agreement, disagreement, dominance, valence, arousal, and convincing power in three rating conditions: audio-only, video-only, and audio-video. We extracted low-level dynamic visual features using optical flow. Auditory features consisted of pitch measurements, vocal intensity, and articulation rate. Results show that judges were able to distinguish statements of disagreement from agreement and neutral utterances on the basis of nonverbal cues alone, in particular, when both auditory and visual information were present. Visual features were more influential when presented along with auditory features. Perceivers mainly used changes in pitch and the maximum speed of vertical movements to infer agreement and disagreement, and targets appeared more convincing when they showed consistent and rapid movements on the vertical plane. The effect of nonverbal features on ratings of agreement and disagreement was completely mediated by ratings of dominance, valence, and arousal, indicating that the impact of low-level audio-visual features on the perception of agreement and disagreement depends on the perception of fundamental socio-emotional dimensions.
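The abstract above mentions pitch measurements among the low-level auditory features. As a minimal sketch of how one such feature can be extracted, the following estimates the fundamental frequency of a signal via autocorrelation peak-picking. This is a generic, illustrative technique run here on a synthetic tone, not the authors' actual analysis pipeline; the function name, sampling rate, and search band (80–400 Hz, a typical speech range) are all assumptions.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=80.0, fmax=400.0):
    """Estimate fundamental frequency (Hz) via autocorrelation peak-picking."""
    signal = signal - signal.mean()
    # Keep the non-negative lags of the full autocorrelation.
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_min = int(sr / fmax)  # shortest period considered
    lag_max = int(sr / fmin)  # longest period considered
    # The lag of the strongest peak in the search band is the period estimate.
    peak_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / peak_lag

sr = 16000
t = np.arange(0, 0.5, 1 / sr)
tone = np.sin(2 * np.pi * 220.0 * t)  # synthetic 220 Hz stand-in for a voiced sound
print(estimate_pitch(tone, sr))       # close to the true 220 Hz
```

In a real pipeline the estimator would be applied frame by frame over short windows of the recording, and tracking the resulting pitch contour over time gives the kind of dynamic pitch-change cue the study describes.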
Application of facial neuromuscular electrical stimulation (fNMES) in psychophysiological research: Practical recommendations based on a systematic review of the literature.
Facial neuromuscular electrical stimulation (fNMES), which allows for the non-invasive and physiologically sound activation of facial muscles, has great potential for investigating fundamental questions in psychology and neuroscience, such as the role of proprioceptive facial feedback in emotion induction and emotion recognition, and may serve for clinical applications, such as alleviating symptoms of depression. However, despite illustrious origins in the 19th-century work of Duchenne de Boulogne, the practical application of fNMES remains largely unknown to today's researchers in psychology. In addition, published studies vary dramatically in the stimulation parameters used, such as stimulation frequency, amplitude, duration, and electrode size, and in the way they reported them. Because fNMES parameters impact the comfort and safety of volunteers, as well as its physiological (and psychological) effects, it is of paramount importance to establish recommendations of good practice and to ensure studies can be better compared and integrated. Here, we provide an introduction to fNMES, systematically review the existing literature focusing on the stimulation parameters used, and offer recommendations on how to safely and reliably deliver fNMES and on how to report the fNMES parameters to allow better cross-study comparison. In addition, we provide a free webpage to easily visualise fNMES parameters and verify their safety based on current density. As an example of a potential application, we focus on the use of fNMES for the investigation of the facial feedback hypothesis.
Zygomaticus activation through facial neuromuscular electrical stimulation (fNMES) induces happiness perception in ambiguous facial expressions and affects neural correlates of face processing.
The role of facial feedback in facial emotion recognition remains controversial, partly due to limitations of the existing methods to manipulate the activation of facial muscles, such as voluntary posing of facial expressions or holding a pen in the mouth. These procedures are indeed limited in their control over which muscles are (de)activated when and to what degree. To overcome these limitations and investigate in a more controlled way if facial emotion recognition is modulated by one's facial muscle activity, we used computer-controlled facial neuromuscular electrical stimulation (fNMES). In a pre-registered EEG experiment, ambiguous facial expressions were categorised as happy or sad by 47 participants. In half of the trials, weak smiling was induced through fNMES delivered to the bilateral Zygomaticus Major muscle for 500 ms. The likelihood of categorising ambiguous facial expressions as happy was significantly increased with fNMES, as shown with frequentist and Bayesian linear mixed models. Further, fNMES resulted in a reduction of P1, N170 and LPP amplitudes. These findings suggest that fNMES-induced facial feedback can bias facial emotion recognition and modulate the neural correlates of face processing. We conclude that fNMES has potential as a tool for studying the effects of facial feedback.
EmoSex: Emotion prevails over sex in implicit judgments of faces and voices
Appraisals can be influenced by cultural beliefs and stereotypes. In line with this, past research has shown that judgments about the emotional expression of a face are influenced by the face's sex, and vice versa that judgments about the sex of a person somewhat depend on the person's facial expression. For example, participants associate anger with male faces, and female faces with happiness or sadness. However, the strength and the bidirectionality of these effects remain debated. Moreover, the interplay of a stimulus' emotion and sex remains mostly unknown in the auditory domain. To investigate these questions, we created a novel stimulus set of 121 avatar faces and 121 human voices (available at https://bit.ly/2JkXrpy) with matched, fine-scale changes along the emotional (happy to angry) and sexual (male to female) dimensions. In a first experiment (N=76), we found clear evidence for the mutual influence of facial emotion and sex cues on ratings, and moreover for larger implicit (task-irrelevant) effects of a stimulus' emotion than of its sex. These findings were replicated and extended in two preregistered studies: one laboratory categorisation study using the same face stimuli (N=108; https://osf.io/ve9an), and one online study with vocalisations (N=72; https://osf.io/vhc9g). Overall, results show that the associations of maleness-anger and femaleness-happiness exist across sensory modalities, and suggest that emotions expressed in the face and voice cannot be entirely disregarded, even when attention is mainly focused on determining a stimulus' sex. We discuss the relevance of these findings for cognitive and neural models of face and voice processing.