Facial aesthetics: babies prefer attractiveness to symmetry
The visual preferences of human infants for faces that varied in their attractiveness and in their symmetry about the midline were explored. The aim was to establish whether infants' visual preference for attractive faces may be mediated by the vertical symmetry of the face. Chimeric faces, made from photographs of attractive and unattractive female faces, were produced by computer graphics. Babies looked longer at normal and at chimeric attractive faces than at normal and at chimeric unattractive faces. There were no developmental differences between the younger and older infants: all preferred to look at the attractive faces. Infants as young as 4 months showed similarity with adults in the 'aesthetic perception' of attractiveness, and this preference was not based on the vertical symmetry of the face.
Perceptions of human attractiveness comprising face and voice cues
In human mate choice, sexually dimorphic faces and voices comprise hormone-mediated cues that purportedly develop as an indicator of mate quality or the ability to compete with same-sex rivals. If preferences for faces communicate the same biologically relevant information as do voices, then ratings of these cues should correlate. Sixty participants (30 male and 30 female) rated a series of opposite-sex faces, voices, and faces together with voices for attractiveness in a repeated measures computer-based experiment. The effects of face and voice attractiveness on face-voice compound stimuli were analyzed using a multilevel model. Faces contributed proportionally more than voices to ratings of face-voice compound attractiveness. Faces and voices positively and independently contributed to the attractiveness of male compound stimuli, although there was no significant correlation between their rated attractiveness. A positive interaction and correlation between attractiveness was shown for faces and voices in relation to the attractiveness of female compound stimuli. Rather than providing a better estimate of a single characteristic, male faces and voices may instead communicate independent information that, in turn, provides a female with a better assessment of overall mate quality. Conversely, female faces and voices together provide males with a more accurate assessment of a single dimension of mate quality.
Depression-related difficulties disengaging from negative faces are associated with sustained attention to negative feedback during social evaluation and predict stress recovery
The present study aimed to clarify: 1) the presence of a depression-related attention bias in response to a social stressor, 2) its association with depression-related attention biases as measured under standard conditions, and 3) their association with impaired stress recovery in depression. A sample of 39 participants reporting a broad range of depression levels completed a standard eye-tracking paradigm in which they had to engage/disengage their gaze with/from emotional faces. Participants then underwent a stress induction (i.e., giving a speech), in which their eye movements to false emotional feedback were measured, and stress reactivity and recovery were assessed. Depression level was associated with longer times to engage/disengage attention with/from negative faces under standard conditions and with sustained attention to negative feedback during the speech. These depression-related biases were associated with each other and mediated the association between depression level and self-reported stress recovery, predicting poorer recovery from stress after giving the speech.
Affect Infusion and Detection through Faces in Computer-mediated Knowledge-sharing Decisions
Faces are important in both human communication and computer-mediated communication. In this study, I analyze the influence of emotional expressions in faces on knowledge-sharing decisions in a computer-mediated environment. I suggest that faces can be used for affect infusion and affect detection, which increases the effectiveness of knowledge-management systems. Using the affect infusion model, I discuss why emotions can be expected to influence knowledge-sharing decisions. Using the two-step primitive emotional contagion framework, I found that an emotional facial expression attached to a knowledge-sharing request influenced knowledge-sharing decisions. This influence was mediated by the emotional valence of the decision maker's facial expression, tracked by Face Reader technology, and held for females but not for males. I discuss implications for designers of emotionally intelligent information systems and for research.
Emotional Faces Capture Spatial Attention in 5-Year-Old Children
Emotional facial expressions are important social cues that convey salient affective information. Infants, younger children, and adults all appear to orient spatial attention to emotional faces, with a particularly strong bias to fearful faces. Yet in young children it is unclear whether or not both happy and fearful faces capture attention. Given that the processing of emotional faces is believed by some to serve an evolutionarily adaptive purpose, attentional biases to both fearful and happy expressions would be expected in younger children. However, the extent to which this ability is present in young children and whether or not this ability is genetically mediated is untested. Therefore, the aims of the current study were to assess the spatial-attentional properties of emotional faces in young children, with a preliminary test of whether this effect was influenced by genetics. Five-year-old twin pairs performed a dot-probe task. The results suggest that children preferentially direct spatial attention to emotional faces, particularly faces in the right visual field. The results provide support for the notion that the direction of spatial attention to emotional faces serves an evolutionarily adaptive function and may be mediated by genetic mechanisms.
Smile asymmetries and reputation as reliable indicators of likelihood to cooperate: An evolutionary analysis
Cooperating with individuals whose altruism is not motivated by genuine prosocial emotions could have been costly in ancestral division of labour partnerships. How do humans 'know' whether or not an individual has the prosocial emotions committing future cooperation? Frank (1988) has hypothesized two pathways for altruist-detection: (a) facial expressions of emotions signalling character; and (b) gossip regarding the target individual's reputation. Detecting non-verbal cues signalling commitment to cooperate may be one way to avoid the costs of exploitation. Spontaneous smiles while cooperating may be reliable index cues because of the physiological constraints controlling the neural pathways mediating involuntary emotional expressions. Specifically, it is hypothesized that individuals whose help is mediated by genuine sympathy will express involuntary smiles (which are observably different from posed smiles). To investigate this idea, 38 participants played dictator games (i.e. a unilateral resource allocation task) against cartoon faces with a benevolent emotional expression (i.e. concern furrows and smile). The faces were presented with information regarding reputation (e.g. descriptions of an altruistic character vs. a non-altruistic character). Half of the sample played against icons with symmetrical smiles (representing a spontaneous smile) while the other half played against asymmetrically smiling icons (representing a posed smile). Icons described as having altruistic motives received more resources than icons described as self-interested helpers. Faces with symmetrical smiles received more resources than faces with asymmetrical smiles. These results suggest that reputation and smile asymmetry influence the likelihood of cooperation and thus may be reliable cues to altruism. These cues may allow altruists to garner more resources in division of labour situations.
Face scanning and spontaneous emotion preference in Cornelia de Lange syndrome and Rubinstein-Taybi syndrome
Background
Existing literature suggests differences in face scanning in individuals with different socio-behavioural characteristics. Cornelia de Lange syndrome (CdLS) and Rubinstein-Taybi syndrome (RTS) are two genetically defined neurodevelopmental disorders with unique profiles of social behaviour.
Methods
Here, we examine eye gaze to the eye and mouth regions of neutrally expressive faces, as well as the spontaneous visual preference for happy and disgusted facial expressions compared to neutral faces, in individuals with CdLS versus RTS.
Results
Results indicate that the amount of time spent looking at the eye and mouth regions of faces was similar in 15 individuals with CdLS and 17 individuals with RTS. Both participant groups also showed a similar pattern of spontaneous visual preference for emotions.
Conclusions
These results provide insight into two rare, genetically defined neurodevelopmental disorders that have been reported to exhibit contrasting socio-behavioural characteristics and suggest that differences in social behaviour may not be sufficient to predict attention to the eye region of faces. These results also suggest that differences in the social behaviours of these two groups may be cognitively mediated rather than subcortically mediated.
Where Bottom-up Meets Top-down: Neuronal Interactions during Perception and Imagery
Functional magnetic resonance imaging (fMRI) studies have identified category-selective regions in ventral occipito-temporal cortex that respond preferentially to faces and other objects. The extent to which these patterns of activation are modulated by bottom-up or top-down mechanisms is currently unknown. We combined fMRI and dynamic causal modelling to investigate neuronal interactions between occipito-temporal, parietal and frontal regions during visual perception and visual imagery of faces, houses and chairs. Our results indicate that, during visual perception, category-selective patterns of activation in extrastriate cortex are mediated by content-sensitive forward connections from early visual areas. In contrast, during visual imagery, category-selective activation is mediated by content-sensitive backward connections from prefrontal cortex. Additionally, we report content-unrelated connectivity between parietal cortex and the category-selective regions during both perception and imagery. Thus, our investigation revealed that neuronal interactions between occipito-temporal, parietal and frontal regions are task- and stimulus-dependent. Sensory representations of faces and objects are mediated by bottom-up mechanisms arising in early visual areas and top-down mechanisms arising in prefrontal cortex, during perception and imagery respectively. Additionally, non-selective, top-down processes, originating in superior parietal areas, contribute to the generation of mental images, regardless of their content, and their maintenance in the 'mind's eye'.
Neural correlates of emotional valence for faces and words
Stimuli with negative emotional valence are especially apt to influence perception and action because of their crucial role in survival, a property that may not be precisely mirrored by positive emotional stimuli of equal intensity. The aim of this study was to identify the neural circuits differentially coding for positive and negative valence in the implicit processing of facial expressions and words, which are among the main ways human beings use to express emotions. Thirty-six healthy subjects took part in an event-related fMRI experiment. We used an implicit emotional processing task with the visual presentation of negative, positive, and neutral faces and words as primary stimuli. Dynamic Causal Modeling (DCM) of the fMRI data was used to test effective brain connectivity within two different anatomo-functional models, for the processing of words and faces, respectively. In our models, the only areas showing a significant differential response to negative and positive valence across both face and word stimuli were early visual cortices, with faces eliciting stronger activations. For faces, DCM revealed that this effect was mediated by a facilitation of activity in the amygdala by positive faces and in the fusiform face area by negative faces; for words, the effect was mainly imputable to a facilitation of activity in the primary visual cortex by positive words. These findings support a role of early sensory cortices in discriminating the emotional valence of both faces and words, where the effect may be mediated chiefly by the subcortical/limbic visual route for faces, and rely more on the direct thalamic pathway to primary visual cortex for words.
FIAEs in famous faces are mediated by type of processing
An important question regarding face aftereffects is whether they are based on face-specific or lower-level mechanisms. One method for addressing this is to explore how adaptation to upright or inverted, photographic positive or negative faces transfers to test stimuli that are either upright or inverted and normal or negated. A series of studies is reported in which this is tested using a typical face identity aftereffect paradigm with unfamiliar and famous faces. Results showed that aftereffects were strongest when the adaptor matched the test stimuli. In addition, aftereffects did not transfer from upright adaptors to inverted test images, but did transfer from inverted adaptors to upright test images in famous faces. However, in unfamiliar faces, a different pattern was observed. The results are interpreted in terms of how identity adaptation interacts with low-level adaptation, and highlight differences in the representation of famous and unfamiliar faces.