
    Foveal to peripheral extrapolation of facial emotion.

    Peripheral vision is characterized by poor resolution. Recent evidence from brightness perception suggests that missing information is filled out with information at fixation. Here we show a novel filling-out mechanism: when participants are presented with a crowd of faces, the perceived emotion of faces in peripheral vision is biased towards the emotion of the face at fixation. This mechanism is particularly important in social situations, where people often need to perceive the overall mood of a crowd. Some faces in the crowd are more likely to catch people's attention and be looked at directly, while others are only seen peripherally. Our findings suggest that the perceived emotion of these peripheral faces, and the overall perceived mood of the crowd, is biased by the emotions of the faces that people look at directly.

    Peripheral vision and pattern recognition: a review

    We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by crowding.
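
The Bouma's law mentioned above can be stated very compactly: the critical centre-to-centre spacing below which flankers crowd a target grows roughly in proportion to eccentricity, with a proportionality constant of about 0.5. A minimal sketch (the constant 0.5 and the example distances are the classic illustrative values, not figures from this review):

```python
def bouma_critical_spacing(eccentricity_deg: float, b: float = 0.5) -> float:
    """Bouma's law: critical spacing (deg) for the onset of crowding
    at a given eccentricity (deg). b ~ 0.5 is the classic estimate."""
    return b * eccentricity_deg

def is_crowded(eccentricity_deg: float, flanker_spacing_deg: float) -> bool:
    """A flanker closer than the critical spacing induces crowding."""
    return flanker_spacing_deg < bouma_critical_spacing(eccentricity_deg)

# A target at 10 deg eccentricity: flankers within ~5 deg crowd it.
print(bouma_critical_spacing(10.0))  # 5.0
print(is_crowded(10.0, 3.0))         # True
print(is_crowded(10.0, 6.0))         # False
```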

    Culture variation in the average identity extraction: The role of global vs. local processing orientation

    Research has shown that observers often spontaneously extract a mean representation from multiple faces/objects in a scene even when this is not required by the task. This phenomenon, now known as ensemble coding, has so far been based mainly on data from Western populations. This study compared East Asian and Western participants in an implicit ensemble-coding task, in which the explicit task was to judge whether a test face had been present in a briefly exposed set of faces. Although both groups showed a tendency to mistake an average of the presented faces for the target, confirming the universality of ensemble coding, East Asian participants displayed a stronger averaging tendency than the Westerners. To further examine how this cultural default can adapt to global or local processing demands, our second experiment tested the effects of priming a global or local processing orientation on ensemble coding via a Navon task procedure. Results revealed a reduced tendency for ensemble coding following the priming of a local processing orientation. Together, these results suggest that culture can influence the proneness to ensemble coding, and that the default cultural mode is malleable to a temporary processing demand.

    Interaction between visual attention and the processing of visual emotional stimuli in humans: eye-tracking, behavioural and event-related potential experiments

    Past research has shown that the processing of emotional visual stimuli and visual attention are tightly linked together. In particular, emotional stimuli processing can modulate attention, and, reciprocally, the processing of emotional stimuli can be facilitated or inhibited by attentional processes. However, our understanding of these interactions is still limited, with much work remaining to be done to understand the characteristics of this reciprocal interaction and the different mechanisms that are at play. This thesis presents a series of experiments which use eye-tracking, behavioural and event-related potential (ERP) methods in order to better understand these interactions from a cognitive and neuroscientific point of view. First, the influence of emotional stimuli on eye movements, reflecting overt attention, was investigated. While it is known that the emotional gist of images attracts the eye (Calvo and Lang, 2004), little is known about the influence of emotional content on eye movements in more complex visual environments. Using eye-tracking methods, and by adapting a paradigm originally used to study the influence of semantic inconsistencies in scenes (Loftus and Mackworth, 1978), we found that participants spend more time fixating emotional than neutral targets embedded in visual scenes, but do not fixate them earlier. Emotional targets in scenes were therefore found to hold, but not to attract, the eye. This suggests that due to the complexity of the scenes and the limited processing resources available, the emotional information projected extra-foveally is not processed in such a way that it drives eye movements. Next, in order to better characterise the exogenous deployment of covert attention toward emotional stimuli, a sample of sub-clinically anxious individuals was studied. Anxiety is characterised by a reflexive attentional bias toward threatening stimuli. 
A dot-probe task (MacLeod et al., 1986) was designed to replicate and extend past findings of this attentional bias. In particular, the experiment was designed to test whether the bias was caused by faster reaction times to fear-congruent probes or slower reaction times to neutral-congruent probes. No attentional bias could be measured. A further analysis of the literature suggests that subliminal cue stimulus presentation, as used in our case, may not generate reliable attentional biases, unlike longer cue presentations. This would suggest that while emotional stimuli can be processed without awareness, further processing may be necessary to trigger reflexive attentional shifts in anxiety. Then the time-course of emotional stimulus processing and its modulation by attention was investigated. Modulations of the very early visual ERP C1 component by emotional stimuli (e.g. Pourtois et al., 2004; Stolarova et al., 2006), but also by visual attention (Kelly et al., 2008), were reported in the literature. A series of three experiments was performed, investigating the interactions of endogenous covert spatial attention and object-based attention with emotional stimuli processing in the C1 time window (50–100 ms). It was found that emotional stimuli modulated the C1 only when they were spatially attended and task-irrelevant. This suggests that whilst spatial attention gates emotional facial processing from the earliest stages, only incidental processing triggers a specific response before 100 ms. Additionally, the results suggest a very early modulation by feature-based attention which is independent from spatial attention. Finally, simulated and actual electroencephalographic data were used to show that modulations of early ERP and event-related field (ERF) components are highly dependent on the high-pass filter used in the pre-processing stage. 
A survey of the literature found that a large proportion of ERP/ERF reports (about 40%) use high-pass filters that may bias the results. Moreover, many of the papers reporting very early modulations also use such filters. Consequently, a large part of the literature may need to be re-assessed. The work described in this thesis contributes to a better understanding of the links between emotional stimulus processing and attention at different levels. Using various experimental paradigms, this work confirms that emotional stimuli processing is not ‘automated’, but highly dependent on the focus of attention, even at the earliest stages of visual processing. Furthermore, the uncovered potential bias generated by filtering will help to improve the reliability and precision of research in the ERP/ERF field, and more particularly in studies looking at early effects.
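
The filtering artefact described above is easy to demonstrate in miniature: a causal high-pass filter applied to a purely positive slow wave introduces an opposite-polarity deflection that was never in the signal, which can masquerade as an early component. A minimal sketch with a first-order RC high-pass filter (the cutoff, sampling rate, and boxcar "component" are illustrative assumptions, not the thesis's simulation parameters):

```python
def highpass(x, cutoff_hz, fs_hz):
    """Causal first-order RC high-pass filter (difference equation):
    y[n] = a * (y[n-1] + x[n] - x[n-1]),  a = RC / (RC + 1/fs)."""
    import math
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    a = rc / (rc + dt)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

# A purely positive slow wave (a boxcar standing in for a late component)...
fs = 250  # Hz, illustrative sampling rate
wave = [0.0] * 50 + [1.0] * 100 + [0.0] * 100
filtered = highpass(wave, cutoff_hz=1.0, fs_hz=fs)

# ...acquires a spurious negative deflection after high-pass filtering:
print(min(filtered) < -0.1)  # True: artifactual opposite-polarity component
```

Zero-phase (forward-backward) filtering smears such distortion into the pre-stimulus and early post-stimulus window, which is why the choice of high-pass cutoff matters for claims about very early effects.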

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149–164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 117, July 1973

    This special bibliography lists 353 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1973.

    Engineering Data Compendium. Human Perception and Performance, Volume 1

    The Engineering Data Compendium was the product of an R and D program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by system designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is Volume 1, which contains sections on Visual Acquisition of Information, Auditory Acquisition of Information, and Acquisition of Information by Other Senses.

    Different roles of foveal and extrafoveal vision in ensemble representation for facial expressions

    People can extract the mean expression of multiple faces quite precisely. However, the mechanism by which such an ensemble representation is formed remains far from clear. This study aimed to explore how faces in foveal and extrafoveal vision contribute to the ensemble representation and whether the emotion of the faces modulates this contribution. In the experiment, the expressions of foveal and extrafoveal faces were independently manipulated by changing the ratio of happy vs. angry faces. Participants reported whether the overall emotion was positive or negative. The results showed that faces in foveal vision were given more weight than those in extrafoveal vision in the ensemble emotional representation. In addition, ensemble perception was more accurate when faces in extrafoveal vision were positive. These findings have implications for emotional design in interactive systems, especially when multiple users or multiple avatars are presented on the screen. © 2014 Springer International Publishing
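
The finding that foveal faces carry more weight can be sketched as a weighted-average model of crowd valence. The weight of 0.7 and the ±1 valence coding are illustrative assumptions for this sketch, not values fitted by the study:

```python
def ensemble_valence(foveal, extrafoveal, w_foveal=0.7):
    """Weighted-average model of perceived crowd emotion: faces at
    fixation contribute more than extrafoveal ones.
    Valence coding: +1 = happy, -1 = angry (illustrative).
    w_foveal = 0.7 is an assumed weight, not a fitted parameter."""
    mean_fov = sum(foveal) / len(foveal)
    mean_ext = sum(extrafoveal) / len(extrafoveal)
    return w_foveal * mean_fov + (1 - w_foveal) * mean_ext

# One angry face at fixation outweighs a mostly happy periphery:
print(ensemble_valence([-1], [1, 1, 1, -1]))  # ~ -0.55 -> judged "negative"
```

Under this model the crowd is judged negative even though three of the five faces are happy, mirroring the reported foveal bias in ensemble emotion judgements.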
