62 research outputs found

    Looking away from faces: influence of high-level visual processes on saccade programming

    No full text
    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

    Putting culture under the spotlight reveals universal information use for face recognition

    Get PDF
    Background: Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate the nose region more, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction, so the question of whether humans universally use identical facial information to recognize faces remains unresolved. Methodology/Principal Findings: We monitored the eye movements of Western Caucasian (WC) and East Asian (EA) observers during face recognition with a novel technique that parametrically restricts information outside central vision. We used ‘Spotlights’ with Gaussian apertures of 2°, 5°, or 8° dynamically centered on observers’ fixations. Strikingly, in the constrained Spotlight conditions (2°, 5°), observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available while fixating the nose (8°), EA observers, as expected, shifted their fixations towards this region. Conclusions/Significance: Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.
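
    The Spotlight manipulation is straightforward to prototype. The sketch below is not the authors' implementation (their experiments used a gaze-contingent eye-tracking display); it only illustrates, with an assumed function name and an assumed pixels-per-degree conversion, how a Gaussian aperture centred on the current fixation could be applied to a grayscale stimulus.

        import numpy as np

        def gaussian_aperture(image, fix_xy, sigma_px, background=0.5):
            """Reveal `image` through a Gaussian window centred on the current
            fixation; everything outside the aperture fades to `background`.

            image    : 2-D float array in [0, 1] (grayscale face stimulus)
            fix_xy   : (x, y) fixation coordinates in pixels
            sigma_px : aperture width in pixels (e.g. a 2, 5 or 8 deg aperture
                       converted with the monitor's pixels per degree)
            """
            h, w = image.shape
            ys, xs = np.mgrid[0:h, 0:w]
            fx, fy = fix_xy
            mask = np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma_px ** 2))
            return mask * image + (1.0 - mask) * background

        # Example: a 2 deg aperture at an assumed 40 px/deg, fixation at image centre.
        # In a real experiment the mask would be recomputed from each new gaze sample.
        stimulus = np.random.rand(512, 512)      # placeholder image
        masked = gaussian_aperture(stimulus, (256, 256), sigma_px=2 * 40)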

    Beyond Faces and Expertise: Facelike Holistic Processing of Nonface Objects in the Absence of Expertise

    Get PDF
    Holistic processing, the tendency to perceive objects as indecomposable wholes, has long been viewed as a process specific to faces or objects of expertise. Although current theories differ in what causes holistic processing, they share a fundamental constraint for its generalization: nonface objects cannot elicit facelike holistic processing in the absence of expertise. Contrary to this prevailing view, here we show that line patterns with salient Gestalt information (i.e., connectedness, closure, and continuity between parts) can be processed as holistically as faces without any training. Moreover, weakening the saliency of Gestalt information in these patterns reduced holistic processing of them, which indicates that Gestalt information plays a crucial role in holistic processing. Therefore, holistic processing can be achieved not only via a top-down route based on expertise, but also via a bottom-up route relying merely on object-based information. The finding that facelike holistic processing can extend beyond the domains of faces and objects of expertise poses a challenge to current dominant theories.

    Culture shapes how we look at faces

    Get PDF
    Background: Face processing, amongst many basic visual skills, is thought to be invariant across all humans. From as early as 1965, studies of eye movements have consistently revealed a systematic triangular sequence of fixations over the eyes and the mouth, suggesting that faces elicit a universal, biologically determined information extraction pattern. Methodology/Principal Findings: Here we monitored the eye movements of Western Caucasian and East Asian observers while they learned, recognized, and categorized by race Western Caucasian and East Asian faces. Western Caucasian observers reproduced a scattered triangular pattern of fixations for faces of both races and across tasks. Contrary to intuition, East Asian observers focused more on the central region of the face. Conclusions/Significance: These results demonstrate that face processing can no longer be considered as arising from a universal series of perceptual events. The strategy employed to extract visual information from faces differs across cultures.

    Category selectivity in human visual cortex: beyond visual object recognition

    Get PDF
    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object recognition. For example, it has been proposed that category selectivity reflects the clustering of category-associated visual feature representations, or that it reflects category-specific computational algorithms needed to achieve view invariance. This visual object recognition framework has gained renewed interest with the success of deep neural network models trained to "recognize" objects: these hierarchical feed-forward networks show similarities to human visual cortex, including categorical separability. We argue that the object recognition framework is unlikely to fully account for category selectivity in visual cortex. Instead, we consider category selectivity in the context of other functions such as navigation, social cognition, tool use, and reading. Category-selective regions are activated during such tasks even in the absence of visual input and even in individuals with no prior visual experience. Further, they maintain close connections with broader domain-specific networks. Considering the diverse functions of these networks, category-selective regions likely encode their preferred stimuli in highly idiosyncratic formats; representations that are useful for navigation, social cognition, or reading are unlikely to be meaningfully similar to each other and, to varying degrees, may not be entirely visual. The demand for specific types of representations to support category-associated tasks may best account for category selectivity in visual cortex. This broader view invites new experimental and computational approaches.

    Shape-independent object category responses revealed by MEG and fMRI decoding

    Get PDF
    Neuroimaging research has identified category-specific neural response patterns to a limited set of object categories. For example, faces, bodies, and scenes evoke activity patterns in visual cortex that are uniquely traceable in space and time. It is currently debated whether these apparently categorical responses truly reflect selectivity for categories or instead reflect selectivity for category-associated shape properties. In the present study, we used a cross-classification approach on functional MRI (fMRI) and magnetoencephalographic (MEG) data to reveal both category-independent shape responses and shape-independent category responses. Participants viewed human body parts (hands and torsos) and pieces of clothing that were closely shape-matched to the body parts (gloves and shirts). Category-independent shape responses were revealed by training multivariate classifiers on discriminating shape within one category (e.g., hands versus torsos) and testing these classifiers on discriminating shape within the other category (e.g., gloves versus shirts). This analysis revealed significant decoding in large clusters in visual cortex (fMRI) starting from 90 ms after stimulus onset (MEG). Shape-independent category responses were revealed by training classifiers on discriminating object category (bodies and clothes) within one shape (e.g., hands versus gloves) and testing these classifiers on discriminating category within the other shape (e.g., torsos versus shirts). This analysis revealed significant decoding in bilateral occipitotemporal cortex (fMRI) and from 130 to 200 ms after stimulus onset (MEG). Together, these findings provide evidence for concurrent shape and category selectivity in high-level visual cortex, including category-level responses that are not fully explicable by two-dimensional shape properties.
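
    The cross-classification logic is independent of any particular toolbox. The sketch below is not the authors' analysis pipeline; it uses hypothetical (randomly generated) trial-by-feature pattern matrices and a linear SVM from scikit-learn purely to illustrate training a shape classifier within one category and testing it within the other.

        import numpy as np
        from sklearn.svm import LinearSVC

        # Hypothetical voxel (or sensor) patterns: rows = trials, columns = features.
        # Shape labels: 0 = hand-shaped, 1 = torso-shaped.
        rng = np.random.default_rng(0)
        X_bodies,  y_bodies  = rng.normal(size=(80, 200)), rng.integers(0, 2, 80)  # hands vs. torsos
        X_clothes, y_clothes = rng.normal(size=(80, 200)), rng.integers(0, 2, 80)  # gloves vs. shirts

        # Category-independent shape decoding: train on shape within bodies,
        # test on shape within clothes. Above-chance accuracy implies a shape
        # code that generalizes across category.
        clf = LinearSVC(max_iter=10000).fit(X_bodies, y_bodies)
        shape_xclass_acc = clf.score(X_clothes, y_clothes)
        print(f"cross-classification accuracy: {shape_xclass_acc:.2f}")

        # Shape-independent category decoding swaps the roles of the labels:
        # train bodies-vs-clothes within one shape, test within the other shape.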

    Face Inversion Reduces the Persistence of Global Form and Its Neural Correlates

    Get PDF
    Face inversion produces a detrimental effect on face recognition. The extent to which the inversion of faces and other kinds of objects influences the perceptual binding of visual information into global forms is not known. We used a behavioral method and functional MRI (fMRI) to measure the effect of face inversion on visual persistence, a type of perceptual memory that reflects sustained awareness of global form. We found that upright faces persisted longer than inverted versions of the same images; we observed a similar effect of inversion on the persistence of animal stimuli. This effect of inversion on persistence was evident in sustained fMRI activity throughout the ventral visual hierarchy, including the lateral occipital area (LO), two face-selective visual areas (the fusiform face area, FFA, and the occipital face area, OFA), and several early visual areas. V1 showed the same initial fMRI activation to upright and inverted forms, but this activation lasted longer for upright stimuli. The inversion effect on persistence-related fMRI activity in V1 and other retinotopic visual areas demonstrates that higher-tier visual areas influence early visual processing via feedback. This feedback effect on figure-ground processing is sensitive to the orientation of the figure.

    iMap: a novel method for statistical fixation mapping of eye movement data

    Get PDF
    Eye movement data analyses are commonly based on the probability of occurrence of saccades and fixations (and their characteristics) in given regions of interest (ROIs). In this article, we introduce an alternative method for computing statistical fixation maps of eye movements, iMap, based on an approach inspired by methods used in functional magnetic resonance imaging. Importantly, iMap does not require the a priori segmentation of the experimental images into ROIs. With iMap, fixation data are first smoothed by convolution with Gaussian kernels to generate three-dimensional fixation maps. This procedure embodies eyetracker accuracy, but the Gaussian kernel can also be flexibly set to represent acuity or attentional constraints. In addition, the smoothed fixation data generated by iMap conform to the assumptions of the robust statistical random field theory (RFT) approach, which is applied thereafter to assess significant fixation spots and differences across the three-dimensional fixation maps. The RFT corrects for the multiple statistical comparisons generated by the numerous pixels constituting the digital images. To illustrate the processing steps of iMap, we provide sample analyses of real eye movement data from face, visual scene, and memory processing. The iMap MATLAB toolbox is editable and freely available for download online (www.unifr.ch/psycho/ibmlab/).
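
    iMap itself is distributed as a MATLAB toolbox, and its statistical stage relies on random field theory, which is not reproduced here. The fragment below only sketches the first processing step described above, under assumed names: accumulating fixation durations into a pixel map and smoothing it with a Gaussian kernel.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def fixation_map(fixations, durations, img_shape, sigma_px=10):
            """Accumulate fixation durations at their pixel locations and smooth
            the result with a Gaussian kernel (the smoothing step of iMap).

            fixations : (N, 2) array of (x, y) fixation coordinates in pixels
            durations : (N,) array of fixation durations (e.g. in ms)
            sigma_px  : kernel width; can stand for eye-tracker accuracy,
                        foveal extent, or an attentional window
            """
            h, w = img_shape
            raw = np.zeros((h, w))
            for (x, y), d in zip(fixations, durations):
                raw[int(y), int(x)] += d
            return gaussian_filter(raw, sigma=sigma_px)

        # Example: one observer's fixations on a 600 x 800 stimulus.
        fix = np.array([[400, 250], [380, 300], [420, 310]])
        dur = np.array([220, 180, 250])
        smoothed_map = fixation_map(fix, dur, (600, 800))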

    Towards a model of human body perception

    Get PDF
    From just a glimpse of another person, we make inferences about their current states and longstanding traits. These inferences are normally spontaneous and effortless, yet they are crucial in shaping our impressions of and behaviours towards other people. What are the perceptual operations involved in the rapid extraction of socially relevant information? To answer this question, over the last decade the visual and cognitive neuroscience of social stimuli has received new input from emerging social vision approaches. Perhaps as a result of these contributions, researchers have reached a certain degree of consensus over a standard model of face perception. This thesis aims to extend social vision approaches to the case of human body perception. In doing so, it establishes the building blocks for a perceptual model of the human body which integrates the extraction of socially relevant information from the appearance of the body. Using visual tasks, the data show that perceptual representations of the human body are sensitive to socially relevant information (e.g. sex, weight, emotional expression). Specifically, in the first empirical chapter I dissect the perceptual representations of body sex. Using a visual search paradigm, I demonstrate a differential and asymmetrical representation of sex from human body shape. In the second empirical chapter, using the Garner selective attention task, I show that the dimension of body sex is independent from the information conveyed by emotional body postures. Finally, in the third empirical chapter, I provide evidence that category-selective visual brain regions, including the body-selective region EBA, are directly involved in forming perceptual expectations towards incoming visual stimuli. Socially relevant information of the body might shape visual representations of the body by acting as a set of expectancies available to the observer during perceptual operations. In the general discussion I address how the findings of the empirical chapters inform us about the perceptual encoding of human body shape. Further, I propose how these results provide the initial steps for a unified social vision model of human body perception. Finally, I advance the hypothesis that rapid social categorisation during perception is explained by mechanisms that generally affect the perceptual analysis of objects under naturalistic conditions (e.g. expectations, expertise) operating within the social domain.
    Bangor University, 17 February 2020. Promotor: Downing, P.E. Co-promotor: Koldewyn, K.