
    Shared neural codes for visual and semantic information about familiar faces in a common representational space

    Processes evoked by seeing a personally familiar face encompass recognition of visual appearance and activation of social and person knowledge. Whereas visual appearance is the same for all viewers, social and person knowledge may be more idiosyncratic. Using between-subject multivariate decoding of hyperaligned functional magnetic resonance imaging data, we investigated whether representations of personally familiar faces in different parts of the distributed neural system for face perception are shared across individuals who know the same people. We found that the identities of both personally familiar and merely visually familiar faces were decoded accurately across brains in the core system for visual processing, but only the identities of personally familiar faces could be decoded across brains in the extended system for processing nonvisual information associated with faces. Our results show that personal interactions with the same individuals lead to shared neural representations of both the seen and unseen features that distinguish their identities.
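
    The between-subject decoding described above can be illustrated with a minimal sketch: each subject's response patterns are rotated into a common space (here with an orthogonal Procrustes transform as a simplified stand-in for full hyperalignment) and a classifier trained on the remaining subjects is tested on a left-out subject. All array names, sizes, and data below are hypothetical and do not reproduce the authors' pipeline.

```python
# Minimal sketch of between-subject identity decoding in a common space.
# Assumes one response matrix per subject of shape (n_stimuli, n_voxels) for
# a shared stimulus set; Procrustes rotation is a simplified stand-in for
# full hyperalignment.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_stimuli, n_voxels = 4, 40, 100             # hypothetical sizes
data = [rng.standard_normal((n_stimuli, n_voxels)) for _ in range(n_subjects)]
labels = np.arange(n_stimuli) % 4                         # hypothetical face identities

reference = data[0]                                       # align everyone to subject 0
aligned = []
for X in data:
    R, _ = orthogonal_procrustes(X, reference)            # rotation into the common space
    aligned.append(X @ R)

# Leave-one-subject-out classification of identity in the common space.
accuracies = []
for test in range(n_subjects):
    train_X = np.vstack([aligned[s] for s in range(n_subjects) if s != test])
    train_y = np.tile(labels, n_subjects - 1)
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    accuracies.append(clf.score(aligned[test], labels))
print("mean between-subject accuracy:", np.mean(accuracies))
```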

    Decoding Neural Representational Spaces Using Multivariate Pattern Analysis

    A major challenge for systems neuroscience is to break the neural code. Computational algorithms for encoding information into neural activity and extracting information from measured activity afford understanding of how percepts, memories, thought, and knowledge are represented in patterns of brain activity. The past decade and a half has seen significant advances in the development of methods for decoding human neural activity, such as multivariate pattern classification, representational similarity analysis, hyperalignment, and stimulus-model-based encoding and decoding. This article reviews these advances and integrates neural decoding methods into a common framework organized around the concept of high-dimensional representational spaces.
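
    As one concrete instance of the representational-space framing, the sketch below builds a representational dissimilarity matrix (RDM) for each of two simulated response datasets and compares their geometries with a rank correlation, the core step of representational similarity analysis. The data, region names, and sizes are illustrative, not taken from the article.

```python
# Minimal representational similarity analysis (RSA) sketch: build an RDM per
# simulated "region" from condition-by-feature response patterns, then
# correlate the condensed dissimilarity vectors. Data are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_features = 12, 200
region_a = rng.standard_normal((n_conditions, n_features))
region_b = region_a + 0.5 * rng.standard_normal((n_conditions, n_features))

rdm_a = pdist(region_a, metric="correlation")   # condensed pairwise dissimilarities
rdm_b = pdist(region_b, metric="correlation")

rho, p = spearmanr(rdm_a, rdm_b)                # second-order (geometry) similarity
print(f"RDM similarity: rho={rho:.2f}, p={p:.3g}")
```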

    Function-based Intersubject Alignment of Human Cortical Anatomy

    Drawing conclusions about the functional neuroanatomical organization of the human brain requires methods for relating the functional anatomy of an individual's brain to population variability. We have developed a method for aligning the functional neuroanatomy of individual brains based on the patterns of neural activity that are elicited by viewing a movie. Instead of basing alignment on functionally defined areas, whose location is defined as the center of mass or the local maximum response, the alignment is based on patterns of response as they are distributed spatially both within and across cortical areas. The method is implemented in the two-dimensional manifold of an inflated, spherical cortical surface. The method, although developed using movie data, generalizes successfully to data obtained with another cognitive activation paradigm—viewing static images of objects and faces—and improves group statistics in that experiment as measured by a standard general linear model (GLM) analysis.
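
    The group statistics mentioned above are described as coming from a standard general linear model (GLM) analysis; as a reminder of what that evaluation step involves, here is a minimal single-voxel GLM fit by ordinary least squares against a hypothetical two-regressor design. The regressors, signals, and contrast are simulated placeholders, not data or design details from the study.

```python
# Minimal single-voxel GLM sketch: fit a design matrix to a time course with
# ordinary least squares and form a t-statistic for one contrast. All signals
# are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints = 200
faces = (np.arange(n_timepoints) % 20 < 10).astype(float)    # hypothetical boxcar regressor
objects = (np.arange(n_timepoints) % 30 < 10).astype(float)  # second hypothetical regressor
X = np.column_stack([faces, objects, np.ones(n_timepoints)]) # design matrix with intercept
y = 1.5 * faces + 0.3 * objects + rng.standard_normal(n_timepoints)

beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)            # OLS estimates
residuals = y - X @ beta
dof = n_timepoints - np.linalg.matrix_rank(X)
sigma2 = residuals @ residuals / dof
contrast = np.array([1.0, -1.0, 0.0])                        # faces > objects
se = np.sqrt(sigma2 * contrast @ np.linalg.pinv(X.T @ X) @ contrast)
t_stat = contrast @ beta / se
print(f"faces > objects: t({dof}) = {t_stat:.2f}")
```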

    Multiple Subject Barycentric Discriminant Analysis (MUSUBADA): How to Assign Scans to Categories without Using Spatial Normalization

    We present a new discriminant analysis (DA) method called Multiple Subject Barycentric Discriminant Analysis (MUSUBADA) suited for analyzing fMRI data because it handles datasets with multiple participants, each of whom provides a different number of variables (i.e., voxels) that are themselves grouped into regions of interest (ROIs). Like DA, MUSUBADA (1) assigns observations to predefined categories, (2) gives factorial maps displaying observations and categories, and (3) optimally assigns observations to categories. MUSUBADA handles cases with more variables than observations and can project portions of the data table (e.g., subtables, which can represent participants or ROIs) onto the factorial maps. Therefore MUSUBADA can analyze datasets with different numbers of voxels per participant and so does not require spatial normalization. MUSUBADA statistical inferences are implemented with cross-validation techniques (e.g., jackknife and bootstrap); its performance is evaluated with confusion matrices (for fixed and random models) and represented with prediction, tolerance, and confidence intervals. We present an example in which we predict the categories (houses, shoes, chairs, and human, monkey, and dog faces) of images watched by participants whose brains were scanned. This example corresponds to a DA question in which the data table is made of subtables (one per subject) and has more variables than observations.
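
    The assignment step at the heart of a barycentric discriminant analysis can be caricatured as nearest-barycenter classification: each observation is assigned to the category whose barycenter (mean pattern) it lies closest to. The sketch below runs that step with leave-one-out cross-validation and a confusion matrix, omitting the per-participant subtables, factor decomposition, and factorial maps of the full MUSUBADA method; categories, sizes, and data are hypothetical.

```python
# Simplified nearest-barycenter classifier with leave-one-out cross-validation,
# as a caricature of the assignment step in barycentric discriminant analysis.
# It omits per-subject subtables and factorial maps; data are simulated.
import numpy as np

rng = np.random.default_rng(3)
categories = ["house", "shoe", "chair", "face"]
n_per_cat, n_features = 10, 50
X = np.vstack([rng.standard_normal((n_per_cat, n_features)) + 2.0 * c
               for c in range(len(categories))])
y = np.repeat(np.arange(len(categories)), n_per_cat)

confusion = np.zeros((len(categories), len(categories)), dtype=int)
for i in range(len(y)):                                  # leave one observation out
    mask = np.arange(len(y)) != i
    barycenters = np.array([X[mask & (y == c)].mean(axis=0)
                            for c in range(len(categories))])
    predicted = np.argmin(np.linalg.norm(barycenters - X[i], axis=1))
    confusion[y[i], predicted] += 1

accuracy = np.trace(confusion) / confusion.sum()
print(confusion)
print(f"leave-one-out accuracy: {accuracy:.2f}")
```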

    Differential activation of frontoparietal attention networks by social and symbolic spatial cues

    Perception of both gaze direction and symbolic directional cues (e.g., arrows) orients an observer’s attention toward the indicated location. It is unclear, however, whether these similar behavioral effects are examples of the same attentional phenomenon and, therefore, subserved by the same neural substrate. It has been proposed that gaze, given its evolutionary significance, constitutes a ‘special’ category of spatial cue. As such, it is predicted that the neural systems supporting spatial reorienting will be different for gaze than for non-biological symbols. We tested this prediction using functional magnetic resonance imaging to measure the brain’s response during target localization in which laterally presented targets were preceded by uninformative gaze or arrow cues. Reaction times were faster during valid than invalid trials for both arrow and gaze cues. However, differential patterns of activity were evoked in the brain. Trials including invalid rather than valid arrow cues resulted in a stronger hemodynamic response in the ventral attention network. No such difference was seen during trials including valid and invalid gaze cues. This differential engagement of the ventral reorienting network is consistent with the notion that the facilitation of target detection by gaze cues and arrow cues is subserved by different neural substrates.

    Beliefs about the Minds of Others Influence How We Process Sensory Information

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy, in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing.

    Social presence and dishonesty in retail

    Self-service checkouts (SCOs) in retail can benefit consumers and retailers, providing control and autonomy to shoppers independent from staff, together with reduced queuing times. Recent research indicates that the absence of staff may provide the opportunity for consumers to behave dishonestly, consistent with a perceived lack of social presence. This study examined whether a social presence in the form of various instantiations of embodied, visual, humanlike SCO interface agents had an effect on opportunistic behaviour. Using a simulated SCO scenario, participants experienced various dilemmas in which they could financially benefit themselves undeservedly. We hypothesised that a humanlike social presence integrated within the checkout screen would receive more attention and result in fewer instances of dishonesty compared to a less humanlike agent. This was partially supported by the results. The findings contribute to the theoretical framework in social presence research. We concluded that companies adopting self-service technology may consider the implementation of social presence in technology applications to support ethical consumer behaviour, but that more research is required to explore the mixed findings in the current study.

    An analysis of the time course of attention in preview search.

    We used a probe dot procedure to examine the time course of attention in preview search (Watson and Humphreys, 1997). Participants searched for an outline red vertical bar among other new red horizontal bars and old green vertical bars, superimposed on a blue background grid. Following the reaction time response for search, the participants had to decide whether a probe dot had briefly been presented. Previews appeared for 1,000 msec and were immediately followed by search displays. In Experiment 1, we demonstrated a standard preview benefit relative to a conjunction search baseline. In Experiment 2, search was combined with the probe task. Probes were more difficult to detect when they were presented 1,200 msec, relative to 800 msec, after the preview, but at both intervals detection of probes at the locations of old distractors was harder than detection at the locations of new distractors or at neutral locations. Experiment 3A demonstrated that there was no difference in the detection of probes at old, neutral, and new locations when probe detection was the primary task, and there was also no difference when all of the shapes appeared simultaneously in conjunction search (Experiment 3B). In a final experiment (Experiment 4), we demonstrated that detection on old items was facilitated (relative to probes at neutral locations and at the locations of new distractors) when the probes appeared 200 msec after previews, whereas detection on old items was worse when the probes followed 800 msec after previews. We discuss the results in terms of visual marking and attention capture processes in visual search.

    Face Inversion Reduces the Persistence of Global Form and Its Neural Correlates

    Face inversion produces a detrimental effect on face recognition. The extent to which the inversion of faces and other kinds of objects influences the perceptual binding of visual information into global forms is not known. We used a behavioral method and functional MRI (fMRI) to measure the effect of face inversion on visual persistence, a type of perceptual memory that reflects sustained awareness of global form. We found that upright faces persisted longer than inverted versions of the same images; we observed a similar effect of inversion on the persistence of animal stimuli. This effect of inversion on persistence was evident in sustained fMRI activity throughout the ventral visual hierarchy, including the lateral occipital area (LO), two face-selective visual areas—the fusiform face area (FFA) and the occipital face area (OFA)—and several early visual areas. V1 showed the same initial fMRI activation to upright and inverted forms, but this activation lasted longer for upright stimuli. The inversion effect on persistence-related fMRI activity in V1 and other retinotopic visual areas demonstrates that higher-tier visual areas influence early visual processing via feedback. This feedback effect on figure-ground processing is sensitive to the orientation of the figure.