62 research outputs found

    Naturalistic stimuli reveal a dominant role for agentic action in visual representation

    Naturalistic, dynamic movies evoke strong, consistent, and information-rich patterns of activity over a broad expanse of cortex and engage multiple perceptual and cognitive systems in parallel. The use of naturalistic stimuli enables functional brain imaging research to explore cognitive domains that are poorly sampled in highly controlled experiments. These domains include perception and understanding of agentic action, which plays a larger role in visual representation than was appreciated from experiments using static, controlled stimuli.

    How familiarity warps representation in the face space

    Recognition of familiar faces, as compared to unfamiliar faces, is robust and resistant to marked image distortion or degradation. Here we tested the flexibility of familiar face recognition with a morphing paradigm in which the appearance of a personally familiar face was mixed with the appearance of a stranger (Experiment 1), and the appearance of one's own face was mixed with the appearance of a familiar face and the appearance of a stranger (Experiment 2). The aim of the two experiments was to assess how categorical boundaries for recognition of identity are affected by familiarity. We found a narrower categorical boundary for the identity of personally familiar faces when they were mixed with unfamiliar identities as compared to the control condition, in which the appearances of two unfamiliar faces were mixed. Our results suggest that familiarity warps the representational geometry of face space, amplifying perceptual distances for small changes in the appearance of familiar faces that are inconsistent with the structural features that define their identities.

    Significance statement: Familiar faces are recognized robustly despite image degradation and differences in lighting, head position, or distance. Here, we investigated the flexibility of familiar face recognition in two separate experiments using a morphing paradigm. Our data suggest that a familiar face occupies a sector of perceptual face space that is expanded relative to its extent based on differences in measured physical similarity. This expansion in representational space may be part of a more general mechanism that could explain how learning can facilitate processing of behaviorally relevant stimuli.
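    The categorical-boundary analysis described above can be illustrated as a psychometric-function fit over a morph continuum. The morph levels, response proportions, and parameter values below are simulated for illustration, not the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: probability of a 'familiar' response at
    morph level x, with categorical boundary x0 and slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical morph continuum (0 = stranger, 1 = familiar face) and
# a simulated observer whose boundary sits above the 0.5 midpoint.
morph_levels = np.linspace(0.0, 1.0, 11)
p_familiar = logistic(morph_levels, 0.62, 12.0)

# Recover boundary and slope from the response proportions.
params, _ = curve_fit(logistic, morph_levels, p_familiar, p0=[0.5, 5.0])
boundary, slope = params

# A boundary shifted toward the familiar endpoint means a narrower
# region of the continuum is accepted as the familiar identity.
print(f"categorical boundary at morph level {boundary:.2f}")
```

    In this framing, the paper's "narrower categorical boundary" corresponds to a fitted boundary displaced toward the familiar endpoint of the morph axis.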

    Shared neural codes for visual and semantic information about familiar faces in a common representational space

    Processes evoked by seeing a personally familiar face encompass recognition of visual appearance and activation of social and person knowledge. Whereas visual appearance is the same for all viewers, social and person knowledge may be more idiosyncratic. Using between-subject multivariate decoding of hyperaligned functional magnetic resonance imaging data, we investigated whether representations of personally familiar faces in different parts of the distributed neural system for face perception are shared across individuals who know the same people. We found that the identities of both personally familiar and merely visually familiar faces were decoded accurately across brains in the core system for visual processing, but only the identities of personally familiar faces could be decoded across brains in the extended system for processing nonvisual information associated with faces. Our results show that personal interactions with the same individuals lead to shared neural representations of both the seen and unseen features that distinguish their identities.
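    The between-subject decoding described here rests on hyperalignment, whose core step is an orthogonal (Procrustes) transformation of one subject's response patterns into another's space. A minimal sketch with simulated data; the dimensions and the two-subject setup are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def procrustes_map(source, target):
    """Orthogonal Procrustes transformation aligning one subject's
    response matrix (stimuli x features) to another's: the rotation R
    minimizing ||source @ R - target||_F with R orthogonal."""
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

rng = np.random.default_rng(0)
# Hypothetical data: shared stimulus responses in a common space,
# observed through each subject's own idiosyncratic feature basis.
common = rng.standard_normal((40, 10))            # 40 stimuli, 10 dims
basis_a = np.linalg.qr(rng.standard_normal((10, 10)))[0]
basis_b = np.linalg.qr(rng.standard_normal((10, 10)))[0]
subj_a, subj_b = common @ basis_a, common @ basis_b

# Align subject B into subject A's space using 30 shared training
# stimuli, then check generalization on the 10 held-out stimuli.
r = procrustes_map(subj_b[:30], subj_a[:30])
aligned_b = subj_b @ r
err = np.linalg.norm(aligned_b[30:] - subj_a[30:])
print(f"residual after alignment: {err:.3f}")
```

    Because the simulated subjects differ only by an orthogonal basis change, alignment recovers the shared geometry almost exactly; real fMRI data would of course leave a substantial residual.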

    Multiple Subject Barycentric Discriminant Analysis (MUSUBADA): How to Assign Scans to Categories without Using Spatial Normalization

    We present a new discriminant analysis (DA) method called Multiple Subject Barycentric Discriminant Analysis (MUSUBADA), suited for analyzing fMRI data because it handles datasets with multiple participants, each of whom provides a different number of variables (i.e., voxels) that are themselves grouped into regions of interest (ROIs). Like DA, MUSUBADA (1) assigns observations to predefined categories, (2) gives factorial maps displaying observations and categories, and (3) optimally assigns observations to categories. MUSUBADA handles cases with more variables than observations and can project portions of the data table (e.g., subtables, which can represent participants or ROIs) onto the factorial maps. Therefore MUSUBADA can analyze datasets with different numbers of voxels per participant and so does not require spatial normalization. MUSUBADA's statistical inferences are implemented with cross-validation techniques (e.g., jackknife and bootstrap); its performance is evaluated with confusion matrices (for fixed and random models) and represented with prediction, tolerance, and confidence intervals. We present an example in which we predict the categories (houses, shoes, chairs, and human, monkey, and dog faces) of images watched by participants whose brains were scanned. This example corresponds to a DA question in which the data table is made of subtables (one per subject) and has more variables than observations.
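    A minimal sketch of the barycentric classification rule and the jackknife evaluation the abstract mentions, on simulated scans. The category structure, sizes, and noise level are made up for illustration, and this omits MUSUBADA's factorial maps and subtable machinery:

```python
import numpy as np

def nearest_barycenter(train_x, train_y, test_x):
    """Assign each test observation to the category whose barycenter
    (mean pattern) is closest: the classification rule at the heart of
    barycentric discriminant analysis."""
    cats = np.unique(train_y)
    bary = np.stack([train_x[train_y == c].mean(axis=0) for c in cats])
    d = np.linalg.norm(test_x[:, None, :] - bary[None, :, :], axis=2)
    return cats[d.argmin(axis=1)]

rng = np.random.default_rng(1)
# Hypothetical scans: 3 categories x 20 observations x 50 voxels,
# each category offset from a shared baseline.
n_cat, n_obs, n_vox = 3, 20, 50
offsets = rng.standard_normal((n_cat, n_vox)) * 2.0
x = np.concatenate([offsets[c] + rng.standard_normal((n_obs, n_vox))
                    for c in range(n_cat)])
y = np.repeat(np.arange(n_cat), n_obs)

# Jackknife (leave-one-out) evaluation, as used for MUSUBADA's
# cross-validated inference.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    pred = nearest_barycenter(x[mask], y[mask], x[i:i + 1])
    correct += int(pred[0] == y[i])
acc = correct / len(y)
print(f"jackknifed accuracy: {acc:.2f}")
```

    Note that the nearest-barycenter rule needs no voxel correspondence across participants, which is why this family of methods can skip spatial normalization.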

    Processing of invisible social cues.

    Successful interactions between people depend on rapid recognition of social cues. We investigated whether head direction – a powerful social signal – is processed in the absence of conscious awareness. We used continuous flash interocular suppression to render stimuli invisible and compared the reaction time for face detection when faces were turned towards the viewer versus turned slightly away. We found that faces turned towards the viewer break through suppression faster than faces that are turned away, regardless of eye direction. Our results suggest that detection of a face with attention directed at the viewer occurs even in the absence of awareness of that face. While previous work has demonstrated that stimuli that signal threat are processed without awareness, our data suggest that the social relevance of a face, defined more broadly, is evaluated in the absence of awareness.

    Neural Responses to Naturalistic Clips of Behaving Animals Under Two Different Task Contexts

    The human brain rapidly deploys semantic information during perception to facilitate our interaction with the world. These semantic representations are encoded in the activity of distributed populations of neurons (Haxby et al., 2001; McClelland and Rogers, 2003; Kriegeskorte et al., 2008b) and command widespread cortical real estate (Binder et al., 2009; Huth et al., 2012). The neural representation of a stimulus can be described as a location (i.e., response vector) in a high-dimensional neural representational space (Kriegeskorte and Kievit, 2013; Haxby et al., 2014). This resonates with behavioral and theoretical work describing mental representations of objects and actions as being organized in a multidimensional psychological space (Attneave, 1950; Shepard, 1958, 1987; Edelman, 1998; Gärdenfors and Warglien, 2012). Current applications of this framework to neural representation (e.g., Kriegeskorte et al., 2008b) often implicitly assume that these neural representational spaces are relatively fixed and context-invariant. In contrast, earlier work emphasized the importance of attention and task demands in actively reshaping representational space (Shepard, 1964; Tversky, 1977; Nosofsky, 1986; Kruschke, 1992). A growing body of work in both electrophysiology (e.g., Sigala and Logothetis, 2002; Sigala, 2004; Cohen and Maunsell, 2009; Reynolds and Heeger, 2009) and human neuroimaging (e.g., Hon et al., 2009; Jehee et al., 2011; Brouwer and Heeger, 2013; Çukur et al., 2013; Sprague and Serences, 2013; Harel et al., 2014; Erez and Duncan, 2015; Nastase et al., 2017) has suggested mechanisms by which behavioral goals dynamically alter neural representation.
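    The "response vector in a high-dimensional representational space" framing, and the idea that task context reshapes that space, can be illustrated with a representational dissimilarity matrix (RDM). The response patterns and task-driven gain profile below are simulated assumptions, not data from the study:

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of stimulus response vectors. Each row of
    `responses` is one stimulus's location in neural response space."""
    return 1.0 - np.corrcoef(responses)

rng = np.random.default_rng(2)
# Hypothetical response patterns: 5 stimuli x 100 voxels.
base = rng.standard_normal((5, 100))

# A task context that reweights a subset of voxels changes the
# geometry of the representational space, even for identical stimuli.
gain = np.ones(100)
gain[:50] = 3.0
rdm_passive = rdm(base)
rdm_task = rdm(base * gain)

change = np.abs(rdm_task - rdm_passive).max()
print(f"largest pairwise-dissimilarity change: {change:.3f}")
```

    Comparing RDMs across task contexts in this way is one standard test of whether representational geometry is fixed or context-dependent.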

    The Neural Representation of Personally Familiar and Unfamiliar Faces in the Distributed System for Face Perception

    Personally familiar faces are processed more robustly and efficiently than unfamiliar faces. The human face processing system comprises a core system that analyzes the visual appearance of faces and an extended system for the retrieval of person-knowledge and other nonvisual information. We applied multivariate pattern analysis to fMRI data to investigate aspects of familiarity that are shared by all familiar identities and information that distinguishes specific face identities from each other. Both identity-independent familiarity information and face identity could be decoded in an overlapping set of areas in the core and extended systems. Representational similarity analysis revealed a clear distinction between the two systems and a subdivision of the core system into ventral, dorsal, and anterior components. This study provides evidence that activity in the extended system carries information about both individual identities and personal familiarity, while clarifying and extending the organization of the core system for face perception.

    Modeling Semantic Encoding in a Common Neural Representational Space

    Encoding models for mapping voxelwise semantic tuning are typically estimated separately for each individual, limiting their generalizability. In the current report, we develop a method for estimating semantic encoding models that generalize across individuals. Functional MRI was used to measure brain responses while participants freely viewed a naturalistic audiovisual movie. Word embeddings capturing agent-, action-, object-, and scene-related semantic content were assigned to each imaging volume based on an annotation of the film. We constructed both conventional within-subject semantic encoding models and between-subject models where the model was trained on a subset of participants and validated on a left-out participant. Between-subject models were trained using cortical surface-based anatomical normalization or surface-based whole-cortex hyperalignment. We used hyperalignment to project group data into an individual’s unique anatomical space via a common representational space, thus leveraging a larger volume of data for out-of-sample prediction while preserving the individual’s fine-grained functional–anatomical idiosyncrasies. Our findings demonstrate that anatomical normalization degrades the spatial specificity of between-subject encoding models relative to within-subject models. Hyperalignment, on the other hand, recovers the spatial specificity of semantic tuning lost during anatomical normalization, and yields model performance exceeding that of within-subject models.
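    A minimal sketch of the kind of voxelwise encoding model described here: closed-form ridge regression from semantic features to voxel responses, evaluated on held-out data. The dimensions, noise level, and regularization value are illustrative assumptions, and a real pipeline would cross-validate the penalty rather than fix it:

```python
import numpy as np

def ridge_fit(features, responses, alpha=10.0):
    """Closed-form ridge regression mapping stimulus features (e.g.,
    word embeddings per imaging volume) to voxel responses."""
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(n_feat),
                           features.T @ responses)

rng = np.random.default_rng(3)
# Hypothetical data: 300 imaging volumes, 20 semantic features,
# 100 voxels, with linear tuning plus noise.
n_t, n_feat, n_vox = 300, 20, 100
feats = rng.standard_normal((n_t, n_feat))
true_w = rng.standard_normal((n_feat, n_vox))
resp = feats @ true_w + 0.5 * rng.standard_normal((n_t, n_vox))

# Train on the first 200 volumes, validate on the held-out 100 --
# analogous to training on some participants and testing on another.
w = ridge_fit(feats[:200], resp[:200])
pred = feats[200:] @ w
r = np.array([np.corrcoef(pred[:, v], resp[200:, v])[0, 1]
              for v in range(n_vox)])
print(f"mean held-out prediction correlation: {r.mean():.2f}")
```

    In the between-subject variant, the held-out data would come from a left-out participant projected through hyperalignment into the common space rather than from held-out time points.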
