    Is social categorization spatially organized in a “Mental Line”? Empirical evidence for spatial bias in intergroup differentiation

    Social categorization is the differentiation between the self and others and between one’s own group and other groups; it is such a natural and spontaneous process that we are often unaware of it. How the brain organizes social categorization remains an unresolved issue. We present three experiments investigating the hypothesis that social categories are mentally ordered from left to right on an ingroup–outgroup continuum when membership is salient. To substantiate our hypothesis, we consider empirical evidence from two areas of psychology: research on differences in the processing of ingroups and outgroups, and research on the effects of spatial biases on the processing of quantitative information (e.g., time, numbers), which appears to be arranged from left to right on a small–large continuum, an effect known as the spatial–numerical association of response codes (SNARC). In Experiments 1 and 2 we tested the hypothesis that when membership of a social category is activated, people implicitly locate ingroup categories to the left of a mental line, whereas outgroup categories are located on the far right of the same mental line. This spatial organization persists even when stimuli are presented on one of the two sides of the screen and their (explicit) position is spatially incompatible with the implicit mental spatial organization of social categories (Experiment 3). Overall, the results indicate that ingroups and outgroups are processed differently. The results are discussed with respect to social categorization theory, the spatial agency bias, i.e., the effect observed in Western cultures whereby the agent of an action is mentally represented on the left and the recipient on the right, and the SNARC effect.

    Neural codes for one’s own position and direction in a real-world “vista” environment

    Humans, like other animals, rely on accurate knowledge of their own spatial position and facing direction to stay oriented in the surrounding space. Although previous neuroimaging studies demonstrated that scene-selective regions (the parahippocampal place area or PPA, the occipital place area or OPA, and the retrosplenial complex or RSC) and the hippocampus (HC) are implicated in coding position and facing direction within small-scale (room-sized) and large-scale navigational environments, little is known about how these regions represent these spatial quantities in a large open-field environment. Here, we used functional magnetic resonance imaging (fMRI) in humans to explore the neural codes of this navigationally relevant information while participants viewed images that varied in position and facing direction within a familiar, real-world circular square. We observed neural adaptation for repeated directions in the HC, even though no navigational task was required. Further, we found that the amount of knowledge of the environment interacts with PPA selectivity in encoding positions: individuals who needed more time to memorize positions in the square during a preliminary training task showed less neural attenuation in this scene-selective region. We also observed adaptation effects, which reflect the real distances between consecutive positions, in scene-selective regions but not in the HC. When examining the multi-voxel patterns of activity, we observed that scene-responsive regions and the HC encoded both types of spatial information, and that RSC classification accuracy for positions was higher in individuals scoring higher on a self-report questionnaire of spatial abilities.
    Our findings provide new insight into how the human brain represents a real, large-scale “vista” space, demonstrating the presence of neural codes for position and direction in both scene-selective and hippocampal regions, and revealing the existence, in the former regions, of a map-like spatial representation reflecting real-world distances between consecutive positions.

    Speaker Normalization Using Cortical Strip Maps: A Neural Model for Steady-State Vowel Categorization

    Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. The transformation from speaker-dependent to speaker-independent language representations enables speech to be learned and understood from different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)

    More than skin deep: body representation beyond primary somatosensory cortex

    The neural circuits underlying initial sensory processing of somatic information are relatively well understood. In contrast, the processes that go beyond primary somatosensation to create more abstract representations related to the body are less clear. In this review, we focus on two classes of higher-order processing beyond somatosensation. Somatoperception refers to the process of perceiving the body itself, and particularly of ensuring somatic perceptual constancy. We review three key elements of somatoperception: (a) remapping information from the body surface into an egocentric reference frame, (b) exteroceptive perception of objects in the external world through their contact with the body, and (c) interoceptive percepts about the nature and state of the body itself. Somatorepresentation, in contrast, refers to the essentially cognitive process of constructing semantic knowledge and attitudes about the body, including: (d) lexical-semantic knowledge about bodies generally and one’s own body specifically, (e) configural knowledge about the structure of bodies, (f) emotions and attitudes directed towards one’s own body, and (g) the link between the physical body and the psychological self. We review a wide range of neuropsychological, neuroimaging and neurophysiological data to explore the dissociation between these different aspects of higher somatosensory function.

    Linking Attention to Learning, Expectation, Competition, and Consciousness

    The concept of attention has been used in many senses, often without clarifying how or why attention works as it does. Attention, like consciousness, is often described in a disembodied way. The present article summarizes neural models and supportive data showing how attention is linked to processes of learning, expectation, competition, and consciousness. A key theme is that attention modulates cortical self-organization and stability. Perceptual and cognitive neocortex is organized into six main cell layers, with characteristic sub-laminae. Attention is part of a unified design of bottom-up, horizontal, and top-down interactions among identified cells in laminar cortical circuits. Neural models clarify how attention may be allocated during processes of visual perception, learning, and search; auditory streaming and speech perception; movement target selection during sensory-motor control; mental imagery and fantasy; and hallucination during mental disorders, among other processes.
    Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)

    The Complementary Brain: A Unifying View of Brain Specialization and Modularity

    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-I-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-I-0657)

    The Complementary Brain: From Brain Dynamics To Conscious Experiences

    How do our brains so effectively achieve adaptive behavior in a changing world? Evidence is reviewed that brains are organized into parallel processing streams with complementary properties. Hierarchical interactions within each stream and parallel interactions between streams create coherent behavioral representations that overcome the complementary deficiencies of each stream and support unitary conscious experiences. This perspective suggests how brain design reflects the organization of the physical world with which brains interact, and offers an alternative to the computer metaphor which suggests that brains are organized into independent modules. Examples from perception, learning, cognition, and action are described, and theoretical concepts and mechanisms by which complementarity is accomplished are summarized.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (ITI-97-20333); Office of Naval Research (N00014-95-1-0657)

    Towards a Unified Theory of Neocortex: Laminar Cortical Circuits for Vision and Cognition

    A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
    These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)