    No gender differences in egocentric and allocentric environmental transformation after compensating for male advantage by manipulating familiarity

    The present study has two aims: to investigate whether gender differences persist even when more time is given to acquire spatial information, and to assess the gender effect when the retrieval phase requires recalling the pathway from the same or a different reference perspective (egocentric or allocentric). Specifically, we analysed the performance of men and women while they learned a path from a map or by observing an experimenter in a real environment. We then asked them to reproduce the learned path using the same reference system (map learning vs. map retrieval, or real-environment learning vs. real-environment retrieval) or a different reference system (map learning vs. real-environment retrieval, or vice versa). The results showed that gender differences were not present in the retrieval phase when women had the necessary time to acquire spatial information. Moreover, using egocentric coordinates (in both the learning and retrieval phases) proved easier than the other conditions, whereas learning through allocentric coordinates and then retrieving the environmental information using egocentric coordinates proved the most difficult. The results showed that, by manipulating familiarity, gender differences disappeared or were attenuated in all conditions.

    Perception and action without 3D coordinate frames

    Neuroscientists commonly assume that the brain generates representations of a scene in various non-retinotopic 3D coordinate frames, for example in 'egocentric' and 'allocentric' frames. Although neurons in early visual cortex might be described as representing a scene in an eye-centred frame, using two dimensions of visual direction and one of binocular disparity, there is no convincing evidence of similarly organized cortical areas using non-retinotopic 3D coordinate frames, nor of any systematic transfer of information from one frame to another. We propose that perception and action in a 3D world could be achieved without generating ego- or allocentric 3D coordinate frames. Instead, we suggest that the fundamental operation the brain carries out is to compare a long state vector with a matrix of weights (essentially, a long look-up table) to choose an output (often, but not necessarily, a motor output). The processes involved in perception of a 3D scene and action within it depend, we suggest, on successive iterations of this basic operation. One advantage of this proposal is that it relies on computationally well-defined operations corresponding to well-established neural processes. We also argue that, from a philosophical perspective, it is at least as plausible as theories postulating 3D coordinate frames. Finally, we suggest a variety of experiments that would falsify our claim.
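
    To make the proposed basic operation concrete, here is a minimal numerical sketch. The dimensions, the random weight matrix W, and the feedback matrix F are illustrative assumptions, not part of the proposal; only the step of comparing a state vector against a weight matrix and choosing the best-matching output comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 512   # length of the "long state vector" (illustrative size)
N_OUTPUTS = 64    # number of candidate outputs, i.e. rows of the look-up table

# The "long look-up table": one weight row per candidate output (random here,
# purely for illustration; the paper does not specify how weights are set).
W = rng.normal(size=(N_OUTPUTS, STATE_DIM))

def choose_output(state):
    """The single basic operation: compare the state vector against every
    row of W and pick the best-matching output."""
    return int(np.argmax(W @ state))

# Successive iterations: fold each chosen output back into the state via a
# hypothetical feedback matrix F, standing in for the perception/action loop.
F = rng.normal(scale=0.1, size=(STATE_DIM, N_OUTPUTS))

state = rng.normal(size=STATE_DIM)
for step in range(5):
    out = choose_output(state)
    state = state + F[:, out]   # feedback from the chosen output
    print(f"iteration {step}: chose output {out}")
```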

    The fish in the creek is sentient, even if I can’t speak with it

    In this paper I argue that Velmans' reflexive model of perceptual consciousness is useful for understanding the first-person perspective and sentience in animals. I then offer a defense of the proposal that ray-finned bony fish have a first-person perspective and sentience. This defense has two prongs. The first prong is the presence of a substantial body of evidence that the neuroanatomy of the fish brain exhibits basic organizational principles associated with consciousness in mammals. These principles include a relationship between a second-order sensory relay, the preglomerular complex, and the fish pallium that bears a resemblance to the relationship between the mammalian thalamus and the neocortex; the existence of feedback/feedforward and reentrant circuitry in the pallium; and structural and functional differences among divisions of the fish pallium. The second prong is the existence of behaviors in fish that exhibit significant flexibility in the presence of environmental change and require relational learning among stimuli distributed in space, over time, or both. I conclude that, although they are instantiated differently, a first-person perspective and sentience are present in fish.

    Spatial memory for vertical locations

    Most studies on spatial memory refer to the horizontal plane, leaving an open question as to whether findings generalize to vertical spaces where gravity and the visual upright of our surrounding space are salient orientation cues. In three experiments, we examined which reference frame is used to organize memory for vertical locations: one based on the body vertical, the visual-room vertical, or the direction of gravity. Participants judged interobject spatial relationships learned from a vertical layout in a virtual room. During learning and testing, we varied the orientation of the participant’s body (upright vs. lying sideways) and the visually presented room relative to gravity (e.g., rotated by 90° along the frontal plane). Across all experiments, participants made quicker or more accurate judgments when the room was oriented in the same way as during learning with respect to their body, irrespective of their orientations relative to gravity. This suggests that participants employed an egocentric body-based reference frame for representing vertical object locations. Our study also revealed an effect of body–gravity alignment during testing. Participants recalled spatial relations more accurately when upright, regardless of the body and visual-room orientation during learning. This finding is consistent with a hypothesis of selection conflict between different reference frames. Overall, our results suggest that a body-based reference frame is preferred over salient allocentric reference frames in memory for vertical locations perceived from a single view. Further, memory of vertical space seems to be tuned to work best in the default upright body orientation.

    Doing the opposite to what another person is doing.

    The three studies presented here aim to contribute to a better understanding of the role of the coordinate systems of a person's body and of the environment in the spatial organization underlying the recognition and production of gestures. The paper introduces a new approach by investigating what people consider to be opposite gestures in addition to identical gestures. It also suggests a new point of view, setting the issue in the framework of egocentric versus allocentric spatial encoding rather than the anatomical versus non-anatomical matching usually adopted in the literature. The results showed that the role of the allocentric system as a key player was much more evident when participants were asked to “do the opposite” than when they imitated, which indicates that the two tasks really are different from each other. Response times were also quicker when people “did the opposite”, indicating that this is an immediate response and not the result of “reversing an imitation”. These findings suggest that the question of how the oppositional structure of space impacts the perception and performance of gestures has probably been underestimated in an area of research which traditionally focuses exclusively on imitation.

    How environment and self-motion combine in neural representations of space

    Estimates of location or orientation can be constructed solely from sensory information representing environmental cues. In unfamiliar or sensory-poor environments, these estimates can also be maintained and updated by integrating self-motion information. However, the accumulation of error dictates that updated representations of heading direction and location become progressively less reliable over time, and must be corrected by environmental sensory inputs when available. Anatomical, electrophysiological and behavioural evidence indicates that angular and translational path integration contributes to the firing of head direction cells and grid cells. We discuss how sensory inputs may be combined with self-motion information in the firing patterns of these cells. For head direction cells, direct projections from egocentric sensory representations of distal cues can help to correct cumulative errors. Grid cells may benefit from sensory inputs via boundary vector cells and place cells. However, the allocentric code of boundary vector cells and place cells requires consistent head-direction information in order to translate the sensory signal of egocentric boundary distance into allocentric boundary vector cell firing, suggesting that the different spatial representations found in and around the hippocampal formation are interdependent. We conclude that, rather than representing pure path integration, the firing of head direction cells and grid cells reflects the interface between self-motion and environmental sensory information. Together with place cells and boundary vector cells, they can support a coherent unitary representation of space based on both environmental sensory inputs and path integration signals.
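
    As an illustration of the drift-and-correction idea described here, the following toy path integrator (a minimal sketch; all parameters and the correction rule are assumptions for illustration, not a model from the paper) integrates noisy self-motion signals, so heading and position estimates drift, and periodically corrects heading using an environmental cue.

```python
import numpy as np

rng = np.random.default_rng(1)

DT = 0.1           # timestep in seconds (illustrative)
ANG_NOISE = 0.05   # std of angular-velocity noise per step (assumed)
LIN_NOISE = 0.02   # std of linear-velocity noise per step (assumed)
CUE_GAIN = 0.2     # how strongly a distal cue corrects heading (assumed)

true_heading, est_heading = 0.0, 0.0
true_pos, est_pos = np.zeros(2), np.zeros(2)

for step in range(200):
    ang_vel, speed = 0.3, 1.0   # self-motion commands
    # The true state evolves noiselessly; the estimate integrates noisy
    # copies of the same signals, so error accumulates over time (the
    # drift described in the abstract).
    true_heading += ang_vel * DT
    est_heading += (ang_vel + rng.normal(0, ANG_NOISE)) * DT
    true_pos += speed * DT * np.array([np.cos(true_heading),
                                       np.sin(true_heading)])
    est_pos += (speed + rng.normal(0, LIN_NOISE)) * DT * np.array(
        [np.cos(est_heading), np.sin(est_heading)])
    # Occasional environmental input: nudge the heading estimate toward
    # the true heading, standing in for a distal visual cue.
    if step % 50 == 49:
        err = np.angle(np.exp(1j * (true_heading - est_heading)))
        est_heading += CUE_GAIN * err

print("final heading error (rad):", true_heading - est_heading)
print("final position error:", np.linalg.norm(true_pos - est_pos))
```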

    Giving a helping hand: effects of joint attention on mental rotation of body parts

    Research on joint attention has addressed both the effects of gaze following and the ability to share representations. It is largely unknown, however, whether sharing attention also affects the perceptual processing of jointly attended objects. This study tested whether attending to stimuli with another person from opposite perspectives induces a tendency to adopt an allocentric rather than an egocentric reference frame. Pairs of participants performed a handedness task while individually or jointly attending to rotated hand stimuli from opposite sides. Results revealed a significant flattening of the performance rotation curve when participants attended jointly (experiment 1). The effect of joint attention was robust to manipulations of social interaction (cooperation versus competition, experiment 2), but was modulated by the extent to which an allocentric reference frame was primed (experiment 3). Thus, attending to objects together from opposite perspectives makes people adopt an allocentric rather than the default egocentric reference frame.
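
    Here, "flattening of the performance rotation curve" refers to a reduced slope of response time as a function of stimulus rotation angle. The toy sketch below, with fabricated numbers that are clearly hypothetical and not the study's data, shows what comparing those slopes looks like.

```python
import numpy as np

rng = np.random.default_rng(2)

angles = np.array([0, 45, 90, 135, 180])   # hand-stimulus rotations (deg)

# Hypothetical per-angle reaction times (ms): a steep slope when attending
# alone (egocentric frame) and a flatter one when attending jointly.
rt_alone = 600 + 3.0 * angles + rng.normal(0, 20, angles.size)
rt_joint = 650 + 0.8 * angles + rng.normal(0, 20, angles.size)

slope_alone = np.polyfit(angles, rt_alone, 1)[0]
slope_joint = np.polyfit(angles, rt_joint, 1)[0]

print(f"slope alone: {slope_alone:.2f} ms/deg")   # larger: RT grows with angle
print(f"slope joint: {slope_joint:.2f} ms/deg")   # smaller: flattened curve
```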