
    Peripersonal space representation develops independently from visual experience

    Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g., shape, orientation) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease in reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects' reach, but within the peripersonal space of another agent. This suggests that the lack of visual experience does not significantly affect the development of either one's own or others' peripersonal space representation.

    The Remapping of Time by Active Tool-Use

    Multiple, action-based representations of space are each defined by the extent to which action is possible toward a specific sector of space, such as near/reachable and far/unreachable space. Studies on tool-use have revealed that the boundaries between these representations are dynamic. Space is not only multidimensional and dynamic; it is also known to interact with other dimensions of magnitude, such as time. However, whether time operates on similar action-driven multiple representations, and whether it can be modulated by tool-use, is as yet unknown. To address these issues, healthy participants performed a time bisection task in two spatial positions (near and far space) before and after an active tool-use training, which consisted of performing goal-directed actions while holding a tool with the right hand (Experiment 1). Before training, the perceived duration of stimuli was influenced by their action-defined spatial position. Hence, a dissociation emerged between near/reachable and far/unreachable space. Strikingly, this dissociation disappeared after the active tool-use training, since temporal stimuli were then perceived as nearer. The remapping was not found when a passive tool-training was executed (Experiment 2) or when the active tool-training was performed with participants' left hand (Experiment 3). Moreover, no time remapping was observed following an equivalent active hand-training without a tool (Experiment 4). Taken together, our findings reveal that time processing is based on action-driven multiple representations. The dynamic nature of these representations is demonstrated by the remapping of time, which is action- and effector-dependent.

    Unilateral neglect and perceptual parsing: a large-group study.

    Array-centred and subarray-centred neglect were disambiguated in a group of 116 patients with left neglect by means of a modified version of the Albert test, in which the central column of segments was deleted so as to create two separate sets of targets grouped by proximity. The results indicated that neglect was more frequent in array- than in subarray-centred coordinates and that, in a minority of cases, neglect co-occurred in both coordinate systems. The two types of neglect were functionally but not anatomically dissociated. Visual field defects were no more prevalent in one type of neglect than in the other. These data contribute further evidence to previous single-case and small-group studies by showing that neglect can occur in single or multiple reference frames simultaneously, in agreement with current neuropsychological, neurophysiological and computational concepts of space representation.