
    A neural surveyor to map touch on the body

    Perhaps the most recognizable sensory map in all of neuroscience is the somatosensory homunculus. Although it seems straightforward, this simple representation belies the complex link between an activation in a somatotopic map and the associated touch location on the body. Any isolated activation is spatially ambiguous without a neural decoder that can read its position within the entire map, but how this is computed by neural networks is unknown. We propose that the somatosensory system implements multilateration, a common computation used by surveying and global positioning systems to localize objects. Specifically, to decode touch location on the body, multilateration estimates the relative distance between the afferent input and the boundaries of a body part (e.g., the joints of a limb). We show that a simple feedforward neural network, which captures several fundamental receptive field properties of cortical somatosensory neurons, can implement a Bayes-optimal multilateral computation. Simulations demonstrated that this decoder produced a pattern of localization variability between two boundaries that was unique to multilateration. Finally, we identify this computational signature of multilateration in actual psychophysical experiments, suggesting that it is a candidate computational mechanism underlying tactile localization.
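
    The multilateration account described above can be illustrated with a short simulation. The following is a minimal sketch, not the authors' implementation: it assumes that the distance of a touch from each limb boundary (e.g., the elbow and the wrist) is estimated with Weber-like noise that grows with distance, and that the two estimates are fused by inverse-variance weighting, the Bayes-optimal combination for independent Gaussian cues. The limb length, Weber fraction, and the localize helper are illustrative assumptions. Under these assumptions, localization variability peaks midway between the two boundaries and falls toward either joint, the signature referred to in the abstract.

    import numpy as np

    # Minimal multilateration sketch (illustrative assumptions, not the paper's model).
    # A touch at position x on a limb of length L is localized from two noisy distance
    # cues: d1 from the proximal boundary (elbow) and d2 from the distal boundary (wrist).
    L = 30.0          # assumed limb length in cm
    WEBER = 0.15      # assumed Weber fraction governing distance noise
    N_TRIALS = 10_000
    rng = np.random.default_rng(0)

    def localize(x_true):
        """Fuse two noisy distance estimates of a touch at x_true cm from the elbow."""
        sd1 = WEBER * x_true + 1e-6          # noise of the elbow-based estimate
        sd2 = WEBER * (L - x_true) + 1e-6    # noise of the wrist-based estimate
        d1 = rng.normal(x_true, sd1, N_TRIALS)      # noisy distance from the elbow
        d2 = rng.normal(L - x_true, sd2, N_TRIALS)  # noisy distance from the wrist
        w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2         # inverse-variance weights
        return (w1 * d1 + w2 * (L - d2)) / (w1 + w2)

    for x in np.linspace(2.5, 27.5, 11):
        print(f"touch at {x:5.1f} cm -> estimate SD {localize(x).std():.2f} cm")
    # The printed SDs rise toward the middle of the limb and drop near either joint.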

    The Role of Motor Learning in Spatial Adaptation near a Tool

    Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap with, and extend moderately beyond, the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active-training group received both motor and visual experience with the tool; the passive-training group received visual experience with the tool but no motor experience; and a no-training control group received neither visual nor motor experience with the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active-training group detected targets more quickly when the tool was placed near, rather than far from, the target display. This effect of tool location was not present for either the passive-training or the control group. These results suggest that motor learning influences how visual space around the tool is represented.

    The effects of visual control and distance in modulating peripersonal spatial representation

    In the presence of vision, goal-directed motor acts can trigger spatial remapping, i.e., reference frame transformations that allow for better interaction with targets. However, it is still unclear how peripersonal space is encoded and remapped depending on the availability of visual feedback and on the target position within the individual’s reachable space, and which cerebral areas subserve such processes. Here, functional magnetic resonance imaging (fMRI) was used to examine neural activity while healthy young participants performed reach-to-grasp movements with and without visual feedback and at different distances of the target from the effector (near the hand, about 15 cm from the starting position, vs. far from the hand, about 30 cm from the starting position). Brain response in the superior parietal lobule bilaterally, in the right dorsal premotor cortex, and in the anterior part of the right inferior parietal lobule was significantly greater during visually guided grasping of targets located at the far distance compared to grasping of targets located near the hand. In the absence of visual feedback, the inferior parietal lobule exhibited greater activity during grasping of targets at the near compared to the far distance. Results suggest that, in the presence of visual feedback, a visuo-motor circuit integrates visuo-motor information when targets are located farther away. Conversely, in the absence of visual feedback, encoding of space may demand multisensory remapping processes, even in the case of more proximal targets.

    Impaired delayed but preserved immediate grasping in a neglect patient with parieto-occipital lesions

    Patients with optic ataxia, a deficit in visually guided action, paradoxically improve when pantomiming an action towards memorized stimuli. Visual form agnosic patient D.F. shows the exact opposite pattern of results: although she is able to grasp objects in real time, she loses grip scaling when grasping an object from memory. Here we explored the dissociation between immediate and delayed grasping in a patient (F.S.) who, after a parieto-occipital stroke, presented with severe left visual neglect, a loss of awareness of the contralesional side of space. Although F.S. had preserved grip scaling even in his neglected field, he was markedly impaired when asked to pretend to grasp a leftward object from memory. Critically, his deficit cannot simply be explained by the absence of continuous on-line visual feedback, as F.S. was also able to grasp leftward objects in real time when vision was removed. We suggest that regions surrounding the parieto-occipital sulcus, typically damaged in patients with optic ataxia but spared in F.S., are essential for real-time actions. On the other hand, our data indicate that regions in the ventral visual stream, damaged in D.F. but intact in F.S., appear to be necessary but not sufficient for memory-guided action.

    Simulation Modifies Prehension: Evidence for a Conjoined Representation of the Graspable Features of an Object and the Action of Grasping It

    Movement formulas, engrams, kinesthetic images, and internal models of the body in action are notions derived mostly from clinical observations of brain-damaged subjects. These notions also suggest that the prehensile geometry of an object is integrated in neural circuits and includes the object's graspable characteristics as well as its semantic properties. In order to determine whether there is a conjoined representation of the graspable characteristics of an object in relation to the actual grasping, it is necessary to separate the graspable (low-level) from the semantic (high-level) properties of the object. Right-handed subjects were asked to grasp and lift a smooth 300-g cylinder with one hand, before and after judging the level of difficulty of a “grasping for pouring” action involving a smaller cylinder and using the opposite hand. The results showed that simulated grasps with the right hand exerted a direct influence on actual motor acts with the left hand. These observations add to the evidence that there is a conjoined representation of the graspable characteristics of the object and the biomechanical constraints of the arm.

    Influence of Motor Planning on Distance Perception within the Peripersonal Space

    We examined whether movement costs, as defined by movement magnitude, have an impact on distance perception in near space. In Experiment 1, participants were given a numerical cue regarding the amplitude of a hand movement to be carried out. Before executing the movement, they judged the length of a visual distance. These visual distances were judged to be larger the larger the amplitude of the concurrently prepared hand movement was. In Experiment 2, in which numerical cues were merely memorized without concurrent movement planning, this general increase of judged distance with cue size was not observed. The results of these experiments indicate that visual perception of near space is specifically affected by the costs of planned hand movements.

    Fix Your Eyes in the Space You Could Reach: Neurons in the Macaque Medial Parietal Cortex Prefer Gaze Positions in Peripersonal Space

    Interacting in peripersonal space requires coordinated arm and eye movements to visual targets in depth. In primates, the medial posterior parietal cortex (PPC) represents a crucial node in the process of visual-to-motor signal transformations. The medial PPC area V6A is a key region engaged in the control of these processes because it jointly processes visual information, eye position, and arm-movement-related signals. However, to date, there is no evidence in the medial PPC of spatial encoding in three dimensions. Here, using single-neuron recordings in behaving macaques, we studied neural signals related to binocular eye position in a task that required the monkeys to perform saccades and fixate targets at different locations in peripersonal and extrapersonal space. A significant proportion of neurons were modulated by both gaze direction and depth, i.e., by the location of the foveated target in 3D space. The population activity of these neurons displayed a strong preference for peripersonal space both in a time interval around the saccade that preceded fixation and during fixation itself. This preference for targets within reaching distance during both target capturing and fixation suggests that binocular eye position signals are used functionally in V6A to support its role in reaching and grasping.

    The Remapping of Time by Active Tool-Use

    Multiple action-based representations of space are each based on the extent to which action is possible toward a specific sector of space, such as near/reachable and far/unreachable space. Studies on tool use have revealed that the boundaries between these representations are dynamic. Space is not only multidimensional and dynamic, but it is also known to interact with other dimensions of magnitude, such as time. However, whether time operates on similar action-driven multiple representations and whether it can be modulated by tool use remains unknown. To address these issues, healthy participants performed a time bisection task in two spatial positions (near and far space) before and after an active tool-use training, which consisted of performing goal-directed actions while holding a tool with the right hand (Experiment 1). Before training, the perceived duration of stimuli was influenced by their spatial position as defined by action. Hence, a dissociation emerged between near/reachable and far/unreachable space. Strikingly, this dissociation disappeared after the active tool-use training, since temporal stimuli were now perceived as nearer. The remapping was not found when passive tool training was performed (Experiment 2) or when the active tool training was performed with the participants’ left hand (Experiment 3). Moreover, no time remapping was observed following an equivalent active training with the hand alone, without a tool (Experiment 4). Taken together, our findings reveal that time processing is based on action-driven multiple representations. The dynamic nature of these representations is demonstrated by the remapping of time, which is both action- and effector-dependent.