    The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics

    Vision-based grasping is being widely studied with various techniques and with different goals, both in humans and in other primates. This paper reviews the fundamental related findings, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, with a comprehensive but accessible view of the subject. A detailed description of the principal sensorimotor processes, and of the brain areas involved in them, is given from a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

    The cognitive neuroscience of prehension: recent developments

    Prehension, the capacity to reach and grasp, is the key behavior that allows humans to change their environment, and it continues to serve as a remarkable experimental test case for probing the cognitive architecture of goal-oriented action. This review focuses on recent experimental evidence that enhances or modifies how we might conceptualize the neural substrates of prehension. Emphasis is placed on studies of how precision grasps are selected and transformed into motor commands. The mechanisms that extract action-relevant information from vision and touch are then considered, including how parallel perceptual networks within parietal cortex, along with the ventral stream, are connected and share information to achieve common motor goals. On-line control of grasping action is discussed within a state-estimation framework. The review ends by considering how prehension fits within larger action repertoires that solve more complex goals, and the possible cortical architectures needed to organize these actions.

    Neuroimaging of amblyopia and binocular vision: a review

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Initially regarded as a monocular disorder, amblyopia is now often seen as a primarily binocular disorder, prompting a growing number of studies of the binocular deficits in patients. The neural mechanisms of amblyopia are not completely understood, even though they have been investigated with electrophysiological recordings in animal models and, more recently, with neuroimaging techniques in humans. In this review, we summarize current knowledge about the brain regions that underlie the visual deficits associated with amblyopia, with a focus on binocular vision as studied with functional magnetic resonance imaging. Early studies focused on abnormal responses in the primary and secondary visual areas, whereas recent evidence shows that there are also deficits at higher levels of the visual pathways, within the parieto-occipital and temporal cortices. These higher-level areas are part of the cortical network involved in 3D vision from binocular cues, so reduced responses in these areas could be related to the impaired binocular vision of amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help characterize the brain response changes associated with these treatments and help devise them.

    Parietal maps of visual signals for bodily action planning

    The posterior parietal cortex (PPC) has long been understood as a high-level integrative station that computes motor commands for the body based on sensory (i.e., mostly tactile and visual) input from the outside world. In the last decade, accumulating evidence has shown that the parietal areas not only extract the pragmatic features of manipulable objects, but also subserve sensorimotor processing of others’ actions. A paradigmatic case is the anterior intraparietal area (AIP), which encodes the identity of observed manipulative actions that afford potential motor actions the observer could perform in response to them. On this basis, we propose an AIP manipulative action-based template of the general planning functions of the PPC and review existing evidence supporting the extension of this model to other PPC regions and to a wider set of actions: defensive and locomotor actions. In our model, a hallmark of PPC functioning is the processing of information about the physical and social world to encode potential bodily actions appropriate for the current context. We further extend the model to actions performed with man-made objects (e.g., tools) and artifacts, because these become integral parts of the subject’s body schema and motor repertoire. Finally, we conclude that existing evidence supports a generally conserved neural circuitry that transforms integrated sensory signals into the variety of bodily actions that primates are capable of preparing and performing to interact with their physical and social world.

    The Causal Role of Three Frontal Cortical Areas in Grasping

    Efficient object grasping requires the continuous control of arm and hand movements based on visual information. Previous studies have identified a network of parietal and frontal areas that is crucial for the visual control of prehension movements. Electrical microstimulation of 3D shape-selective clusters in AIP during functional magnetic resonance imaging activates areas F5a and 45B, suggesting that these frontal areas may be important downstream stages for object processing during grasping, but the role of areas F5a and 45B in grasping is unknown. To assess their causal role in the frontal grasping network, we reversibly inactivated 45B, F5a, and F5p during visually guided grasping in macaque monkeys. First, we recorded single-neuron activity in 45B, F5a, and F5p to identify sites with object responses during grasping. Then, we injected muscimol or saline to measure the grasping deficit induced by the temporary disruption of each of these three nodes in the grasping network. Inactivation of each of the three areas resulted in a significant increase in grasping time in both animals, with the strongest effect observed in area F5p. These results not only confirm a clear involvement of F5p, but also indicate causal contributions of areas F5a and 45B to visually guided object grasping.

    Three-dimensional eye position signals shape both peripersonal space and arm movement activity in the medial posterior parietal cortex

    Research conducted over the last decades has established that the medial part of the posterior parietal cortex (PPC) is crucial for controlling visually guided actions in human and non-human primates. Within this cortical sector lies area V6A, a crucial node of the parietofrontal network involved in arm movement control in both monkeys and humans. However, the encoding of action in depth by V6A cells had not been studied until recently. Recent neurophysiological studies show the existence in V6A neurons of signals related to the distance of targets from the eyes. These signals are integrated, often at the level of single cells, with information about the direction of gaze, thus encoding spatial location in 3D space. Moreover, 3D eye position signals seem to be further exploited at two additional levels of neural processing: (a) in determining whether or not targets are located in the peripersonal space, and (b) in shaping the spatial tuning of arm movement-related activity toward reachable targets. These findings are in line with studies of putative homolog regions in humans and together point to a role of the medial PPC in encoding both the vergence angle of the eyes and peripersonal space. Beyond this role in spatial encoding, including encoding in depth, several findings also demonstrate the involvement of this cortical sector in non-spatial processes.

    The Extraction of Depth Structure from Shading and Texture in the Macaque Brain

    We used contrast-agent-enhanced functional magnetic resonance imaging (fMRI) in the alert monkey to map the cortical regions involved in the extraction of 3D shape from the monocular static cues of texture and shading. As in the parallel human imaging study [1], we contrasted the 3D condition with several 2D control conditions. The extraction of 3D shape from texture (3D SfT) involves both ventral and parietal regions, in addition to early visual areas. The strongest activation was observed in CIP, with decreasing strength toward the anterior part of the intraparietal sulcus (IPS). In the ventral stream, 3D SfT sensitivity was observed in a ventral portion of TEO. The extraction of 3D shape from shading (3D SfS) involved predominantly ventral regions, such as V4 and a dorsal portion of TEO. These results are similar to those obtained earlier in human subjects and indicate that, in both species, the extraction of 3D shape from texture is performed in both ventral and dorsal regions, as is the case for the motion and disparity cues, whereas shading is processed mainly in the ventral stream.