58 research outputs found

    Neural correlates of grasping

    Prehension, the capacity to reach and grasp objects, comprises two main components: reaching, i.e., moving the hand towards an object, and grasping, i.e., shaping the hand with respect to the object's properties. Knowledge of this topic has advanced considerably in recent years, dramatically changing our view of how prehension is represented within the dorsal stream. While our understanding of the various nodes coding the grasp component is rapidly progressing, little is known about the integration between grasping and reaching. With this Mini Review we aim to provide an up-to-date overview of recent developments in the coding of prehension. We will start with a description of the regions coding various aspects of grasping in humans and monkeys, delineating where grasping might be integrated with reaching. To gain insights into the causal role of these nodes in the coding of prehension, we will link this functional description to lesion studies. Finally, we will discuss future directions that appear promising for unveiling new insights into the coding of prehension movements.

    Decoding Movement Goals from the Fronto-Parietal Reach Network

    During reach planning, fronto-parietal brain areas need to transform sensory information into a motor code. It is debated whether these areas maintain a sensory representation of the visual cue or a motor representation of the upcoming movement goal. Here, we present results from a delayed pro-/anti-reach task which allowed for dissociating the position of the visual cue from the reach goal. In this task, the visual cue was combined with a context rule (pro vs. anti) to infer the movement goal. Different levels of movement goal specification during the delay were obtained by presenting the context rule either before the delay together with the visual cue (specified movement goal) or after the delay (underspecified movement goal). By applying functional magnetic resonance imaging (fMRI) multivoxel pattern analysis (MVPA), we demonstrate movement goal encoding in the left dorsal premotor cortex (PMd) and bilateral superior parietal lobule (SPL) when the reach goal is specified. This suggests that fronto-parietal reach regions (PRRs) maintain a prospective motor code during reach planning. When the reach goal is underspecified, only area PMd, but not SPL, represents the visual cue position, indicating an incomplete state of sensorimotor integration. Moreover, this result suggests a potential role of PMd in movement goal selection.
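
    To make the decoding approach concrete, the sketch below shows a minimal, hypothetical ROI-based MVPA analysis of the kind described above: trial-wise delay-period activity patterns from a region of interest are classified according to the movement goal and evaluated with leave-one-run-out cross-validation. The use of scikit-learn, the linear SVM, and all variable names are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of ROI-based MVPA for movement-goal decoding.
# Assumes delay-period activity estimates have already been extracted per trial
# from an ROI (e.g., left PMd or SPL); the classifier and the leave-one-run-out
# cross-validation scheme are illustrative, not the study's exact choices.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def decode_goal(patterns, goal_labels, run_ids):
    """patterns: (n_trials, n_voxels) delay-period activity patterns.
    goal_labels: movement goal (e.g., left vs. right) per trial.
    run_ids: run index per trial, used for leave-one-run-out CV."""
    clf = LinearSVC(C=1.0)
    cv = LeaveOneGroupOut()
    scores = cross_val_score(clf, patterns, goal_labels,
                             groups=run_ids, cv=cv)
    return scores.mean()  # compare against chance level (0.5 for two goals)

# Example with simulated data: 80 trials, 200 voxels, 8 runs.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))
y = np.repeat([0, 1], 40)
runs = np.tile(np.arange(8), 10)
print(f"Mean decoding accuracy: {decode_goal(X, y, runs):.2f}")
```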

    Visual imagery during real-time fMRI neurofeedback from occipital and superior parietal cortex

    Visual imagery has been suggested to recruit occipital cortex via feedback projections from fronto-parietal regions, suggesting that these feedback projections might be exploited to boost recruitment of occipital cortex by means of real-time neurofeedback. To test this prediction, we instructed a group of healthy participants to perform peripheral visual imagery while they received real-time auditory feedback based on the BOLD signal from either early visual cortex or the medial superior parietal lobe. We examined the amplitude and temporal aspects of the BOLD response in the two regions. Moreover, we compared the impact of self-rated mental focus and vividness of visual imagery on the BOLD responses in these two areas. We found that both early visual cortex and the medial superior parietal cortex are susceptible to auditory neurofeedback within a single feedback session per region. However, the signal in parietal cortex was sustained for a longer time than the signal in occipital cortex. Moreover, the BOLD signal in the medial superior parietal lobe was more affected by focus and vividness of the visual imagery than that in early visual cortex. Our results thus demonstrate that (a) participants can learn to self-regulate the BOLD signal in early visual and parietal cortex within a single session, (b) different nodes in the visual imagery network respond differently to neurofeedback, and (c) responses in parietal, but not in occipital, cortex are susceptible to self-rated vividness of mental imagery. Together, these results suggest that medial superior parietal cortex might be a suitable candidate to provide real-time feedback to patients suffering from visual field defects.
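
    As a rough illustration of how a region-of-interest BOLD signal can be turned into auditory feedback in a real-time setup like the one described above, the sketch below converts an ROI time course into percent signal change and maps it onto a tone frequency. The baseline window, scaling, and tone mapping are assumptions for illustration only, not the parameters used in the study.

```python
# Hypothetical sketch of mapping a region-of-interest BOLD signal to auditory
# feedback in a real-time fMRI neurofeedback session. Baseline window, scaling,
# and tone-frequency range are illustrative assumptions.
import numpy as np

def percent_signal_change(roi_timecourse, baseline_volumes=10):
    """Express the ROI mean time course as % change from a resting baseline."""
    tc = np.asarray(roi_timecourse, dtype=float)
    baseline = tc[:baseline_volumes].mean()
    return 100.0 * (tc - baseline) / baseline

def feedback_tone(psc, psc_max=2.0, f_low=220.0, f_high=880.0):
    """Map percent signal change onto a tone frequency (Hz): higher signal, higher pitch."""
    level = np.clip(psc / psc_max, 0.0, 1.0)   # normalise to [0, 1]
    return f_low + level * (f_high - f_low)

# Example: a simulated ROI time course with a slow signal increase during imagery.
tc = 1000 + np.concatenate([np.zeros(10), np.linspace(0, 15, 20)])
psc = percent_signal_change(tc)
print(round(feedback_tone(psc[-1]), 1))  # frequency of the feedback tone for the latest volume
```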

    Visual search without central vision – no single pseudofovea location is best

    We typically fixate targets such that they are projected onto the fovea for best spatial resolution. Macular degeneration patients often develop fixation strategies such that targets are projected onto an intact eccentric part of the retina, called the pseudofovea. A longstanding debate concerns which pseudofovea location is optimal for non-foveal vision. We examined how pseudofovea position and eccentricity affect performance in visual search when vision is restricted to an off-foveal retinal region by a gaze-contingent display that dynamically blurs the stimulus except within a small viewing window (forced field location). Trained, normally sighted participants were more accurate when the forced field location was congruent with the required scan path direction; this contradicts the view that a single pseudofovea location is generally best. Rather, performance depends on the congruence between pseudofovea location and scan path direction.
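
    A minimal, hypothetical sketch of a gaze-contingent display of this kind is shown below: each frame is blurred everywhere except within a small window placed at a fixed offset from the current gaze position, simulating a pseudofovea. The window size, blur strength, and the use of OpenCV are illustrative assumptions, not the study's actual stimulus parameters.

```python
# Hypothetical sketch of a gaze-contingent display: the stimulus is blurred
# everywhere except within a small circular window at a fixed offset from the
# current gaze position (the "forced field location"). Window radius, blur
# sigma, and the use of OpenCV are illustrative assumptions.
import numpy as np
import cv2

def gaze_contingent_frame(stimulus, gaze_xy, offset_xy, radius=40, blur_sigma=8):
    """Return a frame that is sharp only inside a circular window centred at
    gaze position + pseudofovea offset, and Gaussian-blurred elsewhere."""
    blurred = cv2.GaussianBlur(stimulus, (0, 0), blur_sigma)
    h, w = stimulus.shape[:2]
    yy, xx = np.mgrid[:h, :w]
    cx, cy = gaze_xy[0] + offset_xy[0], gaze_xy[1] + offset_xy[1]
    window = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    frame = blurred.copy()
    frame[window] = stimulus[window]  # keep the stimulus sharp inside the window
    return frame

# Example: the sharp window is placed to the right of the current gaze position.
img = np.random.randint(0, 255, (600, 800), dtype=np.uint8)
frame = gaze_contingent_frame(img, gaze_xy=(400, 300), offset_xy=(120, 0))
```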

    The characterization of actions at the superordinate, basic and subordinate level

    Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe these categories and the speed of processing. To what degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1–3). Experiments 4–6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three different levels. In addition, we examined the number of shared (i.e., provided for more than one category) and distinct (i.e., provided for one category only) features. Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than those at the superordinate level. Actions at the superordinate and basic level were described with more distinct features than those at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue at the basic and subordinate level, but not at the superordinate level, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action. Using a category verification task (Experiment 6), we found that participants were faster and more accurate at verifying action categories (depicted as images) at the basic and subordinate level in comparison to the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
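
    To clarify the feature-listing measures described above (common, shared, and distinct features), here is a small, hypothetical sketch of how such counts could be computed from per-participant feature lists; the data structures, the 6-of-20 threshold applied per category, and the toy categories are illustrative assumptions rather than the study's actual analysis code.

```python
# Hypothetical sketch of the feature-listing measures: a feature counts as
# "common" if listed for a category by at least 6 of 20 participants; common
# features are "shared" if they also occur for another category at the same
# taxonomic level, and "distinct" otherwise. Toy data for illustration only.
from collections import Counter

def common_features(listings, min_participants=6):
    """listings: list of per-participant feature collections for one category."""
    counts = Counter(f for feats in listings for f in set(feats))
    return {f for f, n in counts.items() if n >= min_participants}

def shared_and_distinct(common_by_category):
    """Split each category's common features into shared vs. distinct ones."""
    result = {}
    for cat, feats in common_by_category.items():
        others = set().union(*(f for c, f in common_by_category.items() if c != cat))
        result[cat] = {"shared": feats & others, "distinct": feats - others}
    return result

# Toy example at the basic level ("to swim" vs. "to run").
common = {
    "to swim": {"in water", "moves limbs", "is tiring"},
    "to run":  {"on land", "moves limbs", "is tiring"},
}
print(shared_and_distinct(common))
```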

    Decoding Actions at Different Levels of Abstraction

    • …