49 research outputs found

    Decoding motor intentions from human brain activity

    “You read my mind.” Although this simple everyday expression implies ‘knowledge or understanding’ of another’s thinking, true ‘mind-reading’ capabilities seem confined to the domains of Hollywood and science fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping characteristic changes in brain activity that occur prior to an action being initiated. For instance, invasive neural recordings in non-human primates have significantly increased our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain by showing that it is possible to decode or ‘predict’ upcoming sensorimotor behaviors (e.g., movements of the arm/eyes) based on preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a successful counterpart for this predictive ability and a similarly detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable due to the limitations of non-invasive brain mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where in the human brain intentions or plans for action are coded is not only important for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking – movements critical to our interactions with the world – but also for understanding homologies between human and non-human primate brain areas, allowing the transfer of neural findings between species. In the current thesis, I employed multi-voxel pattern analysis (MVPA), a new fMRI technique that has made it possible to examine the coding of neural information at a more fine-grained level than that previously available.
I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex and specifically asked the question: What types of predictive information about a subject's upcoming movement can be decoded from preceding changes in neural activity? Project 1 first used fMRI MVPA to determine, largely as a proof-of-concept, whether specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans, along with their intended directions (left vs. right), could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals were represented independently from how those goals were to be implemented. To this aim, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) could be maintained across actions, but the way in which those actions were kinematically achieved changed in accordance with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated planning-related and execution-related neural responses in time, allowing us to isolate the preparatory patterns of brain activity that form prior to movement. Project 1 found that plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches). Moreover, we found that several parieto-frontal brain regions could be characterized according to the types of movements they decode, a result previously demonstrated only in non-human primates.
Project 2 found a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and eye. This latter finding demonstrates, consistent with previous findings in non-human primates, that some brain areas code for the end motor goal (i.e., the target location) independent of the effector used. Project 3 identified regions that decoded upcoming hand actions only, upcoming tool actions only, and, rather interestingly, regions that predicted actions with both effectors (hand and tool). Notably, some of these latter areas were found to represent the higher-level goals of the movement (grasping vs. reaching) rather than the specific lower-level kinematics (hand vs. tool) necessary to implement those goals. Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns and specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviors.
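The pattern-decoding logic shared by the three projects can be illustrated with a toy sketch: a nearest-mean classifier evaluated with leave-one-run-out cross-validation on synthetic "plan-period" voxel patterns. Everything here (data, noise level, variable names) is an illustrative assumption, not the thesis's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_voxels = 8, 50

# Synthetic plan-period voxel patterns: one "grasp" and one "reach"
# trial per run, each a noisy copy of a condition-specific template.
templates = {"grasp": rng.normal(0, 1, n_voxels),
             "reach": rng.normal(0, 1, n_voxels)}
data = [(cond, templates[cond] + rng.normal(0, 0.5, n_voxels))
        for _ in range(n_runs) for cond in ("grasp", "reach")]

def decode(data, run_size=2):
    """Leave-one-run-out cross-validation with a nearest-mean classifier."""
    correct = 0
    for i in range(0, len(data), run_size):
        test = data[i:i + run_size]
        train = data[:i] + data[i + run_size:]
        # Class means are estimated from the training runs only.
        means = {c: np.mean([x for lab, x in train if lab == c], axis=0)
                 for c in ("grasp", "reach")}
        for label, x in test:
            pred = min(means, key=lambda c: np.linalg.norm(x - means[c]))
            correct += (pred == label)
    return correct / len(data)

accuracy = decode(data)  # well above the 0.5 chance level at this noise level
```

The key property MVPA exploits is that the classifier succeeds whenever condition-specific information is present in the *pattern* across voxels, even if the mean signal in the region does not differ between conditions.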

    Planning Ahead: Object-Directed Sequential Actions Decoded from Human Frontoparietal and Occipitotemporal Networks.

    Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions (those that occur after an object has been grasped) are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or in their final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements.

    Neural representation of geometry and surface properties in object and scene perception

    Multiple cortical regions are crucial for perceiving the visual world, yet the processes shaping representations in these regions are unclear. To address this issue, we must elucidate how perceptual features shape representations of the environment. Here, we explore how the weighting of different visual features affects neural representations of objects and scenes, focusing on the scene-selective parahippocampal place area (PPA), but additionally including the retrosplenial complex (RSC), occipital place area (OPA), lateral occipital (LO) area, fusiform face area (FFA) and occipital face area (OFA). Across three experiments, we examined functional magnetic resonance imaging (fMRI) activity while human observers viewed scenes and objects that varied in geometry (shape/layout) and surface properties (texture/material). Interestingly, we found equal sensitivity in the PPA to these properties within a scene, revealing that spatial selectivity alone does not drive activation within this cortical region. We also observed sensitivity to object texture in PPA, but not to the same degree as scene texture, and representations in PPA varied when objects were placed within scenes. We conclude that PPA may process surface properties in a domain-specific manner, and that the processing of scene texture and geometry is equally weighted in PPA and may be mediated by similar underlying neuronal mechanisms.

    Motor, not visual, encoding of potential reach targets

    We often encounter situations in which there are multiple potential targets for action, as when, for example, we hear the request “could you pass the …” at the dinner table. It has recently been shown that, in such situations, activity in sensorimotor brain areas represents competing reach targets in parallel prior to deciding between, and then reaching towards, one of these targets [1]. One intriguing possibility, consistent with the influential notion of action ‘affordances’ [2], is that this activity reflects movement plans towards each potential target [3]. However, an equally plausible explanation is that this activity reflects an encoding of the visual properties of the potential targets (for example, their locations or directions), prior to any target being selected and the associated movement plan being formed. Notably, previous work showing spatial averaging behaviour during reaching, in which initial movements are biased towards the midpoint of the spatial distribution of potential targets [4–6], remains equally equivocal concerning the motor versus visual encoding of reach targets. Here, using a rapid reaching task that disentangles these two competing accounts, we show that reach averaging behaviour reflects the parallel encoding of multiple competing motor plans. This provides direct evidence for theories proposing that the brain prepares multiple available movements before selecting between them [3].

    Human decision making anticipates future performance in motor learning.

    It is well established that people can take into account the distribution of their errors in motor performance so as to optimize reward. Here we asked whether, in the context of motor learning, where errors decrease across trials, people take into account their future, improved performance so as to make optimal decisions that maximize reward. One group of participants performed a virtual throwing task in which, periodically, they were given the opportunity to select from a set of smaller targets of increasing value. A second group of participants performed a reaching task under a visuomotor rotation in which, after performing an initial set of trials, they selected a reward structure (the ratio of points for target hits and misses) for different exploitation horizons (i.e., the number of trials they might be asked to perform). Because movement errors decreased exponentially across trials in both learning tasks, optimal target selection (task 1) and optimal reward structure selection (task 2) required taking future performance into account. The results from both tasks indicate that people anticipate their future motor performance so as to make decisions that will improve their expected future reward.
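The logic of anticipating one's own learning curve can be sketched numerically. The decay constants, target radii, point values, and the isotropic 2-D Gaussian error model below are all illustrative assumptions, not the study's actual parameters; the sketch only shows why a far-sighted chooser can rationally pick a smaller, higher-valued target than current performance alone would justify.

```python
import numpy as np

# Motor error (std, cm) decays exponentially across trials, as in both tasks.
sigma0, sigma_inf, tau = 4.0, 1.0, 5.0
def sigma(t):
    return sigma_inf + (sigma0 - sigma_inf) * np.exp(-t / tau)

# Hit probability for a circular target of radius r under isotropic
# 2-D Gaussian error (a standard, assumed error model).
def p_hit(r, s):
    return 1.0 - np.exp(-r**2 / (2.0 * s**2))

# Smaller targets are worth more points (illustrative values).
targets = {1.0: 100, 2.0: 40, 4.0: 10}   # radius (cm) -> points per hit

def expected_points(r, trials):
    """Total expected reward for sticking with target r over the horizon."""
    return sum(targets[r] * p_hit(r, sigma(t)) for t in trials)

horizon = range(50)   # trials still to be played

# Myopic choice: maximize immediate reward given *current* error.
myopic = max(targets, key=lambda r: targets[r] * p_hit(r, sigma(0)))

# Far-sighted choice: maximize reward integrated over future, improved error.
farsighted = max(targets, key=lambda r: expected_points(r, horizon))
```

With these numbers the myopic chooser prefers the medium target, while the far-sighted chooser prefers the small, high-value target because its hit rate rises sharply as errors shrink.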

    Motor Planning Modulates Neural Activity Patterns in Early Human Auditory Cortex

    It is well established that movement planning recruits motor-related cortical brain areas in preparation for the forthcoming action. Given that an integral component of the control of action is the processing of sensory information throughout movement, we predicted that movement planning might also modulate early sensory cortical areas, readying them for sensory processing during the unfolding action. To test this hypothesis, we performed two human functional magnetic resonance imaging studies involving separate delayed movement tasks and focused on premovement neural activity in early auditory cortex, given the area's direct connections to the motor system and evidence that it is modulated by motor cortex during movement in rodents. We show that effector-specific information (i.e., movements of the left vs. right hand in Experiment 1 and movements of the hand vs. eye in Experiment 2) can be decoded, well before movement, from neural activity in early auditory cortex. We find that this motor-related information is encoded in a separate subregion of auditory cortex from sensory-related information and is present even when movements are cued visually instead of auditorily. These findings suggest that action planning, in addition to preparing the motor system for movement, involves selectively modulating primary sensory areas based on the intended action.

    Parallel specification of competing sensorimotor control policies for alternative action options.

    Recent theory proposes that the brain, when confronted with several action possibilities, prepares multiple competing movements before deciding among them. Supporting psychophysical evidence for this idea comes from the observation that, when reaching towards multiple potential targets, the initial movement is directed towards the average location of the targets, consistent with multiple prepared reaches being executed simultaneously. However, reach planning involves far more than specifying movement direction; it requires the specification of a sensorimotor control policy that sets the feedback gains shaping how the motor system responds to errors induced by noise or external perturbations. Here we found that, when a subject is reaching towards multiple potential targets, the feedback gain corresponds to an average of the gains specified when reaching to each target presented alone. Our findings provide evidence that the brain, when presented with multiple action options, computes multiple competing sensorimotor control policies in parallel before implementing one of them.

    Counting on the motor system: Rapid action planning reveals the format- and magnitude-dependent extraction of numerical quantity

    Symbolic numbers (e.g., 2) acquire their meaning by becoming linked to the core nonsymbolic quantities they represent (e.g., two items). However, the extent to which symbolic and nonsymbolic information converges onto the same internal core representations of quantity remains a point of considerable debate. As nearly all previous work on this topic has employed perceptual tasks requiring the conscious reporting of numerical magnitudes, here we question the extent to which numerical processing via the visual-motor system might shed further light on the fundamental basis of how different number formats are encoded. We show, using a rapid reaching task and a detailed analysis of initial arm trajectories, that there are key differences in how the quantity information extracted from symbolic Arabic numerals and from nonsymbolic collections of discrete items is used to guide action planning. In particular, we found that the magnitude derived from discrete dots resulted in movements being biased by an amount directly proportional to the actual quantities presented, whereas the magnitude derived from numerals resulted in movements being biased only by the relative (e.g., larger than) quantities presented. In addition, we found that initial motor plans were more sensitive to changes in numerical quantity within small (1–3) than large (5–15) number ranges, irrespective of their format (dots or numerals). In light of previous work, our visual-motor results clearly show that the processing of numerical quantity information is both format and magnitude dependent.

    Muting, not fragmentation, of functional brain networks under general anesthesia

    Changes in resting-state functional connectivity (rs-FC) under general anesthesia have been widely studied with the goal of identifying neural signatures of consciousness. This work has commonly revealed an apparent fragmentation of whole-brain network structure during unconsciousness, which has been interpreted as reflecting a breakdown in connectivity and a disruption of the brain's ability to integrate information. Here we show, by studying rs-FC under varying depths of isoflurane-induced anesthesia in nonhuman primates, that this apparent fragmentation, rather than reflecting an actual change in network structure, can be simply explained as the result of a global reduction in FC. Specifically, by comparing the actual FC data to surrogate data sets that we derived to test competing hypotheses of how FC changes as a function of dose, we found that increases in whole-brain modularity and in the number of network communities – considered hallmarks of fragmentation – are artifacts of constructing FC networks by thresholding based on correlation magnitude. Taken together, our findings suggest that deepening levels of unconsciousness are instead associated with the increasingly muted expression of functional networks, an observation that constrains current interpretations of how anesthesia-induced FC changes map onto existing neurobiological theories of consciousness.
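The thresholding artifact described above can be reproduced with a toy example (this is an illustration of the general mechanism, not the paper's actual surrogate analysis): uniformly muting every correlation in a modular FC matrix, without changing its relative structure at all, makes a magnitude-thresholded network fall apart into more components.

```python
import numpy as np

def components(adj):
    """Count connected components of a boolean adjacency matrix (DFS)."""
    n = len(adj)
    seen, count = set(), 0
    for start in range(n):
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(j for j in range(n) if adj[node, j] and j not in seen)
    return count

# Toy FC matrix: two 4-node modules with strong within-module (0.8)
# and weaker between-module (0.4) correlations.
n = 8
fc = np.full((n, n), 0.4)
fc[:4, :4] = fc[4:, 4:] = 0.8
np.fill_diagonal(fc, 0.0)

threshold = 0.3
awake = fc > threshold                  # every edge survives: one component
anesthetized = (0.5 * fc) > threshold   # global muting: only within-module
                                        # edges (now 0.4) clear the fixed cutoff
```

Here `components(awake)` is 1 while `components(anesthetized)` is 2: the "fragmentation" is produced entirely by the fixed magnitude threshold, since halving every correlation left the network's relative structure identical.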

    Grip force when reaching with target uncertainty provides evidence for motor optimization over averaging.

    When presented with competing potential reach targets and required to launch a movement before knowing which one will be cued as the target, people initially reach in the average target direction. Although this spatial averaging could arise from executing a weighted average of motor plans for the potential targets, it could also arise from planning a single, optimal movement. To test between these alternatives, we used a task in which participants were required to reach to either a single target or towards two potential targets while grasping an object. A robotic device applied a lateral elastic load to the object, requiring large grip forces for reaches to targets on either side of midline and a minimal grip force for midline movements. As expected, in trials with two targets located on either side of midline, participants initially reached straight ahead. Critically, on these trials the initial grip force was minimal, appropriate for the midline movement, and not the average of the large grip forces required for movements to the individual targets. These results indicate that, under conditions of target uncertainty, people do not execute an average of planned actions but rather a single movement that optimizes motor costs.
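The contrast between the two accounts reduces to simple arithmetic, sketched below with an assumed load model (the linear grip-force function, target angles, and units are illustrative, not the experiment's parameters): plan averaging predicts a large initial grip force, whereas planning one optimal midline movement predicts a minimal one.

```python
import numpy as np

# Assumed load model: a lateral elastic load makes the required grip
# force grow with the reach angle away from midline (arbitrary units).
def required_grip(angle_deg):
    return 1.0 + 0.2 * abs(angle_deg)

targets = np.array([-30.0, 30.0])   # two potential targets, degrees from midline

# Plan-averaging account: average the grip forces planned for each target.
averaged_grip = np.mean([required_grip(a) for a in targets])

# Optimization account: plan a single movement toward the average direction
# and adopt the grip force that that midline movement actually requires.
optimal_grip = required_grip(targets.mean())
```

Under this toy model the averaging account predicts a grip force of 7.0 while the optimization account predicts 1.0; the observed minimal initial grip force matches the optimization prediction.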