
    Modeling the Development of Goal-Specificity in Mirror Neurons

    Neurophysiological studies have shown that parietal mirror neurons encode not only actions but also the goals of these actions. Although some mirror neurons fire whenever a certain action is perceived (goal-independently), most fire only if the motion is perceived as part of an action with a specific goal. This result is important for the action-understanding hypothesis, as it provides a potential neurological basis for such a cognitive ability. It is also relevant for the design of artificial cognitive systems, in particular robotic systems that rely on computational models of the mirror system in their interaction with other agents. Yet, to date, no computational model has explicitly addressed the mechanisms that give rise to both goal-specific and goal-independent parietal mirror neurons. Here, we present a computational model based on a self-organizing map, which receives artificial inputs representing information about both the observed or executed actions and the context in which they were executed. We show that the map develops a biologically plausible organization in which goal-specific mirror neurons emerge. We further show that the fundamental cause for both the appearance and the number of goal-specific neurons lies in geometric relationships between the different inputs to the map. The results are important to the action-understanding hypothesis as they provide a mechanism for the emergence of goal-specific parietal mirror neurons and lead to a number of predictions: (1) learning of new goals may mostly reassign existing goal-specific neurons rather than recruit new ones; (2) input differences between executed and observed actions can explain corresponding observed differences in the number of goal-specific neurons; and (3) the percentage of goal-specific neurons may differ between motion primitives.
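    As a rough illustration of the kind of model described above, the sketch below trains a standard self-organizing map on inputs that concatenate an action code with a goal/context code, then probes whether individual map units respond goal-specifically. The input encoding, map size, and training parameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal self-organizing map (SOM) sketch in the spirit of the abstract:
# inputs concatenate an "action" code with a "goal/context" code, and we
# inspect whether map units respond goal-specifically. All encodings and
# parameters are illustrative assumptions, not the authors' model.
import numpy as np

rng = np.random.default_rng(0)

def make_input(action, goal, noise=0.05):
    """One input vector: 4-dim one-hot action + 4-dim one-hot goal context."""
    x = np.zeros(8)
    x[action] = 1.0
    x[4 + goal] = 1.0
    return x + rng.normal(0.0, noise, 8)

grid = np.array([(i, j) for i in range(10) for j in range(10)], dtype=float)
W = rng.random((100, 8))  # 10x10 map, one 8-dim weight vector per unit

for t in range(5000):
    x = make_input(rng.integers(4), rng.integers(4))
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)       # map distance to BMU
    lr = 0.5 * np.exp(-t / 2500)                     # decaying learning rate
    sigma = 3.0 * np.exp(-t / 2500)                  # shrinking neighborhood
    h = np.exp(-d2 / (2 * sigma ** 2))               # neighborhood function
    W += lr * h[:, None] * (x - W)

# Goal-specificity probe: does the winning unit for action 0 change with goal?
bmus = [np.argmin(((W - make_input(0, g, 0.0)) ** 2).sum(axis=1)) for g in range(4)]
print("BMUs for action 0 under four goals:", bmus)  # distinct BMUs => goal-specific units
```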

    Embodied Gesture Processing: Motor-Based Integration of Perception and Action in Social Artificial Agents

    A close coupling of perception and action processes is assumed to play an important role in basic capabilities of social interaction, such as guiding attention and observing others' behavior, coordinating the form and functions of behavior, or grounding the understanding of others' behavior in one's own experiences. To endow artificial embodied agents with similar abilities, we present a probabilistic model for the integration of perception and generation of hand-arm gestures via a hierarchy of shared motor representations, allowing for combined bottom-up and top-down processing. Results from human-agent interactions are reported, demonstrating the model's performance in learning, observation, imitation, and generation of gestures.
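    One way to read "a hierarchy of shared motor representations" is as a generative model over motor primitives that is run forwards for gesture generation and inverted (via Bayes) for gesture recognition. The sketch below makes that idea concrete with invented trajectory templates and a Gaussian noise model; it is not the paper's probabilistic model.

```python
# Hypothetical sketch of a representation used both ways: top-down, a motor
# primitive generates a hand trajectory; bottom-up, an observed trajectory is
# recognized by inverting the same model. Templates, noise level, and the
# uniform prior are made-up placeholders.
import numpy as np

rng = np.random.default_rng(1)
T = 20  # trajectory length
t = np.linspace(0, 1, T)

# Shared motor representation: one mean trajectory per gesture primitive.
templates = {
    "wave":   np.sin(4 * np.pi * t),
    "point":  np.clip(2 * t, 0, 1),
    "circle": np.cos(2 * np.pi * t),
}
sigma = 0.15  # assumed observation noise

def generate(primitive):
    """Top-down path: produce a noisy trajectory from a primitive."""
    return templates[primitive] + rng.normal(0, sigma, T)

def recognize(trajectory):
    """Bottom-up path: posterior over primitives via the same templates."""
    log_post = {name: -((trajectory - mu) ** 2).sum() / (2 * sigma ** 2)
                for name, mu in templates.items()}
    m = max(log_post.values())
    z = sum(np.exp(v - m) for v in log_post.values())
    return {k: float(np.exp(v - m) / z) for k, v in log_post.items()}

obs = generate("wave")   # the agent observes a gesture...
print(recognize(obs))    # ...and grounds it in its own motor model
```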

    Behavioural and neuroanatomical correlates of auditory speech analysis in primary progressive aphasias

    Background: Non-verbal auditory impairment is increasingly recognised in the primary progressive aphasias (PPAs), but its relationship to speech processing and brain substrates has not been defined. Here we addressed these issues in patients representing the non-fluent variant (nfvPPA) and semantic variant (svPPA) syndromes of PPA. Methods: We studied 19 patients with PPA in relation to 19 healthy older individuals. We manipulated three key auditory parameters in sequences of spoken syllables: temporal regularity, phonemic spectral structure, and prosodic predictability (an index of fundamental information content, or entropy). The ability of participants to process these parameters was assessed using two-alternative forced-choice tasks, and neuroanatomical associations of task performance were assessed using voxel-based morphometry of patients' brain magnetic resonance images. Results: Relative to healthy controls, both the nfvPPA and svPPA groups had impaired processing of phonemic spectral structure and signal predictability, while the nfvPPA group additionally had impaired processing of temporal regularity in speech signals. Task performance correlated with standard measures of disease severity and neurolinguistic function. Across the patient cohort, performance on the temporal regularity task was associated with grey matter in the left supplementary motor area and right caudate, performance on the phoneme processing task was associated with grey matter in the left supramarginal gyrus, and performance on the prosodic predictability task was associated with grey matter in the right putamen. Conclusions: Our findings suggest that PPA syndromes may be underpinned by more generic deficits of auditory signal analysis, with a distributed cortico-subcortical neuroanatomical substrate extending beyond the canonical language network. This has implications for syndrome classification and biomarker development.
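    The abstract glosses prosodic predictability as an index of information content (entropy). As a minimal illustration, the snippet below computes the Shannon entropy of syllable sequences; the example sequences are invented, and the study's actual stimuli differed.

```python
# Illustrative Shannon entropy of a spoken-syllable sequence, one way to
# quantify the "predictability" parameter the abstract describes.
import math
from collections import Counter

def entropy_bits(sequence):
    """H = -sum p(s) * log2 p(s) over observed syllable frequencies."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

predictable   = ["ba"] * 8                      # fully predictable: 0 bits
unpredictable = ["ba", "da", "ga", "ka"] * 2    # four equiprobable syllables: 2 bits
print(entropy_bits(predictable), entropy_bits(unpredictable))
```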

    Working Together May Be Better: Activation of Reward Centers during a Cooperative Maze Task

    Humans use theory of mind when predicting the thoughts, feelings, and actions of others. There is accumulating evidence that cooperation in a computerized game correlates with a unique pattern of brain activation. To investigate the neural correlates of cooperation in real time, we conducted an fMRI hyperscanning study. We hypothesized that real-time cooperation to complete a maze task, using a blind-driving paradigm, would activate substrates implicated in theory of mind. We also hypothesized that cooperation would activate neural reward centers more than when participants completed the maze themselves. In support of our hypotheses, we found left caudate and putamen activation when participants worked together to complete the maze. This suggests that cooperation during task completion is inherently rewarding. This finding represents one of the first discoveries of a proximate neural mechanism for group-based interactions in real time, which indirectly supports the social brain hypothesis.

    A dual-fMRI investigation of the iterated Ultimatum Game reveals that reciprocal behaviour is associated with neural alignment

    Dyadic interactions often involve a dynamic process of mutual reciprocity; to steer a series of exchanges towards a desired outcome, both interactants must adapt their own behaviour according to that of their interaction partner. Understanding the brain processes behind such bidirectional reciprocity is therefore central to social neuroscience, but this requires measurement of both individuals' brains during real-world exchanges. We achieved this by performing functional magnetic resonance imaging (fMRI) on pairs of male individuals simultaneously while they interacted in a modified iterated Ultimatum Game (iUG). In this modification, both players could express their intent and maximise their own monetary gain by reciprocating their partner's behaviour: they could promote generosity through cooperation and/or discourage unfair play with retaliation. By developing a novel model of reciprocity adapted from behavioural economics, we then show that each player's choices can be predicted accurately by estimating expected utility (EU) not only in terms of immediate payoff, but also as a reaction to their opponent's prior behaviour. Finally, for the first time we reveal that brain signals implicated in social decision making are modulated by these estimates of EU, and become correlated more strongly between interacting players who reciprocate one another.
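    As a hedged sketch of the general idea, the snippet below scores an iterated Ultimatum Game offer by combining the immediate payoff with a reciprocity term driven by the opponent's offer history. The functional form, parameters, and threshold rule are illustrative assumptions, not the model published in the paper.

```python
# Hedged sketch of an expected-utility rule for an iterated Ultimatum Game
# responder: the utility of accepting combines the immediate payoff with a
# reciprocity term reflecting the proposer's previous offers. All functional
# forms and parameters here are invented for illustration.

def expected_utility(offer, prev_offers, pot=10.0, w_reciprocity=0.5):
    """EU of accepting `offer` given the opponent's offer history."""
    immediate = offer / pot                       # normalized immediate payoff
    baseline = sum(prev_offers) / (len(prev_offers) * pot) if prev_offers else 0.5
    reciprocity = offer / pot - baseline          # generous vs. their past play
    return immediate + w_reciprocity * reciprocity

def respond(offer, prev_offers, threshold=0.35):
    """Accept when EU clears a threshold; otherwise retaliate by rejecting."""
    return "accept" if expected_utility(offer, prev_offers) >= threshold else "reject"

print(respond(4.0, [5.0, 5.0]))   # fair history, fair offer   -> accept
print(respond(2.0, [5.0, 5.0]))   # fair history, stingy offer -> reject
```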

    The Role of Motor Learning in Spatial Adaptation near a Tool

    Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use, but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool; the passive training group received visual experience with the tool but no motor experience; and a no-training control group received neither. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near to or far from the target display. Only the active training group detected targets more quickly when the tool was placed near to, rather than far from, the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented.

    On interference effects in concurrent perception and action

    Recent studies have reported repulsion effects between the perception of visual motion and the concurrent production of hand movements. Two models, based on the notions of common coding and internal forward modeling, have been proposed to account for these phenomena. Both predict that the size of the effects in perception and action should be monotonically related and should vary with the amount of similarity between what is produced and what is perceived. These predictions were tested in four experiments in which participants were asked to make hand movements in certain directions while simultaneously encoding the direction of an independent stimulus motion. As expected, perceived directions were repelled by produced directions, and produced directions were repelled by perceived directions. Contrary to the models, however, the size of the effects in perception and action did not covary, nor did it depend (as predicted) on the amount of perception–action similarity. We propose that such interactions are instead mediated by the activation of categorical representations.
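    To make the shared prediction concrete, the toy snippet below implements mutual repulsion between perceived and produced directions, with a gain that scales with their similarity; this is the monotonic, similarity-dependent pattern the experiments tested (and, per the abstract, did not find). The gain and similarity functions are invented placeholders.

```python
# Toy illustration of the mutual repulsion the two models predict: perceived
# and produced directions push each other apart, with a gain that scales with
# perception-action similarity. Gain and similarity terms are placeholders.
import math

def repel(perceived_deg, produced_deg, base_gain=0.2):
    """Return (biased perception, biased production) for one trial."""
    diff = perceived_deg - produced_deg
    # assumed similarity term: strongest interaction for nearby directions
    similarity = math.exp(-abs(diff) / 45.0)
    shift = base_gain * similarity * diff
    return perceived_deg + shift, produced_deg - shift   # mutual repulsion

for stim, move in [(90, 80), (90, 45), (90, 0)]:
    p, m = repel(stim, move)
    print(f"stimulus {stim} deg, movement {move} deg -> "
          f"perceived {p:.1f} deg, produced {m:.1f} deg")
```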

    Bayesian Cue Integration as a Developmental Outcome of Reward Mediated Learning

    Average human behavior in cue combination tasks is well predicted by Bayesian inference models. Because this capability is acquired over developmental timescales, the question arises of how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. We show that a model-free reinforcement learning algorithm can indeed learn to perform cue integration, i.e., weight uncertain cues according to their respective reliabilities, and can even do so when reliabilities change over time. We also consider the case of causal inference, where multimodal signals can originate from one object or from multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
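    A minimal sketch of the idea, under invented task assumptions: a learner adjusts a single cue weight by trial and error, keeping perturbations that would have reduced its estimation error, and drifts toward the Bayes-optimal reliability-based weighting. This simple hill-climbing rule stands in for the paper's model-free reinforcement learning algorithm, whose details are not given here.

```python
# A learner adjusts how strongly it weights two noisy cues to a hidden
# target; reward is (negative) estimation error. The weight drifts toward
# the Bayes-optimal reliability ratio. Task setup, learning rule, and
# parameters are illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(2)
sigma1, sigma2 = 1.0, 2.0              # cue noise: cue 1 is more reliable
optimal_w = (1 / sigma1**2) / (1 / sigma1**2 + 1 / sigma2**2)  # = 0.8

w, lr = 0.5, 0.02                      # start with equal weighting
for _ in range(20000):
    target = rng.uniform(-5, 5)
    c1 = target + rng.normal(0, sigma1)
    c2 = target + rng.normal(0, sigma2)
    # perturb the weight; move toward it if the perturbed estimate
    # would have earned more reward (smaller error) on this trial
    w_try = float(np.clip(w + rng.normal(0, 0.05), 0, 1))
    err     = abs((w * c1 + (1 - w) * c2) - target)
    err_try = abs((w_try * c1 + (1 - w_try) * c2) - target)
    if err_try < err:
        w += lr * (w_try - w)

print(f"learned weight {w:.2f} vs Bayes-optimal {optimal_w:.2f}")
```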

    A Single-Rate Context-Dependent Learning Process Underlies Rapid Adaptation to Familiar Object Dynamics

    Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotated an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
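    The sketch below illustrates a single-rate, multiple-context state-space model of the kind described: each visual orientation has its own adaptation state, trial errors update the states through a generalization kernel tuned to orientation, and a shared retention factor makes all context states slowly de-adapt during exposure in one context. Parameter values and the kernel width are illustrative, not the fitted model.

```python
# Minimal single-rate, multiple-context state-space sketch: one adaptation
# state per visual orientation, updated by trial error through an
# orientation-tuned generalization kernel. Retention, learning rate, and
# kernel width are illustrative assumptions, not fitted values.
import numpy as np

contexts = np.array([0.0, 90.0, 180.0, 270.0])   # object orientations (deg)
x = np.zeros(len(contexts))                      # adaptation state per context
A, B, width = 0.99, 0.3, 30.0                    # retention, learning rate, tuning

def kernel(c, contexts, width):
    """Generalization from the trained orientation to each context."""
    d = np.minimum(np.abs(contexts - c), 360.0 - np.abs(contexts - c))
    return np.exp(-(d ** 2) / (2 * width ** 2))

perturbation = 1.0                               # familiar object dynamics
for trial in range(50):                          # train only at 0 degrees
    c = 0.0
    idx = int(np.argmax(contexts == c))
    error = perturbation - x[idx]                # trial error in this context
    # single-rate update; the retention term A slowly de-adapts ALL contexts
    x = A * x + B * error * kernel(c, contexts, width)

print(np.round(x, 2))  # near-full adaptation at 0 deg, little transfer elsewhere
```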