
    Complementary actions

    Complementary colors are color pairs which, when combined in the right proportions, produce white or black. Complementary actions refer here to forms of social interaction wherein individuals adapt their joint actions according to a common aim. Notably, complementary actions are incongruent actions. But being incongruent is not sufficient to be complementary (i.e., to complete the action of another person). Successful complementary interactions are founded on the abilities: (i) to simulate another person's movements, (ii) to predict another person's future action/s, (iii) to produce an appropriate incongruent response that differs from the observed one while interacting, and (iv) to complete the social interaction by integrating the predicted effects of one's own action with those of another person. This definition clearly alludes to the functional importance of complementary actions in the perception-action cycle and prompts us to scrutinize what is taking place behind the scenes. Preliminary data on this topic have been provided by recent cutting-edge studies utilizing different research methods. This mini-review aims to provide an up-to-date overview of the processes and the specific activations underlying complementary actions.

    Complementary Actions

    Human beings come into the world wired for social interaction. At the fourteenth week of gestation, twin fetuses already display interactive movements specifically directed towards their co-twin. Readiness for social interaction is also clearly expressed by newborns, who imitate facial gestures, suggesting that there is a common representation mediating action observation and execution. While actions that are observed and those that are planned seem to be functionally equivalent, it is unclear if the visual representation of an observed action inevitably leads to its motor representation. This is particularly true with regard to complementary actions (from the Latin complementum, i.e., that which fills up), a specific class of movements which differ from observed ones while interacting. In geometry, angles are defined as complementary if they form a right angle. In art and design, complementary colors are color pairs that, when combined in the right proportions, produce white or black. As a working definition, complementary actions refer here to any form of social interaction wherein two (or more) individuals complete each other's actions in a balanced way. Successful complementary interactions are founded on the abilities: (1) to simulate another person's movements; (2) to predict another person's future action/s; (3) to produce an appropriate congruent/incongruent response that completes the other person's action/s; and (4) to integrate the predicted effects of one's own and another person's actions. It is the neurophysiological mechanism that underlies this process which forms the main theme of this chapter.

    Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques show direct evidence that employment of the visual-motor approach leads to increased motor cortex excitability and to synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach when it comes to advances in innovative methodologies.

    Action Observation and Effector Independency

    The finding of reasonably consistent spatial and temporal productions of actions across different body parts has been used to argue in favor of the existence of a high-order representation of motor programs. In these terms, a generalized motor program consists of an abstract memory structure apt to specify a class of non-specific instructions used to guide a broad range of movements (e.g., “grasp,” “bite”). Although a number of studies, using a variety of tasks, have assessed the issue of effector independence in terms of action execution, little is known regarding the issue of effector independence within an action observation context. Here corticospinal excitability (CSE) of the right hand’s first dorsal interosseous (FDI) and abductor digiti minimi (ADM) muscles was assessed by means of single-pulse transcranial magnetic stimulation (spTMS) during observation of a grasping action performed by the hand, the foot, the mouth, the elbow, or the knee. The results indicate that observing a grasping action performed with different body parts activates the effector typically adopted to execute that action, i.e., the hand. We contend that, as far as grasping is concerned, motor activations by action observation are evident in the muscles typically used to perform the observed action, even when the action is executed with another effector. Nevertheless, some exceptions call for a deeper analysis of motor coding.
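
    Corticospinal excitability in studies of this kind is typically indexed by the peak-to-peak amplitude of the motor-evoked potential (MEP) recorded from each muscle's EMG after every TMS pulse, then compared across observation conditions. The sketch below illustrates that computation in Python/NumPy; the sampling rate, post-TMS window, and data layout are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def mep_amplitude(emg, sfreq=5000.0, window=(0.015, 0.050)):
    """Peak-to-peak MEP amplitude in a post-TMS window (s) of one EMG sweep (mV).

    emg : 1-D array of one post-pulse EMG sweep, time zero at the TMS pulse.
    """
    start, stop = int(window[0] * sfreq), int(window[1] * sfreq)
    segment = emg[start:stop]
    return segment.max() - segment.min()

def normalized_cse(trials_by_condition, baseline_trials):
    """Mean MEP amplitude per observation condition, relative to a baseline block.

    trials_by_condition : dict mapping condition name (e.g. 'hand', 'foot', 'mouth')
                          to a list of post-TMS EMG sweeps for one muscle (FDI or ADM).
    """
    baseline = np.mean([mep_amplitude(t) for t in baseline_trials])
    return {cond: np.mean([mep_amplitude(t) for t in trials]) / baseline
            for cond, trials in trials_by_condition.items()}
```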

    Neural basis of motor planning for object-oriented actions: the role of kinematics and cognitive aspects

    The project I carried out during my three years as a PhD student pursued the aim of describing the motor preparation activity related to object-oriented actions that are actually performed. The importance of these studies comes from the lack of literature combining EEG with complex movements that are actually executed rather than merely mimed or pantomimed. With the term ‘complex’ we refer here to actions directed at an object with the intent to interact with it. To give a broader idea of the aim of the project, I have illustrated the complexity of such movements and of the cortical networks involved in their processing and execution. Several cortical areas contribute to the planning and execution of a movement, and the contribution of these areas changes according to the kinematic complexity of the action. Object-oriented action seems to rely on a circuit of its own: besides motor structures, it also involves a temporo-parietal network that takes part in both planning and performing actions such as reaching and grasping. Such findings emerged from studies on the mirror neuron system, discovered in monkeys at the beginning of the ‘90s and subsequently extended to humans. Beyond the speculations this discovery has given rise to, many researchers have investigated different aspects of reaching and grasping movements, describing the areas involved, all belonging to the posterior parietal cortex (PPC), and their connections with anterior motor cortices, using different paradigms and techniques. Most studies investigating movement execution and preparation are monkey studies or fMRI studies in humans. The limits of fMRI are its low temporal resolution and the impossibility of using self-paced movements, that is, movements performed in more ecological conditions in which the subject freely decides when to move. In the few studies investigating motor preparation with EEG, only pantomimes of actions have been used, rather than real interactions with objects. Because of all these factors, we set out to describe the motor preparation activity for goal-oriented actions with two aims: first, to describe this activity for grasping and reaching actions actually performed toward a cup (a very ecological object); second, to verify which parameters of these movements are taken into account during their planning and preparation. Given all the variables involved in grasping and reaching movements, such as the position of the object, its features, and the goal of the action and its meaning, we tried to verify how these variables affect motor preparation by designing two experiments. In the first, subjects were asked to perform a grasping and a reaching action toward a cup; in a third condition their hand was tied into a fist, in order to verify what happens when people have to turn an ordinary, easy action into a new one to accomplish the requested task. In the second experiment, we focused on the cognitive aspects behind the motor preparation of an action. Here we tested a very simple action, a key press, in two conditions: in the first, the button press had no consequence, whereas in the second the same action triggered a video on a screen showing a hand moving toward a cup and grasping it (a video-game-like effect).
Both experiments yielded results strengthening the role that cognitive processes play in motor planning. In particular, the goal of the action, together with the object to be interacted with, appears to elicit a specific response starting very early in the posterior parietal cortex. Finally, given the actions used in these experiments, it was important to test the hypothesis that our findings generalize to the observation of those same actions. As mentioned above, object-oriented actions have received great attention since the discovery of the mirror neuron system, which revealed a correspondence between the cortical activity of the person performing an action and that produced in the observer. This finding allowed our brain to be described as a social brain, able to create a mental representation of what another person is doing, which in turn allows us to understand others' gestures and intentions. What we wanted to test in this project was whether such a correspondence between observer and actor also extends to the motor preparation period of an upcoming action, supporting the hypothesis that the human brain can predict others' actions and intentions in addition to understanding them. In the last experiment of my project, I therefore used the same actions as in the first experiment, but this time participants were asked to observe them passively instead of performing them. The results confirmed the cognitive, rather than motor, role that the PPC plays in action planning: even when no movements are involved, the same structures are active, reflecting the activity found in the execution experiment. The main result reported in this dissertation is the proposal of a new model of the role the PPC plays in object-oriented movements. Unlike previous hypotheses and models suggesting that the PPC extracts affordances from objects or monitors and transforms object-related coordinates into intentions for acting, we suggest that the role of the parietal areas is rather to judge the appropriateness of the match between the action goal and the affordances provided by the object. When the action we are going to perform fits well with the object's features, the PPC begins its activity, elaborating the relevant coordinate representations and monitoring the programming and execution phases of the movement. This model is well supported by the results of both our experiments and combines the two previous models, while putting more emphasis on the ‘goal-object matching’ function of the PPC, and of the superior parietal lobule (SPL) in particular.
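
    Motor preparation activity of the kind described above is usually quantified as a movement-related cortical potential: EEG epochs are extracted around movement onset, baseline-corrected, and averaged per condition and electrode. The NumPy sketch below illustrates such an analysis; the sampling rate, epoch window, and channel indexing are illustrative assumptions and not the dissertation's actual pipeline.

```python
import numpy as np

def movement_related_potential(eeg, onsets, sfreq=500.0,
                               tmin=-2.0, tmax=0.5,
                               baseline=(-2.0, -1.5)):
    """Average EEG around movement onsets.

    eeg     : array of shape (n_channels, n_samples), in microvolts
    onsets  : sample indices of movement onset, one per trial
    Returns : (times, erp) with erp of shape (n_channels, n_epoch_samples)
    """
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    times = np.arange(start, stop) / sfreq
    epochs = []
    for onset in onsets:
        if onset + start < 0 or onset + stop > eeg.shape[1]:
            continue  # skip trials too close to the recording edges
        epoch = eeg[:, onset + start:onset + stop].astype(float)
        # baseline-correct each channel on a pre-movement window
        b = (times >= baseline[0]) & (times < baseline[1])
        epoch -= epoch[:, b].mean(axis=1, keepdims=True)
        epochs.append(epoch)
    return times, np.mean(epochs, axis=0)

# Illustrative use: contrast grasping vs. reaching preparation at a parietal channel
# times, erp_grasp = movement_related_potential(eeg_data, grasp_onsets)
# times, erp_reach = movement_related_potential(eeg_data, reach_onsets)
# parietal_diff = erp_grasp[PARIETAL_IDX] - erp_reach[PARIETAL_IDX]
```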

    Neural networks for action representation: a functional magnetic-resonance imaging and dynamic causal modeling study

    Automatic mimicry is based on the tight linkage between motor and perceptual action representations, in which internal models play a key role. Based on the anatomical connections, we hypothesized that the direct effective connectivity from the posterior superior temporal sulcus (pSTS) to the ventral premotor area (PMv) formed an inverse internal model, converting visual representation into a motor plan, and that the reverse connectivity formed a forward internal model, converting the motor plan into the sensory outcome of the action. To test this hypothesis, we employed dynamic causal-modeling analysis with functional magnetic-resonance imaging (fMRI). Twenty-four normal participants underwent a change-detection task involving two visually presented balls that were either manually rotated by the investigator's right hand (“Hand”) or rotated automatically. The effective connectivity from the pSTS to the PMv was enhanced by hand observation and suppressed by execution, corresponding to the inverse model. The opposite effects were observed from the PMv to the pSTS, suggesting the forward model. Additionally, both execution and hand observation commonly enhanced the effective connectivity from the pSTS to the inferior parietal lobule (IPL), from the IPL to the primary sensorimotor cortex (S/M1), from the PMv to the IPL, and from the PMv to the S/M1. Representation of the hand action was therefore implemented in the motor system, including the S/M1. During hand observation, effective connectivity toward the pSTS was suppressed whereas that toward the PMv and S/M1 was enhanced. Thus, the action-representation network acted as a dynamic feedback-control system during action observation.
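
    Dynamic causal modeling rests on a bilinear neural state equation, dx/dt = (A + Σ_j u_j B^(j)) x + C u, in which the B matrices capture how an experimental input (here, hand observation) modulates the effective connectivity between regions such as the pSTS and PMv. The sketch below integrates that equation for a two-region toy network; the parameter values are illustrative assumptions, not the estimates reported in the study, which would have used a dedicated DCM implementation rather than this hand-rolled one.

```python
import numpy as np

# Two regions (0 = pSTS, 1 = PMv) and one modulatory/driving input ("hand observation").
A = np.array([[-0.5, 0.1],     # intrinsic connectivity (self-decay plus weak coupling)
              [ 0.2, -0.5]])
B = np.array([[[0.0, 0.0],     # modulation by input 0: boosts the pSTS -> PMv connection
               [0.4, 0.0]]])
C = np.array([[0.3],           # driving input enters the network via pSTS
              [0.0]])

def simulate(u, dt=0.1):
    """Euler-integrate dx/dt = (A + sum_j u_j B[j]) x + C u for inputs u (n_t x n_inputs)."""
    x = np.zeros(A.shape[0])
    trace = []
    for u_t in u:
        A_eff = A + np.tensordot(u_t, B, axes=1)   # effective connectivity at this time step
        x = x + dt * (A_eff @ x + C @ u_t)
        trace.append(x.copy())
    return np.array(trace)

# Input switched on for the middle third of the simulation
u = np.zeros((300, 1))
u[100:200, 0] = 1.0
states = simulate(u)   # columns: pSTS and PMv neural activity over time
```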

    Decoding the neural mechanisms of human tool use.

    Sophisticated tool use is a defining characteristic of the primate species, but how is it supported by the brain, particularly the human brain? Here we show, using functional MRI and pattern classification methods, that tool use is subserved by multiple distributed action-centred neural representations that are both shared with and distinct from those of the hand. In areas of frontoparietal cortex we found a common representation for planned hand- and tool-related actions. In contrast, in parietal and occipitotemporal regions implicated in hand actions and body perception, coding remained selectively linked to upcoming actions of the hand, whereas in parietal and occipitotemporal regions implicated in tool-related processing it remained selectively linked to upcoming actions of the tool. The highly specialized and hierarchical nature of this coding suggests that hand- and tool-related actions are represented separately at earlier levels of sensorimotor processing before becoming integrated in frontoparietal cortex. DOI: http://dx.doi.org/10.7554/eLife.00425.001
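
    The pattern classification referred to here is commonly implemented as cross-validated decoding of condition labels from trial-wise voxel patterns within an ROI. A minimal, generic sketch with scikit-learn follows; the data are random placeholders and the pipeline is not the one used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: trial-wise voxel patterns from one ROI, shape (n_trials, n_voxels),
#    e.g. single-trial estimates from the action-planning phase.
# y: condition labels (0 = hand action planned, 1 = tool action planned).
# Both arrays are placeholders standing in for real fMRI data.
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 200))
y = np.repeat([0, 1], 40)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=8)   # a leave-run-out scheme would be used in practice
print(f"ROI decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```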

    Activity in ventral premotor cortex is modulated by vision of own hand in action

    Parietal and premotor cortices of the macaque monkey contain distinct populations of neurons which, in addition to their motor discharge, are also activated by visual stimulation. Among these visuomotor neurons, a population of grasping neurons located in the anterior intraparietal area (AIP) shows discharge modulation when the monkey's own hand is visible during object grasping. Given the dense connections between AIP and inferior frontal regions, we aimed at investigating whether two hand-related frontal areas, ventral premotor area F5 and primary motor cortex (area F1), contain neurons with similar properties. Two macaques performed a grasping task in various light/dark conditions in which the to-be-grasped object was kept visible by dim retro-illumination. Approximately 62% of F5 and 55% of F1 motor neurons showed light/dark modulations. To better isolate the effect of hand-related visual input, we introduced two further conditions characterized by kinematic features similar to the dark condition. The scene was briefly illuminated (i) during hand preshaping (pre-touch flash, PT-flash) and (ii) at hand-object contact (touch flash, T-flash). Approximately 48% of F5 and 44% of F1 motor neurons showed a flash-related modulation. Considering flash-modulated neurons in the two flash conditions, approximately 40% in F5 and 52% in F1 showed stronger activity in the PT- than in the T-flash (PT-flash-dominant), whereas approximately 60% in F5 and 48% in F1 showed stronger activity in the T- than in the PT-flash (T-flash-dominant). Furthermore, F5, but not F1, flash-dominant neurons were characterized by a higher peak and mean discharge in the preferred flash condition as compared to the light and dark conditions. Still considering F5, the distribution of the time of peak discharge was similar in the light and preferred flash conditions. This study shows that the frontal cortex contains neurons, previously classified as motor neurons, that are sensitive to the observation of meaningful phases of the monkey's own grasping action. We conclude by discussing the possible functional role of these populations.
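
    The PT-flash- versus T-flash-dominant labels described above amount to comparing a neuron's mean firing rate in a peri-flash window across the two conditions. A minimal sketch of that comparison is given below; the window, units, and function names are illustrative assumptions, and a real analysis would add a statistical test.

```python
import numpy as np

def mean_rate(spike_times_per_trial, window=(0.0, 0.3)):
    """Mean firing rate (spikes/s) across trials within a peri-event window (s)."""
    duration = window[1] - window[0]
    counts = [np.sum((t >= window[0]) & (t < window[1])) for t in spike_times_per_trial]
    return np.mean(counts) / duration

def classify_flash_dominance(pt_flash_trials, t_flash_trials):
    """Label a neuron as PT-flash- or T-flash-dominant by comparing mean rates.

    Each argument is a list of spike-time arrays (s), aligned to flash onset.
    """
    pt_rate = mean_rate(pt_flash_trials)
    t_rate = mean_rate(t_flash_trials)
    return "PT-flash-dominant" if pt_rate > t_rate else "T-flash-dominant"
```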

    How the brain grasps tools: fMRI & motion-capture investigations

    Humans’ ability to learn about and use tools is considered a defining feature of our species, with most related neuroimaging investigations involving proxy 2D picture-viewing tasks. Using a novel tool-grasping paradigm across three experiments, participants grasped 3D-printed tools (e.g., a knife) in ways that were considered to be typical (i.e., by the handle) or atypical (i.e., by the blade) for subsequent use. As a control, participants also performed grasps in corresponding directions on a series of 3D-printed non-tool objects, matched for properties including elongation and object size. Project 1 paired a powerful fMRI block design with visual-localiser Region of Interest (ROI) and searchlight Multivoxel Pattern Analysis (MVPA) approaches. Most remarkably, ROI MVPA revealed that hand-selective, but not anatomically overlapping tool-selective, areas of the left Lateral Occipital Temporal Cortex and Intraparietal Sulcus represented the typicality of tool grasping. Searchlight MVPA found similar evidence within left anterior temporal cortex as well as right parietal and temporal areas. Project 2 measured hand kinematics using motion capture during a highly similar procedure, finding hallmark grip-scaling effects despite the unnatural task demands. Further, slower movements were observed when grasping tools relative to non-tools, with grip scaling also being poorer for atypical tool grasping compared to non-tool grasping. Project 3 used a slow event-related fMRI design to investigate whether representations of typicality were detectable during motor planning, but MVPA was largely unsuccessful, presumably due to a lack of statistical power. Taken together, the representations of typicality identified within areas of the ventral and dorsal, but not ventro-dorsal, pathways have implications for specific predictions made by leading theories about the neural regions supporting human tool use, including dual visual stream theory and the two-action systems model.
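
    The grip-scaling effect mentioned for Project 2 is conventionally measured as the slope of maximum grip aperture (the peak thumb-index distance during the reach) regressed on object size. A minimal NumPy sketch of that computation follows; marker names, units, and the data layout are illustrative assumptions rather than the thesis's actual motion-capture pipeline.

```python
import numpy as np

def max_grip_aperture(thumb_xyz, index_xyz):
    """Maximum 3-D distance between thumb and index markers during one reach.

    thumb_xyz, index_xyz : arrays of shape (n_frames, 3), in mm.
    """
    return np.linalg.norm(thumb_xyz - index_xyz, axis=1).max()

def grip_scaling_slope(mgas, object_widths):
    """Slope of maximum grip aperture regressed on object width (the grip-scaling effect)."""
    slope, _intercept = np.polyfit(object_widths, mgas, deg=1)
    return slope

# Illustrative use: a slope near 1 (mm of aperture per mm of width) indicates tight
# grip scaling; flatter slopes indicate poorer scaling.
# mgas = [max_grip_aperture(thumb, index) for thumb, index in trials]
# print(grip_scaling_slope(mgas, widths))
```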
