143 research outputs found

    A sensorimotor network for actions and intentions reading: a series of TMS studies

    Information relevant to our social life is processed immediately by our brain. When we walk down the street, we easily and quite automatically adjust our path to avoid colliding with other people. Many social activities, such as working in a group, playing a sport, or talking with others, require the ability to carefully read other people's movements. Kinematic and postural information about others' bodies is thus a fundamental medium for getting by in our social environment. Throughout this manuscript, a series of extensive and novel studies describe the role of the sensorimotor cortices and their differential contributions to specific action observation tasks. By means of transcranial magnetic stimulation (TMS), we tested in healthy subjects both low- and high-level cognitive processes that may engage areas of the action observation network.

    Body Form Modulates the Prediction of Human and Artificial Behaviour from Gaze Observation

    The future of human–robot collaboration relies on people’s ability to understand and predict robots' actions. The machine-like appearance of robots, as well as contextual information, may influence people’s ability to anticipate their behaviour. We conducted six separate experiments to investigate how spatial cues and task instructions modulate people’s ability to understand what a robot is doing. Participants observed goal-directed and non-goal-directed gaze shifts made by human and robot agents, as well as directional cues displayed by a triangle. We report that biasing an observer's attention, by showing just one object an agent can interact with, can improve people’s ability to understand what humanoid robots will do. Crucially, this cue had no impact on people’s ability to predict the upcoming behaviour of the triangle. Moreover, task instructions that focus on the visual and motor consequences of the observed gaze were found to influence mentalising abilities. We suggest that the human-like shape of an agent and its physical capabilities facilitate the prediction of an upcoming action. The reported findings expand current models of gaze perception and may have important implications for human–human and human–robot collaboration.

    Freedom to act enhances the sense of agency, while movement and goal-related prediction errors reduce it

    The Sense of Agency (SoA) is the experience of controlling one’s movements and their external consequences. Accumulating evidence suggests that freedom to act enhances SoA, while prediction errors are known to reduce it. Here, we investigated whether prediction errors related to the movement or to the achievement of the action's goal exert the same influence on SoA during free and cued actions. Participants pressed a freely chosen or cued-colored button while observing a virtual hand moving in the same or in the opposite direction (movement-related prediction error) and pressing the button of the selected or of a different color (goal-related prediction error). To investigate implicit and explicit components of SoA, we collected indirect (Synchrony Judgments) and direct (Judgments of Causation) measures. We found that participants judged virtual actions as more synchronous when they were free to act. Additionally, movement-related prediction errors reduced both perceived synchrony and judgments of causation, while goal-related prediction errors impaired exclusively the latter. Our results suggest that freedom to act enhances SoA and that movement- and goal-related prediction errors lead to an equivalent reduction of SoA in free and cued actions. Our results also show that the influence of freedom to act and goal achievement may be limited, respectively, to implicit and explicit SoA, while movement information may affect both components. These findings support recent theories that view SoA as a multifaceted construct, by showing that different action cues may uniquely influence the feeling of control.

    Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    Advances in brain–computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling real humanoid robots via BCI as intuitively as we control our own bodies is a challenge for current research in robotics and neuroscience. In order to interact successfully with the environment, the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain with respect to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, little is known about whether audio-visual integration may improve the control of a surrogate. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through an SSVEP-based BCI. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual walk reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results shed light on the possibility of improving robot control by combining multisensory feedback to a BCI user. © 2014 Tidoni, Gergondet, Kheddar and Aglioti
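
    The abstract does not detail the SSVEP pipeline, but the core idea of an SSVEP-based BCI is that attending a target flickering at a known frequency produces EEG power at that frequency. As an illustration only (a minimal single-channel sketch with simulated data, not the authors' pipeline; real systems typically use multi-channel methods such as canonical correlation analysis), target detection can be framed as picking the candidate stimulation frequency with the strongest spectral power:

    ```python
    import numpy as np

    def detect_ssvep_target(eeg, fs, candidate_freqs, half_band=0.5):
        """Return the candidate flicker frequency with the most EEG power.

        A minimal frequency-domain sketch: windowed FFT, then sum power
        in a narrow band around each candidate stimulation frequency.
        """
        n = len(eeg)
        spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        band_power = [
            spectrum[(freqs >= f - half_band) & (freqs <= f + half_band)].sum()
            for f in candidate_freqs
        ]
        return candidate_freqs[int(np.argmax(band_power))]

    # Simulated single-channel EEG: the user attends a 13 Hz flicker.
    fs = 256
    t = np.arange(0, 4, 1 / fs)  # 4 s epoch
    rng = np.random.default_rng(0)
    eeg = np.sin(2 * np.pi * 13 * t) + 0.5 * rng.standard_normal(t.size)
    print(detect_ssvep_target(eeg, fs, [9.0, 11.0, 13.0, 15.0]))  # 13.0
    ```

    Mapping each detected frequency to a command (e.g., turn left, turn right, walk) then lets the user steer the surrogate by shifting gaze between flickering targets.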

    Body visual discontinuity affects feeling of ownership and skin conductance responses

    When we look at our hands we are immediately aware that they belong to us, and we rarely doubt the integrity, continuity and sense of ownership of our bodies. Here we explored whether merely manipulating the visual appearance of a virtual limb could influence the subjective feeling of ownership and the physiological responses (Skin Conductance Responses, SCRs) associated with a threatening stimulus approaching the virtual hand. Participants observed, in first-person perspective, a virtual body whose right hand and forearm were (i) connected by a normal wrist (Full-Limb) or a thin rigid wire (Wire), or (ii) disconnected because of a missing wrist (m-Wrist) or a missing wrist plus a plexiglass panel positioned between the hand and the forearm (Plexiglass). While the analysis of subjective ratings revealed that only observation of the naturally, fully connected virtual limb elicited high levels of ownership, high-amplitude SCRs were also found during observation of the non-natural, rigid wire connection. This result suggests that conscious embodiment of an artificial limb requires a natural-looking visual body appearance, while implicit reactivity to threat may require only physical body continuity, even a non-natural-looking one, that allows the implementation of protective reactions to threat.

    Apparent biological motion in first and third person perspective

    Apparent biological motion is the perception of plausible movement when two alternating images depicting the initial and final phase of an action are presented at specific stimulus onset asynchronies. Here, we show weaker subjective perception of apparent biological motion when actions are observed from a first-person relative to a third-person visual perspective. These findings are discussed in the context of sensorimotor contributions to body ownership.

    And yet they act together: Interpersonal perception modulates visuo-motor interference and mutual adjustments during a joint-grasping task

    Predicting “when” a partner will act and “what” they are going to do is crucial in joint-action contexts. However, studies on face-to-face interactions in which two people must mutually adjust their movements in time and space are lacking. Moreover, while studies on passive observation have shown that somato-motor simulative processes are disrupted when the observed actor is perceived as an out-group or unfair individual, the impact of interpersonal perception on joint action has never been directly addressed. Here we explored this issue by comparing the ability of pairs of participants who did or did not undergo an interpersonal-perception manipulation procedure to synchronise their reach-to-grasp movements during: i) a guided interaction, requiring purely temporal reciprocal coordination, and ii) a free interaction, requiring both temporal and spatial adjustments. Behavioural results demonstrate that, while in neutral situations free and guided interactions are equally challenging for participants, a negative interpersonal relationship improves performance in guided interactions at the expense of free ones. This was paralleled at the kinematic level by the absence of movement corrections and by low movement variability in these participants, indicating that partners cooperating within a negative interpersonal bond executed the cooperative task on their own, without reciprocally adapting to each other's motor behaviour. Crucially, participants' performance in the free interaction improved in the manipulated group during the second experimental session, while partners became interdependent, as suggested by higher movement variability and by the appearance of interference between self-executed actions and those observed in the partner. Our study expands current knowledge about on-line motor interactions by showing that visuo-motor interference effects, mutual motor adjustments and motor-learning mechanisms are influenced by social perception.

    Primary somatosensory cortex necessary for the perception of weight from other people's action: a continuous theta-burst TMS experiment

    The presence of a network of areas in the parietal and premotor cortices that are active during both action execution and observation suggests that we might understand the actions of other people by activating the motor programs for making similar actions. Although neurophysiological and imaging studies show an involvement of the primary somatosensory cortex (SI) during action observation and execution, it is unclear whether SI is essential for understanding the somatosensory aspects of observed actions. To address this issue, we applied off-line continuous theta-burst transcranial magnetic stimulation (cTBS) just before a weight-judgment task. Participants observed the right hand of an actor lifting a box and estimated its relative weight. In counterbalanced sessions, we delivered sham and active cTBS over the hand region of the left SI and, to test anatomical specificity, over the left primary motor cortex (M1) and the left superior parietal lobule (SPL). Active cTBS over SI, but not over M1 or SPL, impaired task performance relative to sham cTBS. Moreover, active cTBS delivered over SI just before participants were asked to evaluate the weight of a bouncing ball did not alter performance compared to sham cTBS. These findings indicate that SI is critical for extracting somatosensory features (heavy/light) from observed action kinematics and suggest a prominent role of SI in action understanding.