Response Coordination Emerges in Cooperative but Not Competitive Joint Task
Effective social interactions rely on humans’ ability to attune to others within social contexts. Recently, it has been proposed that the emergence of shared representations, as indexed by the Joint Simon effect (JSE), might result from interpersonal coordination (Malone et al., 2014). The present study aimed at examining interpersonal coordination in cooperative and competitive joint tasks. To this end, in two experiments we investigated response coordination, as reflected in instantaneous cross-correlation, when co-agents cooperate (Experiment 1) or compete against each other (Experiment 2). In both experiments, participants performed a go/no-go Simon task alone and together with another agent in two consecutive sessions. In line with previous studies, we found that social presence differently affected the JSE under cooperative and competitive instructions. Similarly, cooperation and competition were reflected in co-agents’ response coordination. For the cooperative session (Experiment 1), results showed a higher percentage of interpersonal coordination in the joint condition, relative to when participants performed the task alone. No difference in the coordination of responses occurred between the individual and the joint conditions when co-agents were in competition (Experiment 2). Finally, results showed that interpersonal coordination between co-agents implies the emergence of the JSE. Taken together, our results suggest that shared representations seem to be a necessary, but not sufficient, condition for interpersonal coordination
Action intentions modulate allocation of visual attention: electrophysiological evidence
In line with the Theory of Event Coding (Hommel et al., 2001), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Hommel, 2010). This paper investigates the electrophysiological correlates of action-related modulations of selection mechanisms in visual perception. A paradigm combining a visual search task for size and luminance targets with a movement task (grasping or pointing) was introduced, and the EEG was recorded while participants performed the tasks. The results showed that the behavioral congruency effects, i.e., better performance in congruent (relative to incongruent) action-perception trials, were reflected in a modulation of the P1 component as well as the N2pc (an ERP marker of spatial attention). These results support the argument that action planning already modulates early perceptual processing and attention mechanisms
Imaging when acting: picture but not word cues induce action-related biases of visual attention
In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to various degrees. To meet this aim, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, but not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters
Cultural Values, but not Nationality, Predict Social Inclusion of Robots
Research has highlighted that Western and Eastern cultures differ in socio-cognitive mechanisms, such as social inclusion. Interestingly, social inclusion is a phenomenon that might transfer from human-human to human-robot relationships. Although the literature has shown that individual attitudes towards robots are shaped by cultural background, little research has investigated the role of cultural differences in the social inclusion of robots. In the present experiment, we investigated how cultural differences, in terms of nationality and individual cultural stance, influence social inclusion of the humanoid robot iCub in a modified version of the Cyberball game, a classical experimental paradigm measuring social ostracism and exclusion mechanisms. Moreover, we investigated whether the individual tendency to attribute intentionality to robots modulates the degree of inclusion of the iCub robot during the Cyberball game. Results suggested that individuals’ stance towards collectivism and their tendency to attribute a mind to robots both predicted the level of social inclusion of the iCub robot in our version of the Cyberball game
Social inclusion of robots depends on the way a robot is presented to observers
Research has shown that people evaluate others according to specific categories. As this phenomenon seems to transfer from human–human to human–robot interactions, in the present study we focused on (1) the degree of prior knowledge about technology, in terms of theoretical background and technical education, and (2) intentionality attribution toward robots, as factors potentially modulating individuals' tendency to perceive robots as social partners. Thus, we designed a study where we asked two samples of participants varying in their prior knowledge about technology to perform a ball-tossing game, before and after watching a video where the humanoid iCub robot was depicted either as an artificial system or as an intentional agent. Results showed that people were more prone to socially include the robot after observing iCub presented as an artificial system, regardless of their degree of prior knowledge about technology. Therefore, we suggest that the way the robot was presented, and not the prior knowledge about technology, is likely to modulate individuals' tendency to perceive the robot as a social partner
ERP markers of action planning and outcome monitoring in human–robot interaction
The present study aimed to examine event-related potentials (ERPs) of action planning and outcome monitoring in human-robot interaction. To this end, participants were instructed to perform costly actions (i.e. losing points) to stop a balloon from inflating and to prevent its explosion. They performed the task alone (individual condition) or with a robot (joint condition). Similar to findings from human-human interactions, results showed that action planning was affected by the presence of another agent, a robot in this case. Specifically, the early readiness potential (eRP) amplitude was larger in the joint than in the individual condition. The presence of the robot also affected outcome perception and monitoring. Our results showed that the P1/N1 complex was suppressed in the joint, compared to the individual, condition when the worst outcome was expected, suggesting that the presence of the robot affects attention allocation to negative outcomes of one's own actions. Similarly, results showed that larger losses elicited a smaller feedback-related negativity (FRN) in the joint than in the individual condition. Taken together, our results indicate that the social presence of a robot may influence the way we plan our actions and the way we monitor their consequences. Implications of the study for the human-robot interaction field are discussed
I see what you mean
The ability to understand and predict others' behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., refer to their mental states such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot, in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely), gaze cuing effects were significantly larger than in conditions in which adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. Therefore, we conclude that adopting the intentional stance when observing others' behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner
From social brains to social robots: applying neurocognitive insights to human-robot interaction
Amidst the fourth industrial revolution, social robots are resolutely moving from fiction to reality. With sophisticated artificial agents becoming ever more ubiquitous in daily life, researchers across different fields are grappling with questions concerning how humans perceive and interact with these agents and the extent to which the human brain incorporates intelligent machines into our social milieu. This theme issue surveys and discusses the latest findings, current challenges and future directions in neuroscience- and psychology-inspired human–robot interaction (HRI). Critical questions are explored from a transdisciplinary perspective centred around four core topics in HRI: technical solutions for HRI, development and learning for HRI, robots as a tool to study social cognition, and the moral and ethical implications of HRI. Integrating findings from diverse but complementary research fields, including social and cognitive neurosciences, psychology, artificial intelligence and robotics, the contributions showcase ways in which research from disciplines spanning the biological sciences, social sciences and technology deepens our understanding of the potential and limits of robotic agents in human social life