
    EEG theta and Mu oscillations during perception of human and robot actions.

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting that agents that are not sufficiently biological in appearance may impose greater memory processing demands on the observer. Studies combining robotics and neuroscience such as this one allow us to explore the neural basis of action processing on the one hand, and to inform the design of social robots on the other.
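    A minimal sketch of how a mu-suppression index of this kind can be computed is given below. The sampling rate, epoch shapes, and the log-ratio index are illustrative assumptions, not the study's actual analysis pipeline.

```python
"""Sketch: mu-band (8-13 Hz) suppression index from EEG epochs.
Assumptions (not from the paper): synthetic data, 250 Hz sampling,
Welch power estimation, and a log power ratio as suppression index."""
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)

def band_power(epoch, fs, lo, hi):
    """Mean power in [lo, hi] Hz; epoch has shape (n_channels, n_samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean()

def suppression_index(action_epoch, baseline_epoch, fs=FS, lo=8, hi=13):
    """Log ratio of band power: negative values indicate mu suppression
    during action observation relative to baseline."""
    return np.log(band_power(action_epoch, fs, lo, hi) /
                  band_power(baseline_epoch, fs, lo, hi))

# Synthetic example: action-observation epoch with attenuated power.
rng = np.random.default_rng(0)
baseline = rng.standard_normal((3, FS * 2))
action = 0.7 * rng.standard_normal((3, FS * 2))
print(suppression_index(action, baseline))  # < 0 -> suppression
```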

    The social brain: allowing humans to boldly go where no other species has been

    The biological basis of complex human social interaction and communication has been illuminated through a coming together of various methods and disciplines. Among these are comparative studies of other species, studies of disorders of social cognition and developmental psychology. The use of neuroimaging and computational models has given weight to speculations about the evolution of social behaviour and culture in human societies. We highlight some networks of the social brain relevant to two-person interactions and consider the social signals between interacting partners that activate these networks. We make a case for distinguishing between signals that automatically trigger interaction and cooperation and ostensive signals that are used deliberately. We suggest that this ostensive signalling is needed for ‘closing the loop’ in two-person interactions, where the partners each know that they have the intention to communicate. The use of deliberate social signals can serve to increase reputation and trust and facilitates teaching. This is likely to be a critical factor in the steep cultural ascent of mankind.

    Combining intention and emotional state inference in a dynamic neural field architecture for human-robot joint action

    We report on our approach towards creating socially intelligent robots, which is heavily inspired by recent experimental findings about the neurocognitive mechanisms underlying action and emotion understanding in humans. Our approach uses neuro-dynamics as a theoretical language to model cognition, emotional states, decision making and action. The control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations. Different pools of neurons encode relevant information in the form of self-sustained activation patterns, which are triggered by input from connected populations and evolve continuously in time. The architecture implements a dynamic and flexible context-dependent mapping from observed hand and facial actions of the human onto adequate complementary behaviors of the robot that take into account the inferred goal and inferred emotional state of the co-actor. The dynamic control architecture was validated in multiple scenarios in which an anthropomorphic robot and a human operator assemble a toy object from its components. The scenarios focus on the robot’s capacity to understand the human’s actions and emotional states, detect errors, and adapt its behavior accordingly by adjusting its decisions and movements during the execution of the task. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this work was funded in part by research grants from the Portuguese Foundation for Science and Technology (grant numbers SFRH/BD/48527/2008, SFRH/BPD/71874/2010, SFRH/BD/81334/2011), the FP6-IST2 EU-IP Project JAST (project number 003747), and the FP7 Marie Curie ITN Neural Engineering Transformative Technologies NETT (project number 289146).
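    For readers unfamiliar with dynamic neural fields, the sketch below simulates a single one-dimensional Amari-type field of the kind such architectures couple together: a transient localized input triggers an activation pattern that remains self-sustained after the input is removed. All parameter values (kernel shape, resting level, time constant) are illustrative assumptions, not taken from the paper.

```python
"""Sketch: one-dimensional Amari-type dynamic neural field with a
Mexican-hat interaction kernel. Parameters are assumed, not the paper's."""
import numpy as np

N, L = 181, 180.0                  # grid points, field extent
x = np.linspace(0, L, N)
dx = x[1] - x[0]
tau, h = 10.0, -2.0                # time constant and resting level (assumed)

def kernel(d, a_exc=3.0, s_exc=5.0, a_inh=1.5, s_inh=12.0):
    """Local excitation with broader inhibition (Mexican hat)."""
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

W = kernel(np.abs(x[:, None] - x[None, :]))     # pairwise interaction matrix
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))    # sigmoid firing-rate function

u = np.full(N, h)                  # field starts at the resting level
for t in range(2000):              # Euler integration with dt = 1
    # Transient localized input, switched off after 300 steps:
    S = 6.0 * np.exp(-(x - 90.0)**2 / 50.0) if t < 300 else 0.0
    u += (-u + h + (W @ f(u)) * dx + S) / tau

# After the input is removed, a localized bump of activation persists,
# i.e. a self-sustained "memory" pattern as described in the abstract.
print(u.max() > 0)   # True
```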

    A dynamic neural field architecture for a pro-active assistant robot

    We present a control architecture for non-verbal HRI that allows an assistant robot to have a pro-active and anticipatory behavior. The architecture implements the coordination of actions and goals between the human, who needs help, and the robot as a dynamic process that integrates contextual cues, shared task knowledge and the predicted outcome of the human motor behavior. The robot control architecture is formalized by a coupled system of dynamic neural fields representing a distributed network of local but connected neural populations with specific functionalities. Different subpopulations encode task-relevant information about action means, action goals and context in the form of self-sustained activation patterns. These patterns are triggered by input from connected populations and evolve continuously in time under the influence of recurrent interactions. The dynamic control architecture is validated in an assistive task in which an anthropomorphic robot acts as a personal assistant of a person with motor impairments. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing situations. This includes adaptation to different users and mutual compensation of physical limitations. Fundação para a Ciência e a Tecnologia (FCT) - POCI/V.5/A0119/2005; FP6-IST2 EU-project JAST (proj. nr. 003747).
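    The context-dependent mapping from observed actions onto complementary actions can be illustrated with a reduced, node-based version of the coupled-fields idea, sketched below. The action labels, weight values and dynamics parameters are hypothetical, and the paper works with continuous fields rather than discrete nodes.

```python
"""Sketch: observed actions drive a competitive decision layer whose
winner is the robot's complementary action. All labels and weights are
illustrative assumptions."""
import numpy as np

obs_actions = ["reach_wheel", "reach_screw", "hold_out_hand"]
own_actions = ["grasp_wheel", "hand_over_screwdriver", "give_part"]

# Synaptic mapping: rows = observed-action populations, columns =
# complementary-action populations; a context gain can reweight it.
W_map = np.array([[1.0, 0.2, 0.1],
                  [0.1, 1.0, 0.2],
                  [0.2, 0.1, 1.0]])

def decide(observation, context_gain, tau=5.0, h=-1.0, steps=400):
    """Populations excite themselves and inhibit each other, so the
    winner forms a self-sustained decision (winner-take-all)."""
    f = lambda v: 1.0 / (1.0 + np.exp(-5.0 * v))
    W_comp = 2.0 * np.eye(3) - 1.5 * (1 - np.eye(3))
    inp = context_gain * (W_map.T @ observation)
    u = np.full(3, h)
    for _ in range(steps):                 # Euler integration, dt = 1
        u += (-u + h + W_comp @ f(u) + inp) / tau
    return own_actions[int(np.argmax(u))]

# Seeing the human reach for a screw suggests handing over the screwdriver:
obs = np.array([0.1, 2.0, 0.1])
print(decide(obs, context_gain=np.ones(3)))  # -> hand_over_screwdriver
```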

    A socially assistive robot for people with motor impairments

    We present a control architecture for nonverbal HRI that allows an anthropomorphic assistant robot to exhibit pro-active and anticipatory behaviour. The control architecture coordinates actions and goals between a motor-impaired human and the robot as a dynamic process that combines contextual cues, shared task knowledge, and predicted outcomes of the human behaviour. The control architecture is formalized through a coupled system of dynamic neural fields, representing a distributed network of local but connected neural populations with specific functionalities. Each subpopulation encodes relevant information about action means, goals, and context as self-sustained activation patterns. These patterns are triggered by the input and evolve continuously in time under the influence of recurrent interactions. The architecture is validated in an assistive task where the robot acts as an assistant of a person with motor impairments. We show that the context-dependent mapping from action observation onto appropriate complementary actions allows the robot to cope with dynamically changing situations. This includes adaptation to different users and mutual compensation of physical limitations.

    Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks

    The next generation of robot companions or robot working partners will need to satisfy social requirements somehow similar to the famous laws of robotics envisaged by Isaac Asimov decades ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is totally non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the “services” of the animated body schema, which, on the other hand, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions. And so on and so forth. At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.
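    As a toy illustration of the fully bi-directionally connected, non-hierarchical organisation described above, the sketch below wires the five building blocks into a complete graph, so that each can request the “services” of any other. The representation is an illustrative assumption; the article does not prescribe an implementation.

```python
# Toy sketch (assumption, not the article's implementation): the five
# building blocks as a fully bi-directionally connected module graph.
from itertools import combinations

MODULES = [
    "animated_body_schema",
    "imitation_machinery",
    "motor_intentions_machinery",
    "physical_interaction",
    "shared_memory",
]

# A complete undirected graph: no module sits above another in a hierarchy.
LINKS = {frozenset(pair) for pair in combinations(MODULES, 2)}

def connected(a: str, b: str) -> bool:
    """True if modules a and b can exchange services directly."""
    return frozenset((a, b)) in LINKS

# Imitation can prompt body-schema simulations, and vice versa:
assert connected("imitation_machinery", "animated_body_schema")
print(f"{len(LINKS)} bi-directional links among {len(MODULES)} modules")
# -> 10 bi-directional links among 5 modules
```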