
    From Human to Robot Interactions: A Circular Approach towards Trustworthy Social Robots

    Human trust research has uncovered important catalysts for trust building between interaction partners, such as appearance or cognitive factors. The introduction of robots into social interactions calls for a reevaluation of these findings and also brings new challenges and opportunities. In this paper, we suggest approaching trust research in a circular way: drawing from human trust findings, validating them and conceptualizing them for robots, and finally using the precise manipulability of robots to explore previously less-explored areas of trust formation and generate new hypotheses for trust building between agents. Comment: In SCRITA 2023 Workshop Proceedings (arXiv:2311.05401), held in conjunction with the 32nd IEEE International Conference on Robot & Human Interactive Communication, 28/08-31/08 2023, Busan, Korea.

    The iCub multisensor datasets for robot and computer vision applications

    This document presents novel datasets constructed by employing the iCub robot equipped with an additional depth sensor and color camera. We used the robot to acquire color and depth information for 210 objects in different acquisition scenarios. The result is a set of large-scale datasets for robot and computer vision applications: object representation, object recognition and classification, and action recognition. Comment: 6 pages, 6 figures.

    Emotion as an emergent phenomenon of the neurocomputational energy regulation mechanism of a cognitive agent in a decision-making task

    Biological agents need to complete perception-action cycles to perform various cognitive and biological tasks, such as maximizing their wellbeing and their chances of genetic continuation. However, the processes performed in these cycles come at a cost. Such costs force the agent to evaluate a tradeoff between the optimality of decision making and the time and computational effort required to reach a decision. Several cognitive mechanisms that play critical roles in managing this tradeoff have been identified, including adaptation, learning, memory, attention, and planning. One often overlooked outcome of these cognitive mechanisms, despite the critical effect they may have on the perception-action cycle of organisms, is "emotion." In this study, we hold that emotion can be considered an emergent phenomenon of a plausible neurocomputational energy regulation mechanism, which generates an internal reward signal to minimize the neural energy consumption of a sequence of actions (decisions), where each action triggers a visual memory recall process. To realize optimal action selection over a sequence of actions in a visual recalling task, we adopted a model-free reinforcement learning framework in which the reward signal, that is, the cost, was based on the iteration steps to the convergence state of an associative memory network. The proposed mechanism has been implemented in simulation and on a robotic platform: the iCub humanoid robot. The results show that the computational energy regulation mechanism enables the agent to modulate its behavior to minimize the neurocomputational energy required to perform the visual recalling task.
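
    To make the energy-regulation idea concrete, here is a minimal sketch of the loop described above: a model-free value-learning rule in which each action triggers a recall in a Hopfield-style associative memory, and the negative number of convergence iterations serves as the internal reward. The pattern sizes, noise levels, and learning constants are illustrative assumptions, not the authors' implementation, and the sequential task is reduced to a stateless bandit for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hebbian associative memory over binary (+1/-1) patterns: four stored
    # "visual memories" of 64 units each (sizes are illustrative).
    patterns = rng.choice([-1, 1], size=(4, 64))
    W = (patterns.T @ patterns) / patterns.shape[1]
    np.fill_diagonal(W, 0.0)

    def recall_cost(cue, max_steps=50):
        """Iterate the memory to a fixed point; the step count proxies energy."""
        state = cue.copy()
        for step in range(1, max_steps + 1):
            new_state = np.sign(W @ state)
            new_state[new_state == 0] = 1
            if np.array_equal(new_state, state):
                return step
            state = new_state
        return max_steps

    def noisy_cue(pattern, flip_prob):
        flips = rng.random(pattern.shape) < flip_prob
        return np.where(flips, -pattern, pattern)

    # Actions differ in how degraded a recall cue they produce; harder
    # recalls need more iterations and hence cost more energy.
    actions = [0.05, 0.2, 0.4]          # cue noise levels (assumed)
    Q = np.zeros(len(actions))
    alpha, eps = 0.1, 0.1

    for episode in range(500):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
        target = patterns[rng.integers(len(patterns))]
        reward = -recall_cost(noisy_cue(target, actions[a]))  # internal reward signal
        Q[a] += alpha * (reward - Q[a])                       # stateless Q-update

    print("learned action values:", Q)   # the least costly action scores highest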

    Neural representation in F5: cross-decoding from observation to execution


    Robots facilitate human language production

    Despite recent developments in integrating autonomous and human-like robots into many aspects of everyday life, social interactions with robots are still a challenge. Here, we focus on a central tool for social interaction: verbal communication. We assess the extent to which humans co-represent (simulate and predict) a robot's verbal actions. During a joint picture naming task, participants took turns naming objects together with a social robot (Pepper, Softbank Robotics). Previous findings using this task with human partners revealed internal simulations on behalf of the partner down to the level of selecting words from the mental lexicon, reflected in partner-elicited inhibitory effects on subsequent naming. Here, with the robot, the partner-elicited inhibitory effects were not observed. Instead, naming was facilitated, as revealed by faster naming of word categories co-named with the robot. This facilitation suggests that robots, unlike humans, are not simulated down to the level of lexical selection. Instead, a robot's speaking appears to be simulated at the initial level of language production, where the meaning of the verbal message is generated, resulting in facilitated language production due to conceptual priming. We conclude that robots facilitate core conceptualization processes when humans transform thoughts to language during speaking. Peer reviewed.

    Bringing Together Robotics, Neuroscience, and Psychology: Lessons Learned From an Interdisciplinary Project

    The diversified methodology and expertise of interdisciplinary research teams provide the opportunity to overcome the limited perspectives of individual disciplines. This is particularly true at the interface of Robotics, Neuroscience, and Psychology, as the three fields have quite different perspectives and approaches to offer. Nonetheless, aligning backgrounds and interdisciplinary expectations can present challenges due to varied research cultures and practices. Overcoming these challenges stands at the beginning of each productive collaboration and is thus a mandatory step in cognitive neurorobotics. In this article, we share eight lessons that we learned from our ongoing interdisciplinary project on human-robot and robot-robot interaction in social settings. These lessons provide practical advice for scientists initiating interdisciplinary research endeavors. Our advice can help to avoid early problems, deal with differences between research fields, prepare for and anticipate challenges, align project expectations, and speed up research progress, thus promoting effective interdisciplinary research across Robotics, Neuroscience, and Psychology. Peer reviewed.

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because of the complexity of these brain models, which at the current stage cannot meet real-time constraints, it is not possible to embed them in a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure that allows them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. To simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform to robotic tasks as well as to neuroscientific experiments. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement no. 720270 (HBP SGA1).
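
    The closed-loop principle behind the platform (sensor readings feed a neural model, whose spikes drive the simulated body) can be illustrated with a self-contained toy loosely modeled on the Braitenberg use case above. The leaky integrate-and-fire neurons, the two-wheel point-robot kinematics, and all constants below are assumptions for illustration; this is not the Neurorobotics Platform API.

    import numpy as np

    dt = 0.01                              # simulation step [s]
    x, y, heading = 0.0, 0.0, 0.0          # pose of a simple two-wheel robot
    light = np.array([2.0, 1.0])           # light source position

    v_mem = np.zeros(2)                    # membrane potentials (left, right)
    tau, v_thresh = 0.05, 1.0              # LIF constants (assumed)

    for step in range(2000):
        # Sensor-to-brain transfer: each "eye" senses light intensity.
        for side, offset in ((0, +0.4), (1, -0.4)):
            eye = np.array([x + np.cos(heading + offset),
                            y + np.sin(heading + offset)])
            intensity = 1.0 / (1.0 + np.sum((light - eye) ** 2))
            v_mem[side] += dt / tau * (-v_mem[side] + 5.0 * intensity)

        spikes = v_mem >= v_thresh
        v_mem[spikes] = 0.0                # reset after a spike

        # Brain-to-body transfer: contralateral wiring steers toward the light.
        wheel = np.array([0.2, 0.2]) + 0.5 * spikes[::-1]
        speed, turn = wheel.mean(), (wheel[1] - wheel[0]) / 0.3
        x += speed * np.cos(heading) * dt
        y += speed * np.sin(heading) * dt
        heading += turn * dt

    print(f"final distance to light: {np.hypot(*(light - [x, y])):.2f}")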

    iCub! Do you recognize what I am doing?: multimodal human action recognition on multisensory-enabled iCub robot

    This study uses multisensory data (i.e., color and depth) to recognize human actions in the context of multimodal human-robot interaction. Here we employed the iCub robot to observe the predefined actions of human partners using four different tools on 20 objects. We show that the proposed multimodal ensemble learning leverages the complementary characteristics of three color cameras and one depth sensor and, in most cases, improves recognition accuracy compared to models trained with a single modality. The results indicate that the proposed models can be deployed on the iCub robot for tasks that require multimodal action recognition, including social tasks such as partner-specific adaptation and contextual behavior understanding.
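
    One common realization of multimodal ensemble learning is late fusion: train one classifier per modality, then combine their class probabilities. The sketch below illustrates that pattern with synthetic features and scikit-learn classifiers mirroring the abstract's three color cameras and one depth sensor; the modality names and the fusion-by-averaging rule are assumptions, not the paper's actual models or the iCub sensor pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, n_actions = 600, 5
    labels = rng.integers(n_actions, size=n)

    # One synthetic feature matrix per modality; the toy depth channel is
    # noisier than the color channels.
    modalities = {
        "rgb_left": rng.normal(labels[:, None], 1.0, (n, 16)),
        "rgb_right": rng.normal(labels[:, None], 1.0, (n, 16)),
        "rgb_head": rng.normal(labels[:, None], 1.2, (n, 16)),
        "depth": rng.normal(labels[:, None], 2.0, (n, 16)),
    }

    idx_train, idx_test = train_test_split(np.arange(n), random_state=0)
    probs = []
    for name, X in modalities.items():
        clf = LogisticRegression(max_iter=1000).fit(X[idx_train], labels[idx_train])
        p = clf.predict_proba(X[idx_test])
        probs.append(p)
        print(f"{name:10s} acc = {np.mean(p.argmax(1) == labels[idx_test]):.2f}")

    # Late fusion: average the per-modality class probabilities.
    fused = np.mean(probs, axis=0)
    print(f"{'fused':10s} acc = {np.mean(fused.argmax(1) == labels[idx_test]):.2f}")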

    Trust in robot-robot scaffolding

    The study of robot trust in humans and other agents has not been widely explored, despite its importance for near-future human-robot symbiotic societies. Here we propose that robots should trust partners that tend to reduce their computational load, which is analogous to human cognitive load. We test this idea by adopting an interactive visual recalling task. In the first set of experiments, the robot can get help from online instructors with different guiding strategies and must decide which one to trust based on the computational load it experiences during the experiments. The second set of experiments involves robot-robot interactions. Akin to the robot-online-instructor case, the Pepper robot is asked to scaffold the learning of a less capable ‘infant’ robot (Nao), with or without being equipped with the cognitive abilities of theory of mind and task-experience memory, to assess the contribution of these abilities to scaffolding performance. Overall, the results show that robot trust based on computational/cognitive load within a sequential decision-making framework leads to effective partner selection and robot-robot scaffolding. Thus, the computational load incurred by a robot's cognitive processing may serve as an internal signal for assessing the trustworthiness of interaction partners.
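
    One way to read "trust based on computational load within a sequential decision-making framework" is as a value-learning loop over partners, where the reward is the negative load each partner induces. The sketch below simulates that loop; the instructor names, load distributions, and learning constants are hypothetical, and the paper's interactive visual recalling task on the Pepper and Nao robots is not reproduced.

    import numpy as np

    rng = np.random.default_rng(1)

    # Mean computational load (e.g., recall iterations) that each instructor's
    # guiding strategy induces in the robot; names and values are hypothetical.
    instructor_load = {"reliable": 8.0, "random": 15.0, "misleading": 22.0}
    names = list(instructor_load)

    trust = np.zeros(len(names))           # running value estimate per partner
    alpha, eps = 0.1, 0.1

    for trial in range(300):
        if rng.random() < eps:
            choice = rng.integers(len(names))      # occasional exploration
        else:
            choice = int(np.argmax(trust))         # otherwise pick the most trusted
        load = rng.normal(instructor_load[names[choice]], 2.0)
        trust[choice] += alpha * (-load - trust[choice])   # reward = negative load

    for name, t in zip(names, trust):
        print(f"trust[{name}] = {t:.1f}")
    # The partner inducing the lowest load ends up with the highest trust value.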