
    Cognitive Robots for Social Interactions

    One of my goals is to work towards developing Cognitive Robots, especially with regard to improving the functionalities that facilitate interaction with human beings and the objects that surround them. Any cognitive system designed to serve human beings must be capable of processing social signals and ultimately enable efficient prediction and planning of appropriate responses. The main focus of my PhD study is to bridge the gap between the motor space and the visual space. The discovery of mirror neurons ([RC04]) shows that the visual perception of human motion (visual space) is directly associated with the motor control of the human body (motor space). This discovery poses a large number of challenges in fields such as computer vision, robotics, and neuroscience. One of the fundamental challenges is understanding the mapping between the 2D visual space and 3D motor control, and further developing building blocks (primitives) of human motion in both the visual space and the motor space. First, I present my study on the visual-motor mapping of human actions, which aims at mapping human actions in 2D videos to a 3D skeletal representation. Second, I present an automatic algorithm that decomposes motion capture (MoCap) sequences into synergies, along with the times at which they are executed (or "activated") for each joint. Third, I propose to use Granger causality as a tool to study coordinated actions performed by at least two agents; recent scientific studies suggest that the above "action mirroring circuit" might be tuned to action coordination rather than single-action mirroring. Fourth, I present the extraction of key poses in the visual space; these key poses facilitate further study of the "action mirroring circuit". I conclude the dissertation by describing the future of cognitive robotics research.
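    Granger causality, as invoked above, asks whether the past of one time series improves prediction of another beyond the latter's own past. As an illustration only (not code from the dissertation), here is a minimal Python sketch using an F-test between restricted and unrestricted autoregressive fits, with synthetic joint trajectories:

```python
import numpy as np

def lagged_matrix(series, lag):
    """Rows: [s[t-1], ..., s[t-lag]] for t = lag .. len(series)-1."""
    n = len(series)
    return np.column_stack([series[lag - k - 1:n - k - 1] for k in range(lag)])

def granger_f(x, y, lag=2):
    """F-statistic for 'y Granger-causes x': does adding y's past
    to an autoregressive model of x significantly reduce error?"""
    n = len(x)
    target = x[lag:]
    Xr = np.column_stack([np.ones(n - lag), lagged_matrix(x, lag)])  # restricted: x's own past
    Xu = np.column_stack([Xr, lagged_matrix(y, lag)])                # unrestricted: + y's past
    rss_r = np.sum((target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]) ** 2)
    rss_u = np.sum((target - Xu @ np.linalg.lstsq(Xu, target, rcond=None)[0]) ** 2)
    df_num, df_den = lag, (n - lag) - (1 + 2 * lag)
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Synthetic example: x follows y with a one-step delay.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
x[1:] = 0.8 * y[:-1] + 0.2 * rng.standard_normal(499)
print("F(y -> x) =", granger_f(x, y), " F(x -> y) =", granger_f(y, x))
```

    Applied to the trajectories of two interacting actors, a large F in one direction and a small one in the other is the kind of asymmetry that can be used to characterize leader-follower coordination.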

    An Open-Source Simulator for Cognitive Robotics Research: The Prototype of the iCub Humanoid Robot Simulator

    This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed within the “RobotCub” project, a collaborative European project aiming to create an open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project “ITALK” on the integration and transfer of action and language knowledge in cognitive robots. The simulator is available as open source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.

    Modelling mental rotation in cognitive robots

    Mental rotation concerns the cognitive processes that allow an agent to mentally rotate the image of an object in order to solve a given task, for example to judge whether two objects with different orientations are the same or different. Here we present a system-level bio-constrained model, developed within a neurorobotics framework, that provides an embodied account of mental rotation processes relying on neural mechanisms involving motor affordance encoding, motor simulation, and the anticipation of the sensory consequences of actions (both visual and proprioceptive). This model and methodology are in agreement with the most recent theoretical and empirical research on mental rotation. The model was validated through experiments with a simulated humanoid robot (iCub) engaged in solving a classical mental rotation test. The results show that the robot is able to solve the task and, in agreement with data from psychology experiments, exhibits response times that depend linearly on the angular disparity between the objects. This model represents a novel, detailed operational account of the embodied brain mechanisms that may underlie mental rotation.
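    The linear relation reported here is the classic mental-rotation signature, RT ≈ a + b·θ. A minimal sketch of how such a relation would be quantified; the numbers below are hypothetical, not results from the paper:

```python
import numpy as np

# Hypothetical response times (ms) at increasing angular disparities;
# illustrative only, not data from the paper.
angles = np.array([0, 45, 90, 135, 180])        # degrees
rts = np.array([550, 720, 880, 1060, 1230])     # milliseconds

slope, intercept = np.polyfit(angles, rts, 1)   # least-squares line RT = b*angle + a
print(f"RT ≈ {intercept:.0f} ms + {slope:.2f} ms/deg * angle")
```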

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that can interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene functionality test-bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from two different datasets.
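    The two-stage pipeline described above (propose candidate regions, then assign each a functional label) can be sketched structurally as follows; the sliding-window proposer, the label set, and the random scorer are placeholders standing in for the authors' trained networks:

```python
import numpy as np

def propose_regions(image, step=64, size=128):
    """Stage 1 stand-in: a dense sliding window. A real system would
    use a learned region-proposal network instead."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            yield (x, y, size, size)

FUNCTIONS = ["sittable", "openable", "placeable", "background"]  # hypothetical labels

def classify_function(crop):
    """Stage 2 stand-in: score each functional class for a crop.
    A real system would run a trained CNN here."""
    scores = np.random.dirichlet(np.ones(len(FUNCTIONS)))
    return dict(zip(FUNCTIONS, scores))

def detect_functional_areas(image, threshold=0.5):
    detections = []
    for (x, y, w, h) in propose_regions(image):
        scores = classify_function(image[y:y + h, x:x + w])
        label, conf = max(scores.items(), key=lambda kv: kv[1])
        if label != "background" and conf > threshold:
            detections.append(((x, y, w, h), label, conf))
    return detections

print(detect_functional_areas(np.zeros((256, 256, 3))))
```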

    Interactive Robot Learning of Gestures, Language and Affordances

    A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose aim is to enable effective teamwork between humans and robots. However, in many situations human teams are still superior to human-robot teams, primarily because human teams can easily agree on a common goal through language, and the individual members observe each other effectively, leveraging their shared motor repertoire and sensorimotor resources. This paper shows that for cognitive robots it is possible, and indeed fruitful, to combine knowledge acquired from interacting with elements of the environment (affordance exploration) with the probabilistic observation of another agent's actions. We propose a model that unites (i) learning robot affordances and word descriptions with (ii) statistical recognition of human gestures with vision sensors. We discuss theoretical motivations and possible implementations, and we show initial results which highlight that, after having acquired knowledge of its surrounding environment, a humanoid robot can generalize this knowledge to the case where it observes another agent (a human partner) performing the same motor actions previously executed during training. Code is available at https://github.com/gsaponaro/glu-gesture.
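    One way to picture the proposed combination of affordance knowledge and gesture recognition is as a Bayesian fusion over candidate actions; the probability tables below are hypothetical placeholders, not the paper's learned model:

```python
# Hypothetical discrete model: fuse affordance knowledge with gesture
# evidence to infer which action a partner is performing.
ACTIONS = ["grasp", "tap", "push"]

# P(effect = "object moves" | action), as could be learned from the
# robot's own affordance exploration (illustrative numbers).
p_effect_given_action = {"grasp": 0.1, "tap": 0.6, "push": 0.9}

# P(gesture observation | action) from a vision-based gesture recognizer.
p_gesture_given_action = {"grasp": 0.7, "tap": 0.2, "push": 0.1}

def posterior(prior=None):
    """Posterior over actions, assuming the two evidence sources are
    conditionally independent given the action."""
    prior = prior or {a: 1 / len(ACTIONS) for a in ACTIONS}
    unnorm = {a: prior[a] * p_effect_given_action[a] * p_gesture_given_action[a]
              for a in ACTIONS}
    z = sum(unnorm.values())
    return {a: v / z for a, v in unnorm.items()}

print(posterior())
```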

    Reasoning with BDI robots: from simulation to physical environment – implementations and limitations

    This paper gives an overview of the state of research into cognitive robots, driven by insights from research that has moved from simulation to physical robots over the course of a number of sub-projects. A number of major issues arising from seminal research in the area are explored, in particular in the context of advances in the field of robotics and of a slowly developing model of cognition and behaviour that is being mapped onto robot colonies. The work presented is ongoing, but major themes such as the veracity of data and information, and their effect on robot control architectures, are explored. A small number of case studies are presented in which the theoretical framework has been used to implement control of physical robots. The limitations of the current research, and of the wider field of behavioural and cognitive robots, are also discussed.
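    For readers unfamiliar with the BDI (Belief-Desire-Intention) model named in the title, a minimal, generic deliberation cycle looks like the sketch below; this is schematic, not the control architecture used in the paper:

```python
# Minimal generic BDI deliberation cycle (perceive -> deliberate -> act);
# a schematic sketch, not the authors' implementation.
class BDIAgent:
    def __init__(self):
        self.beliefs = {"battery_low": False, "at_charger": False}
        self.desires = ["patrol", "recharge"]
        self.intention = None

    def perceive(self, percept):
        # Update beliefs from (possibly noisy) sensor data; the paper
        # stresses that the veracity of such data is a key issue.
        self.beliefs.update(percept)

    def deliberate(self):
        # Commit to the desire consistent with current beliefs.
        self.intention = "recharge" if self.beliefs["battery_low"] else "patrol"

    def act(self):
        if self.intention == "recharge" and not self.beliefs["at_charger"]:
            return "navigate_to_charger"
        return {"patrol": "follow_route", "recharge": "dock"}[self.intention]

agent = BDIAgent()
agent.perceive({"battery_low": True})
agent.deliberate()
print(agent.act())  # -> "navigate_to_charger"
```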

    Supporting communication among cognitive robots in simulated environments

    Despite the fact that the Khepera II is an experimental platform widely used in the robotics research community, its application to cognitive robotics is not as extensive as it could be. In this field of research, the Khe-DeLP framework has emerged as an interesting proposal for developing cognitive agents that control real and simulated Khepera II robots. Although Khe-DeLP allows working with multiple robots within the same environment, at present only non-intentional communication among them can be achieved in this framework. In this work we therefore present extensions to Khe-DeLP that make it possible to model simulated scenarios where multiple robots interact through explicit communication. This new feature improves Khe-DeLP, since coordination problems of any kind can now be simulated within the framework. As a proof of concept, an example is presented that validates coordinated behaviours of the robots using the new communication features included in Khe-DeLP. Presented at the XII Workshop on Agents and Intelligent Systems (WASI), Red de Universidades con Carreras en Informática (RedUNCI).
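    Explicit communication between simulated robots of the kind described can be sketched as a simple message-passing layer; the MessageBus API below is an illustrative stand-in, not the actual Khe-DeLP extension:

```python
from collections import defaultdict, deque

class MessageBus:
    """Illustrative point-to-point message bus for robot-to-robot
    communication in a simulated environment."""
    def __init__(self):
        self.inboxes = defaultdict(deque)

    def send(self, sender, recipient, content):
        self.inboxes[recipient].append((sender, content))

    def receive(self, robot):
        # Returns the oldest pending message, or None if the inbox is empty.
        return self.inboxes[robot].popleft() if self.inboxes[robot] else None

bus = MessageBus()
bus.send("robot_1", "robot_2", {"claim": "zone_A", "action": "collect"})
print(bus.receive("robot_2"))  # robot_2 can now avoid zone_A and coordinate
```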

    An embodied and grounded perspective on concepts

    According to the mainstream view in psychology and neuroscience, concepts are rather stable informational units represented in a propositional format. In the view I will outline, instead, concepts correspond to patterns of activation of the perception, action, and emotional systems that are typically activated when we interact with the entities they refer to. Starting from this embodied and grounded approach to concepts, I will focus on different research lines and present experimental evidence concerning concepts of objects, concepts of actions, and abstract concepts. I will argue that, in order to account for abstract concepts, embodied and grounded theories should be extended.

    Knowledge representation and exploitation for interactive and cognitive robots

    As robots begin to enter our daily lives, they need advanced knowledge representations and the associated reasoning capabilities to understand and model their environments. The presence of humans in such environments, and therefore the need to interact with them, brings additional requirements: knowledge is no longer used by the robot for the sole purpose of acting physically on the environment, but also to communicate and share information with humans. Knowledge should therefore no longer be understandable only by the robot itself, but should also be expressible to humans. In the first part of this thesis, we present our first contribution, Ontologenius. This software maintains knowledge bases in the form of ontologies, reasons on them, and manages them dynamically. We start by explaining why this software is suitable for human-robot interaction (HRI) applications: for example, to implement theory-of-mind abilities, it can represent the robot's own knowledge base as well as an estimate of the knowledge bases of its human partners. We continue with a presentation of its interfaces. This part ends with a performance analysis demonstrating its online usability. In the second part, we present our contributions to two knowledge exploration problems around the general topic of spatial referring and the use of semantic knowledge. We start with the route description task, which aims to propose a set of possible routes leading to a target destination within the framework of a guiding task. To achieve this task, we propose an ontology for describing the topology of indoor environments, along with two route-search algorithms. The second knowledge exploration problem we tackle is Referring Expression Generation (REG): selecting the optimal set of pieces of information to communicate so that a hearer can identify the referred entity in a given context. This contribution is then refined to use past activities coming from joint action between a robot and a human, in order to generate new kinds of referring expressions. It is also linked with a symbolic task planner to estimate the feasibility and cost of future communications. We conclude this thesis with the presentation of two cognitive architectures: the first uses our route description contribution and the second takes advantage of our Referring Expression Generation contribution. Both use Ontologenius to manage the semantic knowledge base. Through these two architectures, we show how our contributions enabled the knowledge base to gradually take a central role, providing knowledge to all the components of the architectures.
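    Referring Expression Generation of the kind described is classically addressed by greedy incremental property selection (in the style of Dale and Reiter's Incremental Algorithm); below is a minimal sketch under that assumption, with a hypothetical scene rather than Ontologenius data:

```python
# Greedy referring-expression generation in the style of the Incremental
# Algorithm (Dale & Reiter); a sketch, not Ontologenius code.
scene = {
    "cup_1":  {"type": "cup",  "color": "red",  "on": "table"},
    "cup_2":  {"type": "cup",  "color": "blue", "on": "table"},
    "book_1": {"type": "book", "color": "red",  "on": "shelf"},
}

def refer(target, scene, preference=("type", "color", "on")):
    """Accumulate properties, in preference order, until the target is
    the only entity matching the expression."""
    distractors = {e for e in scene if e != target}
    expression = {}
    for prop in preference:
        value = scene[target][prop]
        ruled_out = {e for e in distractors if scene[e][prop] != value}
        if ruled_out:                  # keep properties with discriminatory power
            expression[prop] = value
            distractors -= ruled_out
        if not distractors:            # target uniquely identified
            return expression
    return None                        # no distinguishing description exists

print(refer("cup_1", scene))  # -> {'type': 'cup', 'color': 'red'}
```

    The refinement mentioned in the abstract would extend the property pool beyond static attributes to facts about past joint activity (e.g., "the cup you just handed me").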