    Symbol Emergence in Robotics: A Survey

    Humans learn the use of language through physical interaction with their environment and semiotic communication with other people. It is therefore important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on robotic systems and machine-learning methods that learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interaction and for developing robots that can communicate smoothly with human users over the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, which is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensorimotor information, including visual, haptic, auditory, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER. Comment: submitted to Advanced Robotics.
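    As an illustration of the unsupervised multimodal categorization mentioned above, the sketch below clusters objects by concatenating feature vectors from several modalities. It is a minimal toy example under assumed feature dimensions and category counts, not one of the models surveyed in the paper.

```python
# Minimal sketch of unsupervised multimodal categorization (illustrative only;
# the surveyed systems use richer probabilistic models). Feature dimensions
# and the number of categories are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_objects = 60
visual = rng.normal(size=(n_objects, 8))   # e.g. colour/shape features
haptic = rng.normal(size=(n_objects, 4))   # e.g. hardness/weight features
audio = rng.normal(size=(n_objects, 6))    # e.g. impact-sound features

# Concatenate the modalities into one observation per object and cluster
# without any labels; each mixture component is a candidate object category.
X = np.hstack([visual, haptic, audio])
gmm = GaussianMixture(n_components=5, random_state=0).fit(X)
print(gmm.predict(X)[:10])
```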

    Computational and Robotic Models of Early Language Development: A Review

    We review computational and robotic models of early language learning and development. We first explain why and how these models are used to better understand how children learn language. We argue that they provide concrete theories of language learning as a complex dynamic system, complementing traditional methods in psychology and linguistics. We review different modeling formalisms, grounded in techniques from machine learning and artificial intelligence such as Bayesian and neural-network approaches. We then discuss their role in understanding several key mechanisms of language development: cross-situational statistical learning, embodiment, situated social interaction, intrinsically motivated learning, and cultural evolution. We conclude by discussing future challenges for research, including the modeling of large-scale empirical data about language acquisition in real-world environments. Keywords: early language learning, computational and robotic models, machine learning, development, embodiment, social interaction, intrinsic motivation, self-organization, dynamical systems, complexity. Comment: to appear in the International Handbook on Language Development, ed. J. Horst and J. von Koss Torkildsen, Routledge.
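    Of the mechanisms listed above, cross-situational statistical learning is the most directly algorithmic. The toy sketch below, with made-up scenes and not taken from any reviewed model, shows the core idea of accumulating word-referent co-occurrences across individually ambiguous situations.

```python
# Illustrative sketch only: cross-situational learning as word-referent
# co-occurrence counting over toy, hypothetical learning situations.
from collections import defaultdict

# Each situation pairs the words heard with the candidate referents in view.
situations = [
    ({"ball", "red"}, {"BALL", "CUP"}),
    ({"ball"}, {"BALL", "DOG"}),
    ({"cup", "blue"}, {"CUP", "BALL"}),
    ({"cup"}, {"CUP", "DOG"}),
]

cooc = defaultdict(lambda: defaultdict(int))
for words, referents in situations:
    for w in words:
        for r in referents:
            cooc[w][r] += 1

# Across situations the spurious pairings wash out, while consistent
# word-referent pairs accumulate the highest counts.
for w in ("ball", "cup"):
    best = max(cooc[w], key=cooc[w].get)
    print(w, "->", best, cooc[w][best])
```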

    Interactive Robot Learning of Gestures, Language and Affordances

    A growing field in robotics and Artificial Intelligence (AI) research is human-robot collaboration, whose goal is to enable effective teamwork between humans and robots. However, in many situations human teams are still superior to human-robot teams, primarily because human teams can easily agree on a common goal through language, and the individual members observe each other effectively, leveraging their shared motor repertoire and sensorimotor resources. This paper shows that for cognitive robots it is possible, and indeed fruitful, to combine knowledge acquired from interacting with elements of the environment (affordance exploration) with the probabilistic observation of another agent's actions. We propose a model that unites (i) learning of robot affordances and word descriptions with (ii) statistical recognition of human gestures from vision sensors. We discuss theoretical motivations and possible implementations, and we show initial results highlighting that, after having acquired knowledge of its surrounding environment, a humanoid robot can generalize this knowledge to the case in which it observes another agent (a human partner) performing the same motor actions previously executed during training. Comment: code available at https://github.com/gsaponaro/glu-gesture
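    The sketch below is a minimal, hypothetical illustration of the kind of probabilistic combination described above: a gesture recognizer's likelihoods are fused with self-learned affordance knowledge to anticipate the effect of a partner's action. All action names and probabilities are placeholders, not the paper's learned model.

```python
# Illustrative sketch only: combine a gesture recognizer's output with
# self-learned affordance knowledge to predict the effect of a partner's
# action. All names and probabilities are hypothetical placeholders.
actions = ["grasp", "tap", "touch"]

# P(gesture observation | action) from a vision-based gesture recognizer.
p_gesture_given_action = {"grasp": 0.2, "tap": 0.7, "touch": 0.1}
prior = {a: 1.0 / len(actions) for a in actions}

# Bayes' rule: belief over which action the human partner is performing.
unnorm = {a: prior[a] * p_gesture_given_action[a] for a in actions}
z = sum(unnorm.values())
p_action = {a: v / z for a, v in unnorm.items()}

# P(effect = "object moves" | action), learned from the robot's own
# affordance exploration and reused to anticipate the partner's effect.
p_move_given_action = {"grasp": 0.1, "tap": 0.8, "touch": 0.2}
p_move = sum(p_action[a] * p_move_given_action[a] for a in actions)
print(p_action, round(p_move, 3))
```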

    Grounding action in visuo-haptic space using experience networks

    Traditional approaches to the use of machine learning algorithms do not provide a method for learning multiple tasks in one shot on an embodied robot. It is proposed that grounding actions within the sensory space leads to the development of action-state relationships that can be reused despite a change in task. A novel approach called an Experience Network is developed and assessed on a real-world robot required to perform three separate tasks. After grounded representations were developed in the initial task, only minimal further learning was required to perform the second and third tasks.
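    The abstract does not detail the Experience Network itself; the sketch below is only a hypothetical illustration of the underlying idea of grounding actions in sensory states so that experience gathered in one task can be reused in later tasks.

```python
# Illustrative sketch only: store action outcomes indexed by discretized
# sensory states and reuse them across tasks. The state discretization and
# update rule are hypothetical, not the paper's Experience Network.
from collections import defaultdict

experience = defaultdict(lambda: defaultdict(float))  # state -> action -> value

def discretize(sensor_reading):
    # Hypothetical coarse binning of a continuous visuo-haptic reading.
    return tuple(round(x, 1) for x in sensor_reading)

def record(sensor_reading, action, outcome):
    # Running average of how well this action worked in this sensory state.
    s = discretize(sensor_reading)
    experience[s][action] = 0.9 * experience[s][action] + 0.1 * outcome

def act(sensor_reading, available_actions):
    s = discretize(sensor_reading)
    if experience[s]:  # reuse grounded experience from an earlier task
        return max(experience[s], key=experience[s].get)
    return available_actions[0]  # otherwise fall back to exploration

record((0.12, 0.33), "push", 1.0)
print(act((0.11, 0.34), ["push", "pull"]))  # reuses the grounded "push" experience
```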

    Learning from sensory predictions for autonomous and adaptive exploration of object shape with a tactile robot

    Humans use information from sensory predictions, together with current observations, for the optimal exploration and recognition of their surrounding environment. In this work, two novel adaptive perception strategies are proposed for accurate and fast exploration of object shape with a robotic tactile sensor. These strategies, called (1) adaptive weighted prior and (2) adaptive weighted posterior, combine tactile sensory predictions and current sensor observations to autonomously adapt the accuracy and speed of active Bayesian perception in object exploration tasks. Sensory predictions, obtained from a forward model, use a novel Predicted Information Gain method. These predictions are used by the tactile robot to analyse "what would have happened" had certain decisions been made at previous decision times. The accuracy of the predictions is evaluated and controlled by a confidence parameter, which ensures that the adaptive perception strategies rely more on predictions when they are accurate, and more on current sensory observations otherwise. This work is systematically validated on the recognition of angle and position data extracted from the exploration of object shape, using a biomimetic tactile sensor and a robotic platform. The exploration task implements the contour-following procedure used by humans to extract object shape with the sense of touch. The validation is performed with the adaptive weighted strategies and with active perception alone. The adaptive approach achieved higher angle accuracy (2.8 deg) than active perception (5 deg). The position accuracy was similar for all perception methods (0.18 mm). The reaction time, i.e., the number of tactile contacts needed by the tactile robot to make a decision, was reduced by the adaptive perception methods (1 tap) compared with active perception (5 taps). The results show that the adaptive perception strategies can enable future robots to adapt their performance, while improving the trade-off between accuracy and reaction time, for tactile exploration, interaction and recognition tasks.
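    The sketch below is a minimal, hypothetical illustration of the adaptive-weighted-prior idea: the prior for the next tactile contact is a confidence-weighted mixture of a forward-model prediction and the current belief, updated by Bayes' rule until a decision threshold is reached. The classes, likelihoods, confidence value, and threshold are placeholders, not the paper's models.

```python
# Illustrative sketch only: adaptive weighted prior for sequential Bayesian
# perception over hypothetical tactile classes.
import numpy as np

classes = np.arange(4)                    # e.g. candidate contact angles
belief = np.full(classes.size, 0.25)      # uniform initial belief
confidence = 0.7                          # how much the forward model is trusted

# Hypothetical forward-model prediction of the belief at the next contact.
predicted = np.array([0.05, 0.15, 0.70, 0.10])
# Hypothetical per-class likelihood of a single tactile observation.
likelihood = np.array([0.10, 0.20, 0.60, 0.10])

for tap in range(5):
    # Adaptive weighted prior: mix the prediction with the current belief.
    prior = confidence * predicted + (1.0 - confidence) * belief
    posterior = prior * likelihood
    belief = posterior / posterior.sum()
    if belief.max() > 0.9:                # decision threshold reached
        print("decision after", tap + 1, "taps: class", int(classes[belief.argmax()]))
        break
```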

    Adaptive perception: learning from sensory predictions to extract object shape with a biomimetic fingertip

    In this work, we present an adaptive perception method to improve the accuracy and speed of a tactile exploration task. This work extends our previous studies on sensorimotor control strategies for active tactile perception in robotics. First, we present the active Bayesian perception method, which actively repositions a robot to accumulate evidence from better locations and thereby reduce uncertainty. Second, we describe the adaptive perception method that, based on a forward model and a predicted information gain approach, allows the robot to analyse "what would have happened" had a different decision been made at a previous decision time. This approach makes it possible to adapt the active Bayesian perception process and improve the accuracy and reaction time of an exploration task. Our methods are validated with a contour-following exploratory procedure using a touch sensor. The results show that the adaptive perception method allows the robot to make sensory predictions and autonomously adapt, improving the performance of the exploration task.
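    As a minimal, hypothetical illustration of the active Bayesian perception ingredient above, the sketch below accumulates tactile evidence about an edge position and repositions the sensor at the current best estimate, so that later contacts are taken from a more informative, less noisy location. All numbers and the sensor model are placeholders, not the paper's setup.

```python
# Illustrative sketch only: active Bayesian perception with repositioning.
import numpy as np

edge_candidates = np.linspace(0.0, 20.0, 201)   # possible edge positions (mm)
belief = np.full(edge_candidates.size, 1.0 / edge_candidates.size)
true_edge, base_noise = 12.3, 0.5               # hypothetical ground truth
sensor_pos = 0.0
rng = np.random.default_rng(1)

for contact in range(15):
    # Hypothetical sensor model: readings are noisier far from the edge.
    noise = base_noise * (1.0 + 0.2 * abs(true_edge - sensor_pos))
    reading = true_edge + rng.normal(scale=noise)
    likelihood = np.exp(-0.5 * ((edge_candidates - reading) / noise) ** 2)
    belief *= likelihood
    belief /= belief.sum()
    if belief.max() > 0.99:                     # confident enough to decide
        break
    # Active step: reposition the sensor at the MAP estimate so the next
    # contact is taken from a better (less noisy) location.
    sensor_pos = edge_candidates[belief.argmax()]

print("contacts used:", contact + 1,
      "| estimated edge:", round(float(edge_candidates[belief.argmax()]), 2))
```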