
    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. Obtaining a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development is therefore very important. Recently, many studies have been conducted on constructing robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users in the long term both require an understanding of the dynamics of symbol systems, which is crucially important. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information as well as acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. (Comment: submitted to Advanced Robotics)
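
    The "multimodal categorization" topic above can be made concrete with a small sketch: cluster concatenated visual, haptic, and auditory feature vectors without labels, so that object categories emerge from co-occurring sensations. This is a minimal toy illustration under assumed feature dimensions, not the surveyed systems' actual method (which typically uses richer models such as multimodal latent Dirichlet allocation):

        # Minimal sketch of unsupervised multimodal categorization.
        # Assumption: each observation is already summarized as fixed-length
        # per-modality feature vectors (the dimensions below are made up).
        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        visual = rng.normal(size=(60, 16))  # e.g., color/shape descriptors
        haptic = rng.normal(size=(60, 8))   # e.g., hardness/texture readings
        audio = rng.normal(size=(60, 12))   # e.g., impact-sound spectra

        # Standardize each modality before concatenating so that no single
        # modality dominates the clustering because of its scale.
        fused = np.hstack([StandardScaler().fit_transform(m)
                           for m in (visual, haptic, audio)])

        # Let categories emerge without labels: fit a Gaussian mixture and
        # read off the most likely component for each observation.
        gmm = GaussianMixture(n_components=5, random_state=0).fit(fused)
        print(gmm.predict(fused)[:10])  # cluster index per observation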

    Affective Communication for Socially Assistive Robots (SARs) for Children with Autism Spectrum Disorder: A Systematic Review

    Research on affective communication for socially assistive robots has been conducted to enable physical robots to perceive, express, and respond to emotion. However, the use of affective computing in social robots has been limited, especially when the robots are designed for children, and in particular for children with autism spectrum disorder (ASD). Social robots are based on cognitive-affective models, which allow them to communicate with people while following social behaviors and rules. However, interactions between a child and a robot may differ from those with an adult, or may change when the child has an emotional deficit. In this study, we systematically reviewed studies related to computational models of emotions for children with ASD. We used the Scopus, WoS, Springer, and IEEE Xplore databases to answer research questions related to the definition, interaction, and design of computational models supported by theoretical psychology approaches, covering work from 1997 to 2021. Our review found 46 articles; not all of the studies considered children, or children with ASD. (This research was funded by VRIEA-PUCV, grant number 039.358/202)

    Learning semantic representations through multimodal graph neural networks

    Graduation project (Licenciatura in Mechatronics Engineering), Instituto Tecnológico de Costa Rica, Área Académica de Ingeniería Mecatrónica, 2021. To provide semantic knowledge about the objects that robotic systems will interact with, one must address the problem of learning semantic representations from the modalities of language and vision. Semantic knowledge refers to conceptual information, including semantic (meaning) and lexical (word) information, and it provides the basis for many of our everyday non-verbal behaviors. It is therefore necessary to develop methods that enable robots to process sentences in a real-world environment, so this project introduces a novel approach that uses Graph Convolutional Networks to learn grounded meaning representations of words. The proposed model consists of a first layer that encodes unimodal representations and a second layer that integrates these unimodal representations into a single one, learning a representation from both modalities. Experimental results show that the proposed model outperforms the state of the art in semantic similarity and can simulate human similarity judgments. To the best of our knowledge, this approach is novel in its use of Graph Convolutional Networks to enhance the quality of word representations.
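
    The two-layer architecture described above can be sketched roughly as follows: a first graph-convolution layer encodes each modality's word features separately, and a second layer mixes the concatenated unimodal embeddings into one multimodal representation. This is a hedged reconstruction from the abstract alone, not the project's code; the graph construction, dimensions, and class names are assumptions:

        # Sketch of a two-layer multimodal GCN in plain PyTorch.
        # Assumptions: a normalized word-graph adjacency matrix a_hat and
        # per-word language/vision feature matrices are given.
        import torch
        import torch.nn as nn

        class GCNLayer(nn.Module):
            # One graph convolution: H' = ReLU(a_hat @ H @ W).
            def __init__(self, in_dim, out_dim):
                super().__init__()
                self.linear = nn.Linear(in_dim, out_dim)

            def forward(self, a_hat, h):
                return torch.relu(a_hat @ self.linear(h))

        class MultimodalGCN(nn.Module):
            # Layer 1 encodes each modality separately; layer 2 fuses them.
            def __init__(self, lang_dim, vis_dim, hid_dim, out_dim):
                super().__init__()
                self.lang_enc = GCNLayer(lang_dim, hid_dim)  # unimodal
                self.vis_enc = GCNLayer(vis_dim, hid_dim)    # unimodal
                self.fuse = GCNLayer(2 * hid_dim, out_dim)   # fusion

            def forward(self, a_hat, lang_feats, vis_feats):
                h_lang = self.lang_enc(a_hat, lang_feats)
                h_vis = self.vis_enc(a_hat, vis_feats)
                return self.fuse(a_hat, torch.cat([h_lang, h_vis], dim=-1))

        # Toy usage: 100 words, identity adjacency as a stand-in graph.
        n = 100
        model = MultimodalGCN(lang_dim=300, vis_dim=512, hid_dim=128, out_dim=64)
        out = model(torch.eye(n), torch.randn(n, 300), torch.randn(n, 512))
        print(out.shape)  # torch.Size([100, 64])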

    Emotion in Future Intelligent Machines

    Full text link
    Over the past decades, research in cognitive and affective neuroscience has emphasized that emotion is crucial for human intelligence and in fact inseparable from cognition. Concurrently, there has been significantly growing interest in simulating and modeling emotion in robots and artificial agents. Yet, existing models of emotion and their integration in cognitive architectures remain quite limited and frequently disconnected from neuroscientific evidence. We argue that a stronger integration of emotion in robot models is critical for the design of intelligent machines capable of tackling real-world problems. Drawing on current neuroscientific knowledge, we provide a set of guidelines for future research in artificial emotion and intelligent machines more generally.

    Emerging Linguistic Functions in Early Infancy

    Get PDF
    This paper presents results from experimental studies on early language acquisition in infants and attempts to interpret the experimental results within the framework of the Ecological Theory of Language Acquisition (ETLA) recently proposed by Lacerda et al. (2004a). From this perspective, the infant’s first steps in the acquisition of the ambient language are seen as a consequence of the infant’s general capacity to represent sensory input and of the infant’s interaction with other actors in its immediate ecological environment. On the basis of available experimental evidence, it will be argued that ETLA offers a productive alternative to traditional descriptive views of the language acquisition process by presenting an operative model of how early linguistic function may emerge through interaction.