    Cross-situational and supervised learning in the emergence of communication

    Scenarios for the emergence or bootstrapping of a lexicon involve the repeated interaction between at least two agents who must reach a consensus on how to name N objects using H words. Here we consider minimal models of two types of learning algorithms: cross-situational learning, in which individuals determine the meaning of a word by looking for something in common across all observed uses of that word, and supervised operant-conditioning learning, in which there is strong feedback between individuals about the intended meaning of the words. Despite the stark differences between these learning schemes, we show that they yield the same communication accuracy in the realistic limits of large N and H, which coincides with the result of the classical occupancy problem of randomly assigning N objects to H words.
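    The occupancy-problem baseline mentioned in this abstract is easy to check numerically. The sketch below is an illustration only, not the authors' code: it assigns each of N objects to one of H words uniformly at random and counts the fraction of objects whose word is shared with no other object, one plausible proxy for the communication accuracy referred to above. For large N and H this fraction approaches exp(-N/H).

    import random
    from collections import Counter

    def occupancy_accuracy(n_objects, n_words, trials=200, seed=0):
        """Fraction of objects assigned to a word that no other object uses,
        when N objects are mapped to H words uniformly at random."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            words = [rng.randrange(n_words) for _ in range(n_objects)]
            counts = Counter(words)
            total += sum(1 for w in words if counts[w] == 1) / n_objects
        return total / trials

    # For N = H = 1000 the simulated value is close to exp(-1), roughly 0.368.
    print(occupancy_accuracy(1000, 1000))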

    Towards the Grounding of Abstract Words: A Neural Network Model for Cognitive Robots

    In this paper, a model based on Artificial Neural Networks (ANNs) extends the symbol grounding mechanism to abstract words for cognitive robots. The aim of this work is to obtain a semantic representation of abstract concepts through grounding in sensorimotor experiences on a humanoid robotic platform. Simulation experiments have been developed in a software environment for the iCub robot. Words that express general actions with a sensorimotor component are first taught to the simulated robot. During the training stage the robot first learns to perform a set of basic action primitives through the mechanism of direct grounding. Subsequently, the grounding of action primitives, acquired via direct sensorimotor experience, is transferred to higher-order words via linguistic descriptions. The idea is that by combining words grounded in sensorimotor experience the simulated robot can acquire more abstract concepts. The experiments aim to teach the robot the meaning of abstract words by making it experience sensorimotor actions. The iCub humanoid robot will be used for testing the experiments on a real robotic architecture.
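    The grounding-transfer step described above can be pictured schematically: directly grounded action primitives are represented as sensorimotor vectors, and a higher-order word defined through them inherits a combination of those vectors. The primitive names, the vectors, and the averaging rule below are illustrative assumptions, not the network used in the paper.

    import numpy as np

    # Hypothetical sensorimotor embeddings learned by direct grounding
    # (names and values are illustrative only).
    primitive_grounding = {
        "push":  np.array([0.9, 0.1, 0.0]),
        "pull":  np.array([0.1, 0.9, 0.0]),
        "grasp": np.array([0.0, 0.2, 0.8]),
    }

    def ground_higher_order(definition_words):
        """Transfer grounding: represent a higher-order word by combining the
        sensorimotor vectors of the primitives in its linguistic description."""
        vectors = [primitive_grounding[w] for w in definition_words]
        return np.mean(vectors, axis=0)

    # e.g. teach a more abstract word through "grasp" + "pull" (illustrative only)
    print(ground_higher_order(["grasp", "pull"]))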

    Co-exploring Actuator Antagonism and Bio-inspired Control in a Printable Robot Arm

    The human arm is capable of performing fast, targeted movements with high precision, for example when pointing with a mouse cursor, but is inherently ‘soft’ due to the muscles, tendons and other tissues of which it is composed. Robot arms are also becoming softer, to enable robustness when operating in real-world environments and to make them safer to use around people. But softness comes at a price, typically an increase in the complexity of the control required to meet a given task speed/accuracy requirement. Here we explore how fast and precise joint movements can be performed simply and effectively in a soft robot arm, by taking inspiration from the human arm. First, viscoelastic actuator-tendon systems in an agonist-antagonist setup provide joints with inherent damping, and with stiffness that can be varied in real time through co-contraction. Second, a lightweight and learnable inverse model for each joint enables a fast ballistic phase that drives the arm close to a desired equilibrium point and co-contraction tuple, while the final adjustment is done by a feedback controller. The approach is embodied in the GummiArm, a robot which can be almost entirely printed on hobby-grade 3D printers. This enables rapid and iterative co-exploration of ‘brain’ and ‘body’, and provides a great platform for developing adaptive and bio-inspired behaviours.
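    The two-phase control idea summarised above, a ballistic command from an inverse model followed by feedback refinement, can be sketched as follows. The linear agonist-antagonist joint model, the gain mismatch, and the proportional correction are assumptions made for illustration; this is not the GummiArm controller.

    def inverse_model(theta_des, cocontraction):
        """Agonist/antagonist commands for a toy joint whose angle tracks their
        difference and whose stiffness tracks their sum."""
        return cocontraction + 0.5 * theta_des, cocontraction - 0.5 * theta_des

    def plant(agonist, antagonist):
        """Toy joint with a 10% gain mismatch the inverse model does not know about."""
        return 0.9 * (agonist - antagonist)

    def move(theta_des, cocontraction=1.0, kp=0.5, steps=20):
        agonist, antagonist = inverse_model(theta_des, cocontraction)  # ballistic phase
        theta = plant(agonist, antagonist)
        for _ in range(steps):                     # feedback phase: final adjustment
            agonist += kp * (theta_des - theta)
            theta = plant(agonist, antagonist)
        return theta

    print(move(0.8))  # close to the desired 0.8 rad after the feedback corrections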

    Implementation of a Modular Growing When Required Neural Gas Architecture for Recognition of Falls

    In this paper we aim to replicate a state-of-the-art architecture for recognition of human actions using skeleton poses obtained from a depth sensor. We review the usefulness of accurate human action recognition in the field of robotic elderly care, focusing on fall detection. We attempt fall recognition using a chained Growing When Required neural gas classifier that is fed only skeleton joint data. We test this architecture against Recurrent SOMs (RSOMs) in classifying the TST Fall detection database ver. 2, a specialised dataset of fall sequences. We also introduce a simplified mathematical model of falls for easier and faster bench-testing of classification algorithms for fall detection. Classification was successful, with an accuracy of 97.12±1.65% on falls generated from our mathematical model and 90.2±2.68% on the TST Fall detection database ver. 2 when a filter was added.
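    A simplified fall model of the kind mentioned above can be approximated, purely for illustration, by synthetic vertical trajectories of a torso joint: near-constant height for everyday activity and a rapid sigmoidal drop for a fall. The generator and the threshold benchmark below are stand-ins, not the model or the GWR classifier from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def synthetic_sequence(fall, length=60, noise=0.02):
        """Illustrative stand-in for a simplified fall model: the vertical position
        of a torso joint either stays near standing height or drops quickly."""
        t = np.linspace(0, 1, length)
        if fall:
            height = 1.0 - 0.9 / (1.0 + np.exp(-20 * (t - 0.5)))  # sigmoidal drop
        else:
            height = np.full(length, 1.0)
        return height + rng.normal(0, noise, length)

    def is_fall(sequence, drop_threshold=0.5):
        """Trivial benchmark classifier: flag a fall if height drops below threshold."""
        return sequence.min() < drop_threshold

    samples = [(synthetic_sequence(fall=f), f) for f in [True, False] * 50]
    accuracy = np.mean([is_fall(s) == label for s, label in samples])
    print(f"accuracy on synthetic sequences: {accuracy:.2f}")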

    Toward the next generation of research into small area effects on health : a synthesis of multilevel investigations published since July 1998.

    To map out research on area effects on health, this study had the following aims: (1) to inventory multilevel investigations of area effects on self-rated health, cardiovascular diseases and risk factors, and mortality among adults; (2) to describe and critically discuss the methodological approaches employed and the results observed; and (3) to formulate selected recommendations for advancing the study of area effects on health. Overall, 86 studies were inventoried. Although several innovative methodological approaches and analytical designs were found, small areas are most often operationalised using administrative and statistical spatial units. Most studies used indicators of area socioeconomic status derived from censuses, and few provided information on the validity and reliability of the measures of exposure. A consistent finding was that a significant portion of the variation in health is associated with area context independently of individual characteristics. Area effects on health, although significant in most studies, often depend on the health outcome studied, the measure of area exposure used, and the spatial scale at which associations are examined.
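    The variance-partitioning result summarised above is commonly expressed with a two-level random-intercept model; the formulation below is the generic textbook version rather than that of any single reviewed study.

    y_{ij} = \beta_0 + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} + u_j + e_{ij},
    \qquad u_j \sim \mathcal{N}(0, \sigma_u^2), \quad e_{ij} \sim \mathcal{N}(0, \sigma_e^2),
    \qquad \mathrm{VPC} = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2}

    Here i indexes individuals, j indexes small areas, and x_{ij} collects individual characteristics; the variance partition coefficient (VPC) gives the share of residual variation in the health outcome attributable to area context after adjusting for those characteristics.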

    A global workspace theory model for trust estimation in human-robot interaction

    Successful and genuine social connections between humans are based on trust, all the more so when the people involved have to collaborate to reach a shared goal. With the advent of new findings and technologies in the field of robotics, it appears that this same key factor that regulates relationships between humans also applies, with the same importance, to human-robot interaction (HRI). Previous studies have shown the usefulness of a robot able to estimate the trustworthiness of its human collaborators, and in this position paper we discuss a method to extend an existing state-of-the-art trust model with considerations based on social cues such as emotions. The proposed model follows Global Workspace Theory (GWT) principles to build a novel system able to combine multiple specialised expert systems to determine whether the partner can be considered trustworthy or not. Positive results would demonstrate the usefulness of using constructive biases to enhance the teaming skills of social robots.
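    One way to picture the combination scheme described above: each specialised expert reports a trust estimate together with a salience, and a global-workspace-style competition decides what gets broadcast. The experts, the winner-take-all-plus-modulation rule, and the weights in the sketch below are assumptions for illustration, not the model proposed in the paper.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Expert:
        """A specialised process reporting a trust estimate and a salience
        (how strongly it bids for access to the global workspace)."""
        name: str
        evaluate: Callable[[Dict], float]   # trust in [0, 1]
        salience: Callable[[Dict], float]   # bid strength in [0, 1]

    def global_workspace_trust(experts: List[Expert], observation: Dict) -> float:
        """Toy GWT-style combination: the most salient expert wins the competition
        and its estimate is broadcast; the others weakly modulate the result."""
        bids = [(e.salience(observation), e.evaluate(observation)) for e in experts]
        bids.sort(reverse=True)
        winner_salience, winner_trust = bids[0]
        modulation = sum(s * t for s, t in bids[1:]) / max(sum(s for s, _ in bids[1:]), 1e-6)
        return 0.8 * winner_trust + 0.2 * modulation

    # Hypothetical experts: task performance history and perceived emotion.
    experts = [
        Expert("performance", lambda o: o["success_rate"], lambda o: 0.9),
        Expert("emotion", lambda o: 1.0 - o["frustration"], lambda o: o["frustration"]),
    ]
    print(global_workspace_trust(experts, {"success_rate": 0.7, "frustration": 0.4}))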

    When object color is a red herring: extraneous perceptual information hinders word learning via referent selection

    Learning words from ambiguous naming events is difficult. In such situations, children struggle to ignore task-irrelevant information when learning object names. The current study reduces the problem space of learning names for object categories by holding color constant between the target and the other, extraneous objects. We examine how this influences two types of word learning (retention and generalization) in both 30-month-old children (Experiment 1) and the iCub humanoid robot (Experiment 2). Overall, both the children and iCub performed well on the retention trials, but they were only able to generalize the novel names to new exemplars of the target categories if the objects were originally encountered in sets with objects of the same colors, not if the objects were originally encountered in sets with objects of different colors. These data demonstrate that presenting less information during the learning phase narrows the problem space and leads to better word learning for both children and iCub. Findings are discussed in terms of cognitive load and desirable difficulties.

    Posture Affects How Robots and Infants Map Words to Objects

    For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are encountered again. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body, and its momentary posture, may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1–3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1–5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6–9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping, but through the body's momentary disposition in space.
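    A minimal sketch of the body-centric binding mechanism described above: heard names and seen objects are each associated with the body's current posture, and a name is mapped to the object whose posture field overlaps most with its own. The discrete posture slots and the Hebbian-style updates are assumptions made for illustration, not the robot model used in the experiments.

    import numpy as np

    n_postures, n_words, n_objects = 4, 5, 5
    word_posture = np.zeros((n_words, n_postures))      # word -> posture field
    object_posture = np.zeros((n_objects, n_postures))  # object -> posture field

    def see_object(obj, posture):
        object_posture[obj, posture] += 1.0   # Hebbian strengthening

    def hear_word(word, posture):
        word_posture[word, posture] += 1.0

    def map_word_to_object(word):
        """Bind a heard word to the object whose posture field overlaps most."""
        overlap = object_posture @ word_posture[word]
        return int(np.argmax(overlap))

    # Object 2 is repeatedly seen while the body is oriented toward posture 1;
    # its name (word 3) is later heard while the same posture is re-adopted.
    see_object(2, posture=1)
    see_object(2, posture=1)
    hear_word(3, posture=1)
    print(map_word_to_object(3))  # -> 2: binding via the shared posture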

    Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

    This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.