27 research outputs found

    Sensorimotor input as a language generalisation tool: a neurorobotics model for generation and generalisation of noun-verb combinations with sensorimotor inputs

    The paper presents a neurorobotics cognitive model explaining the understanding and generalisation of noun-verb combinations when a vocal command consisting of a verb-noun sentence is given to a humanoid robot. The dataset used for training was obtained from object manipulation tasks on a humanoid robot platform; it includes 9 motor actions and 9 objects placed in 6 different locations, which enables the robot to learn to handle real-world objects and actions. Based on the multiple timescale recurrent neural network, the study demonstrates the model's generalisation capability on a large dataset, with which the robot was able to generalise the semantic representation of novel combinations of noun-verb sentences and therefore produce the corresponding motor behaviours. This generalisation is achieved via a grounding process: different objects are interacted with, and associated with, different motor behaviours, following a learning approach inspired by developmental language acquisition in infants. Further analyses of the learned network dynamics and representations demonstrate how generalisation is possible through the exploitation of this functional hierarchical recurrent network.
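The generalisation setup described above (9 motor actions combined with 9 objects, with novel pairings held out at test time) can be sketched as follows. This is a hypothetical illustration of the data layout only: the verb/noun labels, the encoding, and the particular held-out split are assumptions, not taken from the paper.

```python
import numpy as np

# Placeholder labels: the abstract reports 9 motor actions (verbs) and
# 9 objects (nouns), but does not list them.
VERBS = [f"verb_{i}" for i in range(9)]
NOUNS = [f"noun_{j}" for j in range(9)]

def one_hot(index, size):
    v = np.zeros(size)
    v[index] = 1.0
    return v

# A verb-noun command is encoded as two concatenated one-hot vectors.
def encode_command(verb_idx, noun_idx):
    return np.concatenate([one_hot(verb_idx, len(VERBS)),
                           one_hot(noun_idx, len(NOUNS))])

# Generalisation split: train on most verb-noun pairings, hold out novel
# combinations the network never sees together during training.
all_pairs = [(v, n) for v in range(9) for n in range(9)]
held_out = [(v, n) for v, n in all_pairs if (v + n) % 9 == 0]  # arbitrary example split
train = [p for p in all_pairs if p not in held_out]

print(len(train), len(held_out))  # -> 72 9
```

At test time the model receives the encoded command for a held-out pair and must produce the corresponding motor behaviour, which is what the abstract means by generalising to novel noun-verb combinations.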

    Nephrotoxicity by trichothecene mycotoxins


    Analysing the Multiple Timescale Recurrent Neural Network for Embodied Language Understanding

    Abstract: How the human brain understands natural language, and how we can exploit this understanding to build intelligent grounded language systems, is an open research question. Recently, researchers claimed that language is embodied in most – if not all – sensory and sensorimotor modalities and that the brain’s architecture favours the emergence of language. In this chapter we investigate the characteristics of such an architecture and propose a model based on the Multiple Timescale Recurrent Neural Network, extended by embodied visual perception, and tested in a real-world scenario. We show that such an architecture can learn the meaning of utterances with respect to visual perception and that it can produce verbal utterances that correctly describe previously unknown scenes. In addition, we rigorously study the timescale mechanism (also known as hysteresis) and explore the impact of the architectural connectivity on the language acquisition task.
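The timescale mechanism the abstract refers to is the leaky-integrator update of an MTRNN, in which each unit's time constant tau controls how quickly its internal state changes: fast-context units react to input immediately, while slow-context units retain history (the hysteresis effect). The sketch below is a minimal illustration of that update rule; the layer sizes, tau values, and random weights are assumptions chosen for demonstration, not parameters from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

n_fast, n_slow = 20, 5
# Per-unit time constants: small tau = fast dynamics, large tau = slow dynamics.
tau = np.concatenate([np.full(n_fast, 2.0),    # fast-context units
                      np.full(n_slow, 70.0)])  # slow-context units
n = n_fast + n_slow
W = rng.normal(scale=0.1, size=(n, n))         # recurrent weights (illustrative)

def mtrnn_step(u, x=None):
    """One leaky-integrator step: u is the internal state vector.
    Units with large tau change only a little per step, which produces
    the separation of timescales across the network's layers."""
    inp = W @ np.tanh(u)
    if x is not None:
        inp = inp + x
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * inp

# Drive the network with a constant input for a few steps.
u = np.zeros(n)
for _ in range(10):
    u = mtrnn_step(u, x=np.ones(n))
```

After a few steps the fast units have moved much further from their initial state than the slow ones, which is exactly the property the chapter's timescale analysis examines.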