
    Incorporating action information into computational models of the human visual system

    Deep convolutional neural networks (DCNNs) have been used to model the ventral visual stream. However, there have been relatively few computational models of the dorsal visual stream, preventing a holistic understanding of the human visual system. Additionally, current DCNN models of the ventral stream have shortcomings (such as an over-reliance on texture data) which can be ameliorated by incorporating dorsal stream information. The current study aims to investigate two questions: 1) does incorporating action information improve computational models of the ventral visual system? 2) how do the ventral and dorsal streams influence each other during development? Three models will be created: a two-task neural network trained both to perform object recognition and to generate human grasp points; a single-task neural network trained to perform only object recognition; and a lesioned neural network, which will be identical to the two-task neural network except that the units with the greatest representation contribution towards grasp point generation will be deactivated. All networks will be evaluated on performance metrics such as accuracy (evaluated with ImageNet and Stylized-ImageNet), transfer learning, and robustness against distortions. The networks will also be evaluated on representational metrics such as representation contribution analysis and representational similarity analysis. We expect the two-task network to score higher on performance measures than either the lesioned or the single-task network. Additionally, for the two-task network we predict that more units will contribute towards grasp point generation than towards object recognition. Lastly, we expect representations in the two-task network to reflect human data better than those of the single-task network.
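    A minimal sketch of the kind of two-task architecture described above: a shared convolutional trunk feeding an object-recognition head and a grasp-point head, with an optional unit mask to mimic the lesioned variant. The ResNet-18 backbone, head sizes, two-coordinate grasp output, and the masked indices are illustrative assumptions, not the study's actual design.

```python
import torch
import torch.nn as nn
from torchvision import models


class TwoTaskNet(nn.Module):
    """Shared backbone with an object-recognition head and a grasp-point head."""

    def __init__(self, num_classes=1000):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.trunk = nn.Sequential(*list(backbone.children())[:-1])  # shared features
        self.classifier = nn.Linear(512, num_classes)  # object-recognition head
        self.grasp_head = nn.Linear(512, 2)            # (x, y) grasp-point head (assumed format)

    def forward(self, x, unit_mask=None):
        z = self.trunk(x).flatten(1)
        if unit_mask is not None:
            z = z * unit_mask  # "lesion" selected shared units by zeroing them
        return self.classifier(z), self.grasp_head(z)


net = TwoTaskNet()
images = torch.randn(4, 3, 224, 224)
logits, grasp_xy = net(images)

# Lesioned variant: deactivate units that contribute most to grasp-point
# generation (indices here are placeholders; the study would select them via
# representation contribution analysis).
mask = torch.ones(512)
mask[:50] = 0.0
lesioned_logits, _ = net(images, unit_mask=mask)
```

    A joint training objective could then weight a cross-entropy classification loss against a regression loss on human grasp points, though the study's actual loss formulation is not specified in the abstract.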

    Online Language Learning to Perform and Describe Actions for Human-Robot Interaction

    The goal of this research is to provide a real-time, adaptive spoken language interface between humans and a humanoid robot. The system should be able to learn new grammatical constructions in real time, and then use them immediately afterwards or in a later interactive session. To achieve this we use a recurrent neural network of 500 neurons: an echo state network with leaky neurons [1]. The model processes sentences as grammatical constructions, in which the semantic words (nouns and verbs) are extracted and stored in working memory, while the grammatical words (prepositions, auxiliary verbs, etc.) are inputs to the network. The trained network outputs code the role (predicate, agent, object/location) that each semantic word takes. In the final output, the stored semantic words are then mapped onto their respective roles. The model thus learns the mappings between the grammatical structure of sentences and their meanings. The humanoid robot is an iCub [2], which interacts around an instrumented tactile table (ReacTable™) on which objects can be manipulated by both human and robot. A sensory system has been developed to extract spatial relations. An off-the-shelf speech recognition and text-to-speech tool allows spoken communication. In parallel, the robot has a small set of actions (put(object, location), grasp(object), point(object)). These spatial relations and action definitions form the meanings that are to be linked to sentences in the learned grammatical constructions. The target behavior of the system covers two conditions. In action performing (AP), the system should learn to generate the proper robot command given a spoken input sentence. In scene description (SD), the system should learn to describe scenes given the extracted spatial relations. A training corpus for the neural model can be generated through interaction, with the user teaching the robot by describing spatial relations or actions, creating sentence-meaning pairs. It can also be edited by hand to avoid speech recognition errors. The interactions between the different components of the system are shown in Figure 1. The neural model processes grammatical constructions in which semantic words (e.g. put, grasp, toy, left, right) are replaced by a common marker. This is done with only a predefined set of grammatical words (after, and, before, it, on, the, then, to, you). The model is therefore able to deal with sentences that have the same constructions as previously seen sentences. In the AP condition, we demonstrate that the model can learn and generalize to complex sentences such as "Before you put the toy on the left point the drums."; the robot will first point to the drums and then put the toy on the left, showing that the network is able to establish the proper chronological order of actions. Likewise, in the SD condition, the system can be exposed to a new scene and produce a description such as "To the left of the drums and to the right of the toy is the trumpet." In future research we can exploit this learning system in the context of human language development. In addition, the neural model could enable error recovery from speech-to-text recognition. Index Terms: human-robot interaction, echo state network, online learning, iCub, language learning. References: [1] H. Jaeger, "The 'echo state' approach to analysing and training recurrent neural networks", Tech. Rep., GMD. The model has been developed with the Oger toolbox: http://reservoir-computing.org/organic/engine.
Figure 1: Communication between the speech recognition tool (that also controls the robotic platform) and the neural model
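    As a rough illustration of the reservoir described above (an echo state network of 500 leaky-integrator neurons with a trained linear readout mapping word-by-word inputs to role codes), here is a minimal NumPy sketch. The leak rate, spectral radius, input coding, and dummy targets are assumptions for illustration only; the actual model was built with the Oger toolbox.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 10, 500, 3   # input words -> 500-unit reservoir -> role codes
leak, rho = 0.3, 0.9              # leak rate and target spectral radius (assumed values)

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale recurrent weights


def run_reservoir(inputs):
    """Drive the leaky-integrator reservoir with an input sequence; return all states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)


# A sentence coded word-by-word: grammatical words as one-hot inputs, semantic
# words replaced by a common marker (here just another one-hot code).
sentence = np.eye(n_in)[[3, 7, 1, 0, 5]]
X = run_reservoir(sentence)

# Linear readout (ridge regression) from reservoir states to role codes
# (predicate / agent / object-location); the targets here are dummy values.
Y = rng.standard_normal((len(sentence), n_out))
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_res), X.T @ Y).T
roles = X @ W_out.T
```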

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models. Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and acquire semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding the dynamics of symbol systems is crucially important both for understanding human social interactions and for developing a robot that can smoothly communicate with human users in the long term. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER. Comment: submitted to Advanced Robotic

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that have the capability to interact with the physical environment through their end effectors, understanding the surrounding scenes is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene-functionality test-bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model can be generalized to novel indoor scenes by cross-validating it with images from two different datasets.
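    The abstract above frames the task as a two-stage detection pipeline. As a hedged illustration of what such a pipeline can look like in general (not the authors' implementation), the sketch below uses an off-the-shelf detector for stage-1 region proposals and a separate classifier for stage-2 functional labels; the label set, model choices, and thresholds are assumptions.

```python
import torch
import torchvision
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import resize

FUNCTIONAL_LABELS = ["sittable", "graspable", "openable", "placeable"]  # hypothetical label set

proposer = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # stage 1: candidate regions
classifier = torchvision.models.resnet18(weights=None)        # stage 2: functional classifier
classifier.fc = torch.nn.Linear(512, len(FUNCTIONAL_LABELS))  # would be trained on functional labels
classifier.eval()


@torch.no_grad()
def functional_areas(image, score_thresh=0.5):
    """Return (box, functional label) pairs for one CHW float image in [0, 1]."""
    detections = proposer([image])[0]
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        if x2 <= x1 or y2 <= y1:
            continue
        crop = resize(image[:, y1:y2, x1:x2], [224, 224])       # crop the proposed region
        label = classifier(crop.unsqueeze(0)).argmax(1).item()  # assign a functional category
        results.append((box.tolist(), FUNCTIONAL_LABELS[label]))
    return results


areas = functional_areas(torch.rand(3, 480, 640))
```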