6 research outputs found

    Referential Uncertainty and Word Learning in High-dimensional, Continuous Meaning Spaces

    Get PDF
    This paper discusses lexicon word learning in high-dimensional meaning spaces from the viewpoint of referential uncertainty. We investigate various state-of-the-art Machine Learning algorithms and discuss the impact of scaling, representation and meaning space structure. We demonstrate that current Machine Learning techniques successfully deal with high-dimensional meaning spaces. In particular, we show that exponentially increasing dimensionality has only a linear impact on learner performance, and that referential uncertainty arising from word sensitivity has no impact.
    Comment: Published as Spranger, M. and Beuls, K. (2016). Referential uncertainty and word learning in high-dimensional, continuous meaning spaces. In Hafner, V. and Pitti, A., editors, 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE.
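    As an illustration of the kind of learning problem the paper studies, the sketch below implements a simple prototype learner for word meanings in a continuous, high-dimensional meaning space under referential uncertainty; the learner, the 50-dimensional toy meanings, and the scene construction are assumptions for exposition, not the algorithms evaluated in the paper.

        # Illustrative sketch (not the paper's implementation): a prototype learner
        # that sees a word together with several candidate referents per scene and
        # averages all candidates it has observed for that word.
        import numpy as np

        class PrototypeLearner:
            def __init__(self, dim):
                self.dim = dim
                self.sums = {}    # word -> running sum of candidate vectors
                self.counts = {}  # word -> number of candidate vectors seen

            def observe(self, word, candidates):
                """candidates: array of shape (k, dim), the objects in the scene."""
                self.sums.setdefault(word, np.zeros(self.dim))
                self.counts.setdefault(word, 0)
                self.sums[word] += candidates.sum(axis=0)
                self.counts[word] += len(candidates)

            def interpret(self, word, candidates):
                """Pick the candidate closest to the word's current prototype."""
                prototype = self.sums[word] / self.counts[word]
                return int(np.argmin(np.linalg.norm(candidates - prototype, axis=1)))

        # Toy usage: 50-dimensional meanings, 3 candidate referents per scene.
        rng = np.random.default_rng(0)
        learner = PrototypeLearner(dim=50)
        true_meaning = rng.normal(size=50)
        for _ in range(200):
            target = true_meaning + 0.1 * rng.normal(size=50)
            distractors = rng.normal(size=(2, 50))
            learner.observe("ball", np.vstack([target[None, :], distractors]))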

    Computational Models of Tutor Feedback in Language Acquisition

    Full text link
    This paper investigates the role of tutor feedback in language learning using computational models. We compare two dominant paradigms in language learning, interactive learning and cross-situational learning, which differ primarily in the role of social feedback such as gaze or pointing. We analyze the relationship between the two paradigms and propose a new mixed paradigm that combines them and allows algorithms to be tested in experiments that mix episodes with and without social feedback. To handle such mixed-feedback experiments, we develop new algorithms and show how they perform relative to traditional k-nearest-neighbour (kNN) and prototype approaches.
    Comment: 6 pages, 8 figures, Seventh Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics.
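    The contrast between the two paradigms can be made concrete with a small co-occurrence-based cross-situational learner that optionally exploits social feedback; this sketch, including the observe/best_referent helpers and the toy scenes, is an illustrative assumption rather than one of the paper's algorithms.

        # Cross-situational learning by co-occurrence counting. Without feedback,
        # every object in the scene counts as a candidate referent; with pointing,
        # the referent is known and only it is counted.
        from collections import defaultdict

        class CrossSituationalLearner:
            def __init__(self):
                self.counts = defaultdict(lambda: defaultdict(int))  # word -> object -> count

            def observe(self, word, scene_objects, pointed_object=None):
                candidates = [pointed_object] if pointed_object is not None else scene_objects
                for obj in candidates:
                    self.counts[word][obj] += 1

            def best_referent(self, word):
                table = self.counts[word]
                return max(table, key=table.get) if table else None

        learner = CrossSituationalLearner()
        # No-feedback episode: the word co-occurs with every object present.
        learner.observe("cup", ["cup", "spoon", "plate"])
        # Social-feedback episode: pointing disambiguates the referent.
        learner.observe("cup", ["cup", "fork"], pointed_object="cup")
        print(learner.best_referent("cup"))  # -> "cup"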

    Re-conceptualising the Language Game Paradigm in the Framework of Multi-Agent Reinforcement Learning

    Get PDF
    In this paper, we formulate the challenge of re-conceptualising the language game experimental paradigm in the framework of multi-agent reinforcement learning (MARL). If successful, future language game experiments will benefit from the rapid and promising methodological advances in the MARL community, while future MARL experiments on learning emergent communication will benefit from the insights and results gained from language game experiments. We strongly believe that this cross-pollination has the potential to lead to major breakthroughs in the modelling of how human-like languages can emerge and evolve in multi-agent systems.
    Comment: This paper was accepted for presentation at the 2020 AAAI Spring Symposium "Challenges and Opportunities for Multi-Agent Reinforcement Learning" after a double-blind reviewing process.
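    To make the proposed mapping concrete, the following minimal sketch casts a naming game as a reinforcement learning loop in which communicative success is the reward signal; the agents, their score tables, and the update rule are illustrative assumptions, not the formulation the paper argues for.

        # A Lewis-style naming game as a two-agent RL episode: the speaker names a
        # topic, the hearer interprets the name, and both update association scores
        # from the shared communicative-success reward.
        import random

        OBJECTS = ["A", "B", "C"]
        WORDS = ["wa", "wo", "wi"]

        class Agent:
            def __init__(self):
                self.q = {(o, w): 0.0 for o in OBJECTS for w in WORDS}

            def speak(self, obj, eps=0.1):
                if random.random() < eps:
                    return random.choice(WORDS)
                return max(WORDS, key=lambda w: self.q[(obj, w)])

            def interpret(self, word, eps=0.1):
                if random.random() < eps:
                    return random.choice(OBJECTS)
                return max(OBJECTS, key=lambda o: self.q[(o, word)])

            def update(self, obj, word, reward, lr=0.1):
                self.q[(obj, word)] += lr * (reward - self.q[(obj, word)])

        speaker, hearer = Agent(), Agent()
        for episode in range(2000):
            topic = random.choice(OBJECTS)
            word = speaker.speak(topic)
            guess = hearer.interpret(word)
            reward = 1.0 if guess == topic else 0.0   # communicative success
            speaker.update(topic, word, reward)
            hearer.update(guess, word, reward)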

    Learning to Parse Grounded Language using Reservoir Computing

    Get PDF
    Recently, models for language processing and learning based on Reservoir Computing have become popular. However, these models are typically not grounded in sensorimotor systems and robots. In this paper, we develop a Reservoir Computing model called Reservoir Parser (ResPars) for learning to parse natural language from grounded data coming from humanoid robots. Previous work showed that ResPars is able to generalize syntactically over different sentences (surface structure) with the same meaning (deep structure). We argue that this ability is key to guiding linguistic generalization in a grounded architecture. We show that ResPars is able to generalize over grounded compositional semantics by combining it with Incremental Recruitment Language (IRL). Additionally, we show that ResPars is able to learn to generalize over the same sentences when they are presented not word by word but as an unsegmented sequence of phonemes. This ability enables the architecture to process the sub-word level directly rather than relying only on the words recognized by a speech recognizer. We additionally test the model's robustness to word recognition errors.
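    A minimal echo state network gives a feel for the reservoir component such a parser builds on; the sizes, leak rate, and ridge-regression readout below are illustrative assumptions and not the ResPars architecture itself.

        # A fixed random reservoir driven by an input sequence, with a linear
        # readout trained by ridge regression on the final reservoir state.
        import numpy as np

        rng = np.random.default_rng(0)
        N_RES, N_IN, N_OUT = 100, 20, 5
        W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
        W = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

        def run_reservoir(inputs, leak=0.3):
            """inputs: (T, N_IN) sequence; returns the final reservoir state."""
            x = np.zeros(N_RES)
            for u in inputs:
                x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
            return x

        def train_readout(sequences, targets, ridge=1e-2):
            X = np.stack([run_reservoir(s) for s in sequences])  # (n, N_RES)
            Y = np.stack(targets)                                 # (n, N_OUT)
            return np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ Y)

        def predict(W_out, sequence):
            return run_reservoir(sequence) @ W_out

        # Toy usage with random input sequences and one-hot labels.
        seqs = [rng.normal(size=(8, N_IN)) for _ in range(30)]
        labels = [np.eye(N_OUT)[i % N_OUT] for i in range(30)]
        W_out = train_readout(seqs, labels)
        print(predict(W_out, seqs[0]).round(2))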