3 research outputs found

    Semantic Role Labelling for Robot Instructions using Echo State Networks

    To control a robot in a real-world scenario, a real-time parser is needed to create semantic representations from natural language that can be interpreted and executed. The parser should be able to create hierarchical tree-like representations without consulting external systems, demonstrating its own learning capabilities. We propose an efficient Echo State Network-based parser for robotic commands that relies only on the training data. The system generates a single semantic tree structure in real time which can be executed by a robot arm manipulating objects. With 64.2% tree accuracy on difficult unseen natural language (74.1% under best conditions), it outperforms four of six other approaches on the same dataset, most of which generate multiple candidate trees and select one of them as the solution.
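
    The abstract only names the architecture, so as orientation here is a minimal sketch of the Echo State Network idea it builds on: a fixed random reservoir driven by one-hot word inputs, with only a ridge-regression readout trained to emit a label per word. All dimensions, data, and names below are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions: vocabulary size, reservoir size, number of role labels.
    n_in, n_res, n_out = 50, 300, 10

    # Fixed random input and recurrent weights (the reservoir itself is never trained).
    W_in = rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

    def run_reservoir(inputs, leak=0.3):
        """Collect reservoir states for a sequence of one-hot word vectors."""
        x = np.zeros(n_res)
        states = []
        for u in inputs:
            x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy data: one random "sentence" of 8 words with a role label per word.
    sentence = np.eye(n_in)[rng.integers(0, n_in, 8)]
    labels = np.eye(n_out)[rng.integers(0, n_out, 8)]

    # Ridge-regression readout: the only trained part of the ESN.
    X = run_reservoir(sentence)
    ridge = 1e-3
    W_out = labels.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(n_res))

    # Predicted role label for each word in the sequence.
    predicted_roles = (X @ W_out.T).argmax(axis=1)
    print(predicted_roles)
    ```

    Because only the linear readout is fitted, training reduces to a single least-squares solve over collected states, which is what makes this kind of parser fast enough for real-time use.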

    A Journey in ESN and LSTM Visualisations on a Language Task

    Echo State Networks (ESN) and Long Short-Term Memory networks (LSTM) are two popular Recurrent Neural Network (RNN) architectures for machine learning tasks involving sequential data. However, little has been done to compare their performance and internal mechanisms on a common task. In this work, we trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task. This task aims at modelling how infants learn language: they create associations between words and visual stimuli in order to extract meaning from words and sentences. The results are of three kinds: performance comparison, internal dynamics analyses, and visualisation of the latent space. (1) We found that both models were able to learn the task successfully: the LSTM reached the lowest error on the basic corpus, but the ESN was quicker to train. Furthermore, the ESN outperformed the LSTM on more challenging datasets without any further tuning. (2) We also analysed the internal unit activations of LSTMs and ESNs. Despite the deep differences between the two models (trained versus fixed internal weights), we were able to uncover similar inner mechanisms: both put emphasis on units encoding aspects of sentence structure. (3) Moreover, we present Recurrent States Space Visualisations (RSSviz), a method to visualise the structure of the latent state space of RNNs, based on dimension reduction (using UMAP). This technique enables us to observe a fractal embedding of sequences in the LSTM. RSSviz is also useful for the analysis of ESNs: (i) to spot difficult examples and (ii) to generate animated plots showing the evolution of activations across learning stages. Finally, we explore qualitatively how RSSviz could provide an intuitive visualisation to understand the influence of hyperparameters on reservoir dynamics prior to ESN training.
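
    The abstract describes RSSviz as a UMAP-based projection of recurrent states; the sketch below illustrates that general idea on a toy random reservoir, assuming the umap-learn and matplotlib packages. The corpus, reservoir parameters, and colouring scheme are assumptions for illustration, not the authors' setup.

    ```python
    import numpy as np
    import umap
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    n_in, n_res, seq_len, n_sentences = 50, 300, 8, 20

    # Fixed random reservoir weights (a stand-in for any trained or untrained RNN).
    W_in = rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))

    def collect_states(sentence, leak=0.3):
        """Return the sequence of recurrent states produced by one sentence."""
        x, states = np.zeros(n_res), []
        for u in sentence:
            x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy corpus: random one-hot word sequences.
    all_states = np.vstack([
        collect_states(np.eye(n_in)[rng.integers(0, n_in, seq_len)])
        for _ in range(n_sentences)
    ])

    # Non-linear dimension reduction of the latent state space to 2-D.
    embedding = umap.UMAP(n_components=2).fit_transform(all_states)

    # Colour points by word position to make sequence structure visible.
    positions = np.tile(np.arange(seq_len), n_sentences)
    plt.scatter(embedding[:, 0], embedding[:, 1], c=positions, cmap="viridis", s=12)
    plt.colorbar(label="word position in sentence")
    plt.title("RSSviz-style 2-D view of recurrent state space")
    plt.show()
    ```

    Plotting the same projection at successive training stages, or for individual hard examples, is how such a view can be used to inspect learning dynamics and hyperparameter effects as the abstract describes.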