5 research outputs found
Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction
We develop a natural language interface for human-robot interaction that
implements reasoning about deep semantics in natural language. To realize the
required deep analysis, we employ methods from cognitive linguistics, namely
the modular and compositional framework of Embodied Construction Grammar (ECG)
[Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference
resolution problems and other issues related to deep semantics and
compositionality of natural language. This also includes verbal interaction
with humans to clarify commands and queries that are too ambiguous to be
executed safely. We implement our NLU framework as a ROS package and present
proof-of-concept scenarios with different robots, as well as a survey of the
state of the art.
Learning to Parse Grounded Language using Reservoir Computing
Recently, new models for language processing and learning based on Reservoir Computing have become popular. However, these models are typically not grounded in sensorimotor systems and robots. In this paper, we develop a Reservoir Computing model called Reservoir Parser (ResPars) for learning to parse natural language from grounded data coming from humanoid robots. Previous work showed that ResPars is able to perform syntactic generalization over different sentences (surface structure) with the same meaning (deep structure). We argue that this ability is key to guiding linguistic generalization in a grounded architecture. We show that ResPars is able to generalize over grounded compositional semantics by combining it with Incremental Recruitment Language (IRL). Additionally, we show that ResPars is able to generalize over the same sentences when they are processed not word by word, but as an unsegmented sequence of phonemes. This ability enables the architecture not to rely solely on the words recognized by a speech recognizer, but to process the sub-word level directly. We additionally test the model's robustness to word recognition errors.
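The reservoir idea underlying ResPars can be illustrated with a generic echo state network: a fixed, randomly connected recurrent layer is driven by the input sequence, and only a linear readout over the reservoir states is trained. The following is a minimal sketch of reservoir computing in general, not the authors' ResPars architecture; the layer sizes, toy sequences, and ridge parameter are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_res = 3, 50  # toy input and reservoir sizes (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
# Rescale recurrent weights to spectral radius 0.9 so the reservoir
# has the echo state property (fading memory of past inputs).
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence; return the final state."""
    x = np.zeros(n_res)
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Collect reservoir states for a few toy sequences, then fit the linear
# readout by ridge regression -- the reservoir itself is never trained.
seqs = [rng.uniform(-1, 1, (5, n_in)) for _ in range(20)]
targets = rng.integers(0, 2, 20).astype(float)
X = np.stack([run_reservoir(s) for s in seqs])
ridge = 1e-3
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets)
preds = X @ W_out  # readout predictions for each sequence
```

Because only the readout is trained, the same fixed reservoir can be reused across tasks; in a parsing setting such as ResPars, the readout would map reservoir states to semantic-role labels rather than to a scalar target as in this toy example.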
Co-Acquisition of Syntax and Semantics - An Investigation in Spatial Language
Paper presented at the Twenty-Fourth International Joint Conference on Artificial Intelligence, held in Buenos Aires, 25-31 July 2015. This paper reports recent progress on modeling the
grounded co-acquisition of syntax and semantics of
locative spatial language in developmental robots.
We show how a learner robot can learn to produce
and interpret spatial utterances in guided-learning
interactions with a tutor robot (equipped with a system for producing English spatial phrases). The
tutor guides the learning process by simplifying
the challenges and complexity of utterances, gives
feedback, and gradually increases the complexity
of the language to be learnt. Our experiments show
promising results towards long-term, incremental
acquisition of natural language in a process of co-development of syntax and semantics.