3 research outputs found

    Relational Grounded Language Learning

    No full text
    In the past, research on learning language models relied mainly on syntactic information during the learning process, but in recent years researchers have also begun to use semantic information. This paper presents such an approach, in which the input to our learning algorithm is a dataset of pairs, each made up of a sentence and the context in which it was produced. The system we present is based on inductive logic programming techniques that aim to learn a mapping between n-grams and a semantic representation of their associated meaning. Experiments have shown that such a mapping can be learned and then used to generate relevant descriptions of images or to learn the meaning of words without any linguistic resource.
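    To make the input format concrete, below is a minimal, hypothetical Python sketch of sentence/context pairs and a naive n-gram/predicate co-occurrence counter. This is not the paper's ILP system; the pair layout, predicate names, and the co-occurrence scoring are all assumptions made purely for illustration.

```python
# Hypothetical sketch: sentence/context pairs and a naive n-gram-to-meaning mapper.
# This is NOT the paper's inductive logic programming system; it only illustrates
# the shape of the data and the idea of linking n-grams to context predicates.

from collections import defaultdict
from itertools import islice

# Each training example pairs a sentence with the relational context (a set of
# ground facts) in which it was produced. Predicate names are invented here.
DATASET = [
    ("the red ball is on the table",
     {("ball", "b1"), ("red", "b1"), ("table", "t1"), ("on", "b1", "t1")}),
    ("the blue ball is under the chair",
     {("ball", "b2"), ("blue", "b2"), ("chair", "c1"), ("under", "b2", "c1")}),
]

def ngrams(tokens, n):
    """Yield all n-grams (as tuples) of a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def cooccurrence_counts(dataset, max_n=2):
    """Count how often each n-gram co-occurs with each predicate symbol."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence, context in dataset:
        tokens = sentence.split()
        predicates = {fact[0] for fact in context}
        for n in range(1, max_n + 1):
            for gram in ngrams(tokens, n):
                for pred in predicates:
                    counts[gram][pred] += 1
    return counts

if __name__ == "__main__":
    counts = cooccurrence_counts(DATASET)
    # In this toy data, the unigram "ball" co-occurs with the ball/1 predicate
    # in both examples, hinting at the word/predicate link a learner could find.
    print(dict(counts[("ball",)]))
```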

    Relational grounded language learning

    No full text
    Language learning has been studied for decades. For a long time, the focus was on learning the grammatical structure of a language from sentences, or learning the semantics of sentences from examples of sentence/meaning pairs. More recently, there has been increasing interest in grounded language learning, where the language is learned by observing sentences used in a particular context, and trying to link elements of these sentences to elements of the context. This talk is about an approach called relational grounded language learning. In this approach, the semantics of a sentence is a relational structure, and this structure is learned from sentence/context pairs in which the context is represented in a relational format. Once a model of the link between sentences and semantic structures is in place, it can be used for a variety of purposes: generating sentences describing a given scene, identifying the elements in a scene that a sentence refers to, translating a sentence from one language to another through its semantic representation, and more. The potential of this approach for all these uses has been demonstrated on some simple problems. Although the approach is clearly still in its infancy, we believe it has much potential in terms of helping us understand how humans learn their first language, as well as improving natural language processing technology.
    Keynote speech
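    As a rough illustration of one of the uses mentioned above (generating sentences that describe a given scene), here is a small hypothetical Python sketch using the same toy predicates as the sketch above. The relational scene encoding and the phrase templates are invented; in the actual approach the sentence/semantics mapping would be learned, not written by hand.

```python
# Hypothetical sketch: using a (here hand-written, in practice learned) mapping
# from relational facts to phrases to describe a scene. Predicate names and
# templates are invented for illustration only.

SCENE = {("ball", "b1"), ("red", "b1"), ("table", "t1"), ("on", "b1", "t1")}

# Stand-in for the learned mapping: one phrase template per binary predicate.
TEMPLATES = {
    "on": "the {0} is on the {1}",
}
NOUNS = {"ball", "table", "chair"}
ADJECTIVES = {"red", "blue"}

def entity_phrase(entity, scene):
    """Build a noun phrase for an entity from its unary facts in the scene."""
    nouns = [p for (p, *args) in scene if args == [entity] and p in NOUNS]
    adjs = [p for (p, *args) in scene if args == [entity] and p in ADJECTIVES]
    return " ".join(adjs + nouns)

def describe(scene):
    """Render each templated binary fact in the scene as a sentence."""
    sentences = []
    for fact in scene:
        pred, *args = fact
        if pred in TEMPLATES and len(args) == 2:
            phrases = [entity_phrase(a, scene) for a in args]
            sentences.append(TEMPLATES[pred].format(*phrases))
    return sentences

if __name__ == "__main__":
    print(describe(SCENE))  # e.g. ['the red ball is on the table']
```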

    Relational grounded language learning

    No full text