
Towards a continuous modeling of natural language domains

Abstract

Humans continuously adapt their style and language to a variety of domains. However, a reliable definition of 'domain' has eluded researchers thus far. Additionally, the notion of discrete domains stands in contrast to the multiplicity of heterogeneous domains that humans navigate, many of which overlap. In order to better understand the change and variation of human language, we draw on research in domain adaptation and extend the notion of discrete domains to a continuous spectrum. We propose representation learning-based models that can adapt to continuous domains and detail how these can be used to investigate variation in language. To this end, we propose to use dialogue modeling as a test bed due to its proximity to language modeling and its social component.

Comment: 5 pages, 3 figures, published in the Uphill Battles in Language Processing workshop, EMNLP 2016.
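To make the idea of a continuous domain representation concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation: it shows one way a language model could be conditioned on a continuous domain vector instead of a discrete domain label, so that interpolating between domain vectors yields intermediate, overlapping domains. The class name, dimensions, and the way the domain vector is obtained are illustrative assumptions.

    # Hypothetical sketch (not the paper's model): a recurrent language model whose
    # input is conditioned on a continuous domain vector rather than a discrete
    # domain label, illustrating the "continuous spectrum" of domains.
    import torch
    import torch.nn as nn

    class DomainConditionedLM(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, domain_dim=16):
            super().__init__()
            self.token_embedding = nn.Embedding(vocab_size, embed_dim)
            # The LSTM sees the token embedding concatenated with a continuous
            # domain representation at every time step.
            self.lstm = nn.LSTM(embed_dim + domain_dim, hidden_dim, batch_first=True)
            self.output = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens, domain_vector):
            # tokens: (batch, seq_len) token ids
            # domain_vector: (batch, domain_dim) continuous domain representation,
            # e.g. learned jointly with the model or derived from dialogue context.
            emb = self.token_embedding(tokens)                             # (B, T, E)
            dom = domain_vector.unsqueeze(1).expand(-1, emb.size(1), -1)   # (B, T, D)
            hidden, _ = self.lstm(torch.cat([emb, dom], dim=-1))
            return self.output(hidden)                                     # (B, T, V) next-token logits

    # Usage: mixing two domain vectors yields an intermediate domain,
    # something a discrete domain label cannot express.
    model = DomainConditionedLM(vocab_size=10000)
    tokens = torch.randint(0, 10000, (4, 20))
    d_news, d_chat = torch.randn(4, 16), torch.randn(4, 16)
    logits = model(tokens, 0.5 * d_news + 0.5 * d_chat)

The key design choice in this sketch is that the domain is a point in a vector space rather than an index into a fixed set, which is one way to operationalize adapting to many heterogeneous, overlapping domains.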
