Can domain adaptation be handled as analogies?

Abstract

Paper presented at the 11th International Conference on Language Resources and Evaluation (LREC 2018), held in Miyazaki, Japan, 7-12 May 2018.

Aspect identification in user-generated texts by supervised text classification may suffer performance degradation when applied to domains other than the one used for training. The vocabulary used to refer to aspects such as quality, price, or customer service may differ across domains and affect performance. In this paper, we present an experiment to validate a method for handling domain shifts when no labeled data is available for retraining. The system is based on the offset method as used for solving word analogy problems in vector semantic models such as word embeddings. Although the offset method did find relevant analogues in the new domain for the classifier's initially selected features, the classifiers did not deliver the expected results. The analysis showed that a number of words were found as analogues for many different initial features. This phenomenon was already described in the literature as 'default words' or 'hubs'. However, our data showed that it cannot be explained in terms of word frequency or distance to the question word, as suggested.

This work was supported by the Spanish TUNER project TIN2015-65308-C5-5-R (MINECO/FEDER, UE).
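The offset method mentioned in the abstract solves analogies of the form "a is to b as c is to ?" by vector arithmetic over word embeddings. The sketch below illustrates how such an analogue lookup could be applied to carry a classifier feature from a source domain to a target domain; it is a minimal illustration assuming a pre-trained gensim KeyedVectors model, and the embedding file, seed terms, and feature word are hypothetical examples, not those used in the paper.

    # Minimal sketch of the offset method for finding domain analogues.
    # Assumes a pre-trained word2vec-format embedding file ("embeddings.bin"
    # is a placeholder); the seed pair and feature word are illustrative only.
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

    def domain_analogue(feature, source_term, target_term, topn=1):
        """Solve 'source_term is to target_term as feature is to ?' by vector offset."""
        return vectors.most_similar(
            positive=[target_term, feature],
            negative=[source_term],
            topn=topn,
        )

    # e.g. map the feature 'waiter' from restaurant reviews to hotel reviews
    print(domain_analogue("waiter", "restaurant", "hotel"))

In this sketch, a single nearest neighbour is returned for each feature; the 'hub' effect described in the abstract would show up here as the same target-domain word being returned for many different input features.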
