Fusing contextual word embeddings for concreteness estimation

Abstract

Natural Language Processing (NLP) has a long history, and recent research has focused in particular on encoding meaning in a computable way. Word embeddings serve this purpose by representing words as real-valued vectors, allowing language tasks to be treated as mathematical problems, and they have been employed across many NLP tasks. In this work, we fuse different types of pre-trained word embeddings to estimate word concreteness. In evaluating this task, we examine how much contextual information affects the final results and how different word embeddings should be fused to maximize their performance. The best architecture in our study surpasses the winning solution of the Evalita 2020 word concreteness task.
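A minimal sketch of the general idea described above, not the authors' exact architecture: two word representations (standing in for a static embedding such as fastText and a contextual one such as BERT) are fused by concatenation and fed to a small regressor that predicts a concreteness score. All vectors and ratings below are random placeholders used only to keep the example self-contained.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_words, d_static, d_context = 500, 300, 768

# Placeholder vectors standing in for pre-trained static and contextual embeddings
static_emb = rng.normal(size=(n_words, d_static))
context_emb = rng.normal(size=(n_words, d_context))
# Placeholder gold concreteness ratings on a 1-7 scale
concreteness = rng.uniform(1.0, 7.0, size=n_words)

# Fusion by simple concatenation of the two embedding spaces
fused = np.concatenate([static_emb, context_emb], axis=1)

X_train, X_test, y_train, y_test = train_test_split(
    fused, concreteness, test_size=0.2, random_state=0
)

# Small feed-forward regressor over the fused representation
regressor = MLPRegressor(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
regressor.fit(X_train, y_train)
print("R^2 on held-out words:", regressor.score(X_test, y_test))
```

Concatenation is only one possible fusion strategy; averaging, weighted combinations, or learned gating layers are common alternatives when embedding spaces differ in dimensionality or scale.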