Word embeddings have made enormous inroads in recent years in a wide variety
of text mining applications. In this paper, we explore a word embedding-based
architecture for predicting the relevance of a role between two financial
entities within the context of natural language sentences. We propose a pooled
approach that uses a collection of sentences to
train word embeddings using the skip-gram word2vec architecture. We use the
word embeddings to obtain context vectors that are assigned one or more labels
based on manual annotations. We train a machine learning classifier using the
labeled context vectors, and use the trained classifier to predict contextual
role relevance on test data. Our approach serves as a good minimal-expertise
baseline for the task as it is simple and intuitive, uses open-source modules,
requires little feature crafting effort and performs well across roles.

Comment: DSMM 2017 workshop at ACM SIGMOD conference
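The pipeline described above can be sketched minimally as follows. This is an illustration only: the vocabulary, labels, and embedding values are hypothetical stand-ins (random vectors replace trained skip-gram embeddings, which in practice would come from a word2vec implementation such as gensim), and a simple nearest-centroid rule stands in for the machine learning classifier, which the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for skip-gram word2vec embeddings: random vectors keep the
# sketch self-contained; in practice these would be trained on the
# sentence collection.
vocab = ["bank", "acquired", "subsidiary", "loan", "issued", "bond"]
dim = 8
embeddings = {w: rng.normal(size=dim) for w in vocab}

def context_vector(sentence):
    """Pool a sentence into one vector by averaging its word embeddings."""
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    return np.mean(vecs, axis=0)

# Manually annotated sentences; the role labels are illustrative.
train = [
    ("bank acquired subsidiary", "acquirer"),
    ("bank issued bond", "issuer"),
]

# Minimal classifier: nearest labeled centroid in embedding space.
grouped = {}
for sent, label in train:
    grouped.setdefault(label, []).append(context_vector(sent))
centroids = {lbl: np.mean(vs, axis=0) for lbl, vs in grouped.items()}

def predict(sentence):
    """Assign the role label whose centroid is closest to the context vector."""
    cv = context_vector(sentence)
    return min(centroids, key=lambda lbl: np.linalg.norm(cv - centroids[lbl]))

print(predict("bank acquired subsidiary"))  # recovers the training label
```

On test data, `predict` would be applied to unseen sentences; the design choice being illustrated is that all role-relevance signal is carried by the pooled embedding vector, so any off-the-shelf classifier can be swapped in with no feature engineering.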