Joint Learning of Word and Label Embeddings for Sequence Labelling in Spoken Language Understanding
We propose an architecture to jointly learn word and label embeddings for
slot filling in spoken language understanding. The proposed approach encodes
labels using a combination of word embeddings and straightforward word-label
association from the training data. Compared to state-of-the-art methods,
our approach does not require label embeddings as part of the input and
therefore lends itself nicely to a wide range of model architectures. In
addition, our architecture computes contextual distances between words and
labels to avoid adding contextual windows, thus reducing memory footprint. We
validate the approach on established spoken dialogue datasets and show that it
can achieve state-of-the-art performance with far fewer trainable parameters.

Comment: Accepted for publication at ASRU 201
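The core idea of encoding labels from word embeddings and word-label associations, then scoring tokens against those label encodings, can be sketched as a toy example. This is a minimal illustration under assumed details, not the authors' implementation: the vocabulary, label set, and the choice to build each label embedding as the mean of embeddings of words associated with that label in (toy) training data are all hypothetical stand-ins.

```python
import numpy as np

# Toy sketch (not the paper's code): build label embeddings from word
# embeddings via a simple word-label association, then tag each token
# with the label whose embedding scores highest by dot product.

rng = np.random.default_rng(0)
dim = 8
vocab = ["book", "a", "flight", "to", "boston"]
labels = ["O", "B-toloc.city_name"]

# Word embeddings (randomly initialised here; learned jointly in practice).
word_emb = {w: rng.normal(size=dim) for w in vocab}

# Hypothetical word-label association from toy training data: each label's
# embedding is the mean of the embeddings of words observed with it.
label_words = {
    "O": ["book", "a", "flight", "to"],
    "B-toloc.city_name": ["boston"],
}
label_emb = {
    lab: np.mean([word_emb[w] for w in ws], axis=0)
    for lab, ws in label_words.items()
}

def predict(sentence):
    """Assign each token the label with the highest dot-product score."""
    preds = []
    for w in sentence:
        scores = {lab: float(word_emb[w] @ e) for lab, e in label_emb.items()}
        preds.append(max(scores, key=scores.get))
    return preds

print(predict(["book", "a", "flight", "to", "boston"]))
```

Because the labels live in the same space as the words, no label embedding is needed at the model's input, which is what lets the approach plug into a wide range of architectures.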