AWTE-BERT: Attending to Wordpiece Tokenization Explicitly on BERT for Joint Intent Classification and Slot Filling
Intent classification and slot filling are two core tasks in natural language understanding (NLU). The interaction between the two tasks means that joint models often outperform single-task designs. One promising solution, BERT (Bidirectional Encoder Representations from Transformers), achieves joint optimization of the two tasks. BERT adopts wordpiece tokenization, splitting each input token into multiple sub-tokens, which causes a mismatch between the lengths of the tokens and the labels. Previous methods use the hidden state corresponding to the first sub-token as input to the classifier, which limits performance improvement since some hidden semantic information is discarded during fine-tuning. To address this issue, we propose a novel joint model based on BERT that explicitly models the features of the multiple sub-tokens produced by wordpiece tokenization, thereby generating context features that contribute to slot filling. Specifically, we encode the hidden states corresponding to the multiple sub-tokens into a context vector via an attention mechanism. We then feed each context vector into the slot filling encoder, which preserves the integrity of the sentence. Experimental results demonstrate that our proposed model achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on two public benchmark datasets. In particular, the slot filling F1 score improves from 96.1 to 98.2 (2.1 points absolute) on the ATIS dataset.
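The core idea of the abstract, pooling the hidden states of a word's sub-tokens into a single context vector via attention, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of the first sub-token's state as the attention query and the plain dot-product scoring are assumptions made here for brevity.

```python
import numpy as np

def attention_pool(sub_token_states, query):
    """Pool the hidden states of one word's sub-tokens into a context vector.

    sub_token_states: (k, d) array, hidden states of k wordpiece sub-tokens.
    query: (d,) array, e.g. the first sub-token's state (an assumption here).
    """
    scores = sub_token_states @ query          # (k,) dot-product attention logits
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return weights @ sub_token_states          # (d,) attention-weighted sum

# Example: a word split into 3 sub-tokens with hidden size d = 4
rng = np.random.default_rng(0)
states = rng.standard_normal((3, 4))
context = attention_pool(states, states[0])    # one vector per original word
```

Each original word thus yields exactly one context vector regardless of how many sub-tokens it was split into, restoring the one-to-one alignment between tokens and slot labels.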
Graph LSTM with Context-Gated Mechanism for Spoken Language Understanding
Much research in recent years has focused on spoken language understanding (SLU), which usually involves two tasks: intent detection and slot filling. Since Yao et al. (2013), almost all SLU systems have been RNN-based, and these have been shown to suffer various limitations due to their sequential nature. In this paper, we propose to tackle this task with Graph LSTM, which first converts text into a graph and then utilizes a message passing mechanism to learn node representations. Not only does Graph LSTM address the limitations of sequential models, but it also helps exploit the semantic correlation between slots and intent. We further propose a context-gated mechanism to make better use of context information for slot filling. Our extensive evaluation shows that the proposed model outperforms state-of-the-art results by a large margin.
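The two mechanisms named in the abstract, message passing over a text graph and a context gate, can be combined in a minimal sketch. This is a simplified illustration under assumptions of this sketch, not the paper's Graph LSTM: neighbor aggregation is a plain mean, the gate is a per-node sigmoid, and the weight matrices `W_msg` and `W_gate` are hypothetical parameters introduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def message_pass(node_states, adjacency, W_msg, W_gate):
    """One round of message passing with a context gate (simplified sketch).

    node_states: (n, d) array of node representations (one node per token).
    adjacency: (n, n) 0/1 matrix defining the text graph's edges.
    Each node averages its neighbors' states into a context message; a
    sigmoid gate then decides how much of that context to mix into its
    own state, keeping the rest of the original representation.
    """
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    context = (adjacency @ node_states) / deg        # mean of neighbor states
    gate = sigmoid(node_states @ W_gate)             # per-node context gate
    return gate * (context @ W_msg) + (1 - gate) * node_states
```

Stacking several such rounds lets each node's representation absorb information from progressively larger graph neighborhoods, rather than only from its left-to-right predecessors as in an RNN.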