A Study on Dialog Act Recognition using Character-Level Tokenization
Dialog act recognition is an important step for dialog systems since it
reveals the intention behind the uttered words. Most approaches to the task use
word-level tokenization. In contrast, this paper explores the use of
character-level tokenization. This is relevant since there is information at
the sub-word level that is related to the function of the words and, thus,
their intention. We also explore the use of different context windows around
each token, which are able to capture important elements, such as affixes.
Furthermore, we assess the importance of punctuation and capitalization. We
performed experiments on both the Switchboard Dialog Act Corpus and the DIHANA
Corpus. In both cases, the experiments not only show that character-level
tokenization leads to better performance than the typical word-level
approaches, but also that both approaches are able to capture complementary
information. Thus, the best results are achieved by combining tokenization at
both levels.
Comment: 11 pages, 2 figures, 4 tables, AIMSA 201
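
As a rough illustration of the character-with-context representation the abstract describes, here is a minimal Python sketch; the window radius, padding symbol, and the function name char_windows are illustrative assumptions, not the paper's exact configuration:

    def char_windows(utterance, radius=2, pad="#"):
        # Represent each character together with `radius` neighbours on
        # each side, so sub-word cues such as affixes remain visible.
        # (radius and pad are illustrative choices, not the paper's setup.)
        chars = list(utterance)
        padded = [pad] * radius + chars + [pad] * radius
        return ["".join(padded[i:i + 2 * radius + 1]) for i in range(len(chars))]

    # Example: windows for a short utterance, keeping the punctuation
    # the paper finds informative.
    print(char_windows("okay?"))
    # ['##oka', '#okay', 'okay?', 'kay?#', 'ay?##']

Note how the final windows retain the question mark, which is one way punctuation can inform the dialog act (here, a question) at the sub-word level.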
Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation
We introduce the multiresolution recurrent neural network, which extends the
sequence-to-sequence framework to model natural language generation as two
parallel discrete stochastic processes: a sequence of high-level coarse tokens,
and a sequence of natural language tokens. There are many ways to estimate or
learn the high-level coarse tokens, but we argue that a simple extraction
procedure is sufficient to capture a wealth of high-level discourse semantics.
Such a procedure allows training the multiresolution recurrent neural network by
maximizing the exact joint log-likelihood over both sequences. In contrast to
the standard log-likelihood objective w.r.t. natural language tokens (word
perplexity), optimizing the joint log-likelihood biases the model towards
modeling high-level abstractions. We apply the proposed model to the task of
dialogue response generation in two challenging domains: the Ubuntu technical
support domain, and Twitter conversations. On Ubuntu, the model outperforms
competing approaches by a substantial margin, achieving state-of-the-art
results according to both automatic evaluation metrics and a human evaluation
study. On Twitter, the model appears to generate more relevant and on-topic
responses according to automatic evaluation metrics. Finally, our experiments
demonstrate that the proposed model is more adept at overcoming the sparsity of
natural language and is better able to capture long-term structure.
Comment: 21 pages, 2 figures, 10 tables
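
Read literally, the joint objective factorizes over the two token sequences. A plausible sketch in LaTeX (the exact conditioning of the natural language decoder on the coarse sequence is an assumption inferred from the abstract, not a quoted equation):

    \log P_\theta(\mathbf{c}, \mathbf{w})
      = \sum_{t=1}^{|\mathbf{c}|} \log P_\theta(c_t \mid c_{<t})
      + \sum_{t=1}^{|\mathbf{w}|} \log P_\theta(w_t \mid w_{<t}, \mathbf{c})

Maximizing this joint likelihood, rather than word perplexity alone, gives the coarse sequence its own term in the objective, which is what biases the model toward modeling high-level abstractions.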