Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation
Natural language generation (NLG) is a critical component of spoken dialogue
systems and can be divided into two phases: (1) sentence planning, which decides
the overall sentence structure, and (2) surface realization, which determines
specific word forms and flattens the sentence structure into a string. With the
rise of deep learning, most modern NLG models are based on a sequence-to-sequence
(seq2seq) model with an encoder-decoder structure; these NLG models generate
sentences from scratch by jointly optimizing sentence planning and surface
realization. However, such a simple encoder-decoder architecture usually fails
to generate complex and long sentences, because the decoder has difficulty
learning all grammar and diction knowledge well. This paper introduces an NLG
model with a hierarchical attentional decoder, where the hierarchy focuses on
leveraging linguistic knowledge in a specific order. The experiments show that
the proposed method significantly outperforms the traditional seq2seq model
with a smaller model size, and that the design of the hierarchical attentional
decoder can be applied to various NLG systems. Furthermore, different generation
strategies based on linguistic patterns are investigated and analyzed in order
to guide future NLG research.

Comment: accepted by the 7th IEEE Workshop on Spoken Language Technology (SLT
2018). arXiv admin note: text overlap with arXiv:1808.0274
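To make the hierarchical-decoder idea concrete, below is a minimal PyTorch sketch: a stack of attentional decoder levels, each refining the output of the previous one (e.g., a coarse sentence-planning level followed by a surface-realization level). The level split, the GRU/additive-attention choices, and all names are illustrative assumptions drawn from the abstract, not the authors' released code.

```python
import torch
import torch.nn as nn

class AttentionalDecoderLayer(nn.Module):
    """One GRU decoder stage with additive attention over encoder states."""
    def __init__(self, hidden_size):
        super().__init__()
        self.gru = nn.GRU(hidden_size * 2, hidden_size, batch_first=True)
        self.attn = nn.Linear(hidden_size * 2, 1)

    def forward(self, prev_output, enc_states, hidden):
        # prev_output: (B, T, H) output of the previous hierarchy level
        # enc_states:  (B, S, H) encoder states to attend over
        # hidden:      (1, B, H) current GRU state
        outs = []
        for t in range(prev_output.size(1)):
            query = hidden[-1].unsqueeze(1).expand(-1, enc_states.size(1), -1)
            scores = self.attn(torch.cat([query, enc_states], dim=-1)).squeeze(-1)
            weights = torch.softmax(scores, dim=-1)                 # (B, S)
            context = torch.bmm(weights.unsqueeze(1), enc_states)   # (B, 1, H)
            step_in = torch.cat([prev_output[:, t:t + 1], context], dim=-1)
            out, hidden = self.gru(step_in, hidden)
            outs.append(out)
        return torch.cat(outs, dim=1), hidden

class HierarchicalDecoder(nn.Module):
    """Stack of decoder levels, each refining the previous level's output."""
    def __init__(self, hidden_size, vocab_size, num_levels=2):
        super().__init__()
        self.levels = nn.ModuleList(
            [AttentionalDecoderLayer(hidden_size) for _ in range(num_levels)])
        self.proj = nn.Linear(hidden_size, vocab_size)

    def forward(self, dec_inputs, enc_states, hidden):
        x = dec_inputs
        for level in self.levels:  # e.g., sentence planning, then realization
            x, hidden = level(x, enc_states, hidden)
        return self.proj(x)        # per-token vocabulary logits
```

Because each level attends to the encoder separately, later levels only need to learn their own slice of linguistic knowledge, which is one plausible reading of why the hierarchy helps with long, complex sentences.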
Learning Multi-Level Information for Dialogue Response Selection by Highway Recurrent Transformer
With the increasing research interest in dialogue response generation, an
emerging branch formulates this task as next-sentence selection: given the
partial dialogue context, the goal is to determine the most probable next
sentence. Following the recent success of the Transformer model, this paper
proposes (1) a new variant of the attention mechanism based on multi-head
attention, called highway attention, and (2) a recurrent model based on the
Transformer and the proposed highway attention, called the Highway Recurrent
Transformer. Experiments on the response selection task of the seventh Dialog
System Technology Challenge (DSTC7) show that the proposed model is capable of
modeling both utterance-level and dialogue-level information; the
effectiveness of each module is further analyzed as well.
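Below is a minimal sketch of one plausible reading of "highway attention": standard multi-head attention whose output is mixed with its input through a learned highway gate, in the spirit of highway networks. The gating formula and the module name are assumptions based on the abstract; the paper's exact design may differ.

```python
import torch
import torch.nn as nn

class HighwayAttention(nn.Module):
    """Multi-head attention with a highway gate over its output (assumed design)."""
    def __init__(self, d_model, num_heads):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.gate = nn.Linear(d_model * 2, d_model)

    def forward(self, query, key_value):
        # Attend from the query sequence (e.g., a candidate response) over
        # the key/value sequence (e.g., the dialogue context).
        attended, _ = self.mha(query, key_value, key_value)
        # Highway mixing: carry the original query through where the gate is low.
        g = torch.sigmoid(self.gate(torch.cat([query, attended], dim=-1)))
        return g * attended + (1.0 - g) * query
```

In a recurrent arrangement such as the Highway Recurrent Transformer, a block like this could be applied over successive utterances so that utterance-level representations accumulate into dialogue-level ones; again, this wiring is our reading of the abstract rather than the authors' specification.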