Syntactically Look-Ahead Attention Network for Sentence Compression
Sentence compression is the task of compressing a long sentence into a short
one by deleting redundant words. In sequence-to-sequence (Seq2Seq) based
models, the decoder unidirectionally decides to retain or delete words. Thus,
it usually cannot explicitly capture the relationships between already decoded
words and the unseen words that will be decoded in future time steps. As a
result, to avoid generating ungrammatical sentences, the decoder sometimes
drops important words when compressing a sentence. To solve this problem, we
propose a novel Seq2Seq model, the syntactically look-ahead attention network
(SLAHAN), which can generate informative summaries by explicitly tracking both
dependency parent and child words during decoding and by capturing important
words that will be
decoded in the future. The results of the automatic evaluation on the Google
sentence compression dataset showed that SLAHAN achieved the best
kept-token-based-F1, ROUGE-1, ROUGE-2 and ROUGE-L scores of 85.5, 79.3, 71.3
and 79.1, respectively. SLAHAN also improved the summarization performance on
longer sentences. Furthermore, in the human evaluation, SLAHAN improved
informativeness without losing readability.

Comment: AAAI 2020
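
For readers unfamiliar with deletion-based compression, the following is a minimal Python sketch of the task setup the abstract describes: each input token receives a keep/delete decision, and the dependency parent and children of any token can be looked up from a head-index list. The function names, the toy sentence, the head indices, and the fixed decisions are all hypothetical illustrations and are not taken from the SLAHAN implementation.

from typing import List

def compress(tokens: List[str], keep: List[bool]) -> str:
    # Build the compression by retaining only the tokens marked "keep".
    return " ".join(tok for tok, k in zip(tokens, keep) if k)

def dependency_context(heads: List[int], i: int) -> dict:
    # heads[i] is the 0-based index of token i's dependency head; -1 marks the root.
    parent = heads[i] if heads[i] >= 0 else None
    children = [j for j, h in enumerate(heads) if h == i]
    return {"parent": parent, "children": children}

if __name__ == "__main__":
    # Toy sentence with a hand-written dependency head list (hypothetical data).
    tokens = ["The", "company", ",", "founded", "in", "1998", ",",
              "released", "a", "new", "phone"]
    heads  = [1, 7, 1, 1, 5, 3, 1, -1, 10, 10, 7]

    # A trained decoder would predict these decisions; here they are fixed for illustration.
    keep = [True, True, False, False, False, False, False,
            True, True, True, True]

    print(compress(tokens, keep))        # The company released a new phone
    print(dependency_context(heads, 7))  # parent and children of "released"

In this framing, a model such as the one the abstract describes would replace the fixed keep list with per-token predictions, using representations that summarize each token's dependency parents and children to anticipate important words that have not yet been decoded.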