Hierarchical Contextualized Representation for Named Entity Recognition
Named entity recognition (NER) models are typically built on a bi-directional LSTM (BiLSTM) architecture. Its sequential nature and single-sentence input prevent the full use of global information from a larger scope, both across the entire sentence and across the entire document (dataset). In this paper, we address these two deficiencies and propose a model augmented with a hierarchical contextualized representation: a sentence-level representation and a document-level representation. At the sentence level, we account for the different contributions of words within a sentence, enhancing the sentence representation learned by an independent BiLSTM via a label embedding attention mechanism. At the document level, a key-value memory network records document-aware information for each unique word, sensitive to the similarity of its surrounding context. The two levels of hierarchical contextualized representation are fused with each input token embedding and with the corresponding BiLSTM hidden state, respectively. Experimental results on three benchmark NER datasets (the CoNLL-2003 and OntoNotes 5.0 English datasets and the CoNLL-2002 Spanish dataset) show that we establish new state-of-the-art results.

Comment: Accepted by AAAI 2020
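A minimal PyTorch sketch of the two fusion steps described in the abstract may help make the architecture concrete. All module and tensor names (LabelAttention, KeyValueMemory, hidden, token_emb, etc.) are my own illustrative choices, not the authors' released code, and the memory read is simplified to a per-word gated lookup rather than a full multi-hop memory addressing scheme.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelAttention(nn.Module):
    """Sentence-level representation via label embedding attention: each
    token attends over the label embeddings, and the attention-weighted
    label vector is fused with the token's BiLSTM hidden state."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, hidden_dim)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) from an independent BiLSTM
        scores = hidden @ self.label_emb.weight.T        # (B, T, num_labels)
        attn = F.softmax(scores, dim=-1)
        label_ctx = attn @ self.label_emb.weight         # (B, T, hidden_dim)
        return torch.cat([hidden, label_ctx], dim=-1)    # fused representation


class KeyValueMemory(nn.Module):
    """Document-level representation (simplified): one key/value slot per
    unique word; the value is read with a gate driven by how similar the
    current context embedding is to the stored key."""

    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.keys = nn.Embedding(vocab_size, dim)    # stored context keys
        self.values = nn.Embedding(vocab_size, dim)  # document-aware values

    def forward(self, token_ids: torch.Tensor, token_emb: torch.Tensor) -> torch.Tensor:
        # token_ids: (B, T); token_emb: (B, T, dim)
        sim = F.cosine_similarity(token_emb, self.keys(token_ids), dim=-1)
        read = torch.sigmoid(sim).unsqueeze(-1) * self.values(token_ids)
        return torch.cat([token_emb, read], dim=-1)  # fused with token embedding
```

Consistent with the abstract, the document-level read is fused with the input token embedding (before the BiLSTM) while the label-attention context is fused with the BiLSTM hidden state (after it).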
Transformer-based multi-hop question generation
Question generation is the parallel task of question answering: given an input context and, optionally, an answer, the goal is to generate a relevant and fluent natural language question. Although recent work on question generation has seen success using sequence-to-sequence models, such models must handle increasingly complex input contexts to produce increasingly detailed questions. Multi-hop question generation is a more challenging task that aims to generate questions by connecting multiple facts drawn from multiple input contexts. In this work, we apply a transformer model to the task of multi-hop question generation without using any sentence-level supporting-fact information. We employ concepts that have proven effective in single-hop question generation, including a copy mechanism and placeholder tokens. We evaluate our model’s performance on the HotpotQA dataset using automated evaluation metrics, including BLEU, ROUGE, and METEOR, and show an improvement over previous work.
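As a concrete illustration of the copy mechanism mentioned above, the following is a minimal pointer-generator-style sketch in PyTorch: the decoder's vocabulary distribution is mixed with a copy distribution over source tokens, gated by a generation probability. The function and tensor names are hypothetical, my own rendering of the general technique rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F


def copy_distribution(vocab_logits, attn_weights, src_ids, p_gen):
    """Mix the decoder's vocabulary distribution with a copy distribution
    over source tokens, gated by p_gen in [0, 1].

    vocab_logits: (B, V)  decoder output over the vocabulary
    attn_weights: (B, S)  attention over source positions (sums to 1)
    src_ids:      (B, S)  source token ids (vocabulary indices)
    p_gen:        (B, 1)  probability of generating vs. copying
    """
    p_vocab = F.softmax(vocab_logits, dim=-1)      # (B, V)
    p_copy = torch.zeros_like(p_vocab)
    # scatter the attention mass onto the vocabulary ids of the source tokens
    p_copy.scatter_add_(1, src_ids, attn_weights)  # (B, V)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

Placeholder tokens typically serve a complementary purpose in this setup: rare entities in the source are replaced with generic placeholder ids so the model can reproduce them via the copy path instead of memorizing them in the vocabulary, though the exact scheme here is an assumption on my part.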