Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification
Implicit discourse relation classification is highly challenging due to the
absence of connectives as strong linguistic cues, which motivates the use of
annotated implicit connectives to improve recognition. We propose a feature
imitation framework in which an implicit relation network is driven to learn
from another neural network with access to connectives, and thus encouraged to
extract similarly salient features for accurate classification. We develop an
adversarial model to enable an adaptive imitation scheme through competition
between the implicit network and a rival feature discriminator. Our method
effectively transfers discriminability of connectives to the implicit features,
and achieves state-of-the-art performance on the PDTB benchmark. Comment: To appear in ACL2017
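The adversarial imitation scheme described above can be sketched minimally: a discriminator tries to tell implicit-relation features apart from connective-aware features, while the implicit network is scored on how well it fools the discriminator. Everything below (the logistic parameterization, the `discriminator` and `adversarial_losses` names, the feature shapes) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def discriminator(feats, w, b):
    # Logistic discriminator: probability that a feature vector came from
    # the connective-aware network (hypothetical parameters w, b).
    return 1.0 / (1.0 + np.exp(-(feats @ w + b)))

def adversarial_losses(implicit_feats, explicit_feats, w, b, eps=1e-9):
    # Discriminator objective: label connective-aware features 1,
    # implicit features 0.
    p_exp = discriminator(explicit_feats, w, b)
    p_imp = discriminator(implicit_feats, w, b)
    d_loss = -np.mean(np.log(p_exp + eps)) - np.mean(np.log(1.0 - p_imp + eps))
    # Implicit network's adversarial objective: make its features look
    # connective-aware, i.e. fool the discriminator.
    g_loss = -np.mean(np.log(p_imp + eps))
    return d_loss, g_loss
```

Alternating gradient steps on `d_loss` (discriminator) and `g_loss` (implicit network) would implement the competition the abstract describes.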
A Recurrent Neural Model with Attention for the Recognition of Chinese Implicit Discourse Relations
We introduce an attention-based Bi-LSTM for Chinese implicit discourse
relations and demonstrate that modeling argument pairs as a joint sequence can
outperform word order-agnostic approaches. Our model benefits from a partial
sampling scheme and is conceptually simple, yet achieves state-of-the-art
performance on the Chinese Discourse Treebank. We also visualize its attention
activity to illustrate the model's ability to selectively focus on the relevant
parts of an input sequence. Comment: To appear at ACL2017, code available at
https://github.com/sronnqvist/discourse-ablst
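The attention mechanism over a Bi-LSTM's hidden states can be sketched as a softmax-weighted pooling step. This is a generic additive-attention-style sketch under assumed shapes (a learned scoring vector `v`), not the repository's actual implementation.

```python
import numpy as np

def attention_pool(hidden, v):
    # hidden: (T, d) Bi-LSTM outputs over the joint argument-pair sequence;
    # v: (d,) learned scoring vector (a hypothetical parameterization).
    scores = hidden @ v                    # (T,) unnormalized relevance scores
    scores = scores - scores.max()         # subtract max for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention weights, sum to 1
    context = alpha @ hidden               # (d,) weighted sum of states
    return alpha, context
```

Visualizing `alpha` over the input tokens is what lets the model's selective focus be inspected, as the abstract mentions.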
Dialogue Act Recognition via CRF-Attentive Structured Network
Dialogue Act Recognition (DAR) is a challenging problem in dialogue
interpretation, which aims to attach semantic labels to utterances and
characterize the speaker's intention. Existing approaches formulate the DAR
problem as anything from multi-class classification to structured prediction,
but they rely on handcrafted feature extensions and struggle to capture
contextual structural dependencies between utterances. In this paper, we
consider the problem of
DAR from the viewpoint of extending richer Conditional Random Field (CRF)
structural dependencies without abandoning end-to-end training. We incorporate
hierarchical semantic inference with a memory mechanism for utterance
modeling, and then extend the structured attention network to the linear-chain
conditional random field layer which takes into account both contextual
utterances and corresponding dialogue acts. Extensive experiments on two major
benchmark datasets, Switchboard Dialogue Act (SWDA) and Meeting Recorder
Dialogue Act (MRDA), show that our method achieves better performance than
other state-of-the-art solutions to the problem. Remarkably, our method comes
within 2% of human annotator performance on SWDA. Comment: 10 pages, 4 figures
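The linear-chain CRF layer the abstract builds on decodes the best label sequence jointly over all utterances rather than per utterance. A minimal Viterbi-decoding sketch, assuming per-utterance emission scores and a label-transition matrix (shapes and names are illustrative, not the paper's code):

```python
import numpy as np

def viterbi(emissions, transitions):
    # emissions: (T, K) scores of K dialogue-act labels for T utterances;
    # transitions: (K, K) score of moving from label i to label j.
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    back = np.zeros((T, K), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        # total[i, j]: best path ending in label i, then transitioning to j.
        total = score[:, None] + transitions + emissions[t]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Trace the best path backwards from the highest-scoring final label.
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With zero transition scores this reduces to per-utterance argmax; a learned transition matrix is what encodes dependencies between consecutive dialogue acts.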
On the Importance of Word and Sentence Representation Learning in Implicit Discourse Relation Classification
Implicit discourse relation classification is one of the most difficult tasks
in shallow discourse parsing, as predicting a relation without explicit
connectives requires language understanding at both the text-span level and
the sentence level. Previous studies mainly focus on the interactions between
two arguments. We argue that a powerful contextualized representation module, a
bilateral multi-perspective matching module, and a global information fusion
module are all important to implicit discourse analysis. We propose a novel
model to combine these modules together. Extensive experiments show that our
proposed model outperforms BERT and other state-of-the-art systems on the
PDTB dataset by around 8% and on the CoNLL 2016 datasets by around 16%. We
also analyze the effectiveness of different modules in the implicit discourse
relation classification task and demonstrate how different levels of
representation learning affect the results. Comment: Accepted by IJCAI 2020
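The bilateral multi-perspective matching module mentioned above compares two span representations under several learned "perspectives", each of which reweights the feature dimensions before a cosine comparison. A minimal sketch under assumed shapes (the function name and parameter layout are illustrative, following the general BiMPM-style matching idea rather than this paper's exact model):

```python
import numpy as np

def multi_perspective_match(v1, v2, W):
    # v1, v2: (d,) representations of the two arguments;
    # W: (l, d) perspective weights, one row per perspective.
    a = W * v1                                  # (l, d) v1 under each perspective
    b = W * v2                                  # (l, d) v2 under each perspective
    num = (a * b).sum(axis=1)                   # per-perspective dot products
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-9
    return num / den                            # (l,) cosine score per perspective
```

The resulting l-dimensional match vector is what a downstream fusion module would consume alongside the contextualized representations.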