Implicit Discourse Relation Classification via Multi-Task Neural Networks
Without discourse connectives, classifying implicit discourse relations is a
challenging task and a bottleneck for building a practical discourse parser.
Previous research usually makes use of a single discourse framework, such as PDTB or RST, to improve classification performance on discourse relations. However, multiple corpora annotated under different discourse frameworks exist, and these corpora have internal connections. To exploit the combination of
different discourse corpora, we design related discourse classification tasks
specific to a corpus, and propose a novel Convolutional Neural Network embedded
multi-task learning system to synthesize these tasks by learning both unique
and shared representations for each task. The experimental results on the PDTB
implicit discourse relation classification task demonstrate that our model
achieves significant gains over baseline systems.
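The shared-plus-private representation idea the abstract describes can be sketched in a few lines of PyTorch: every task reads a shared convolutional feature extractor alongside its own private one. The layer sizes, task count, and class counts below are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiTaskCNN(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64,
                 n_tasks=3, n_classes=(4, 4, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One convolutional extractor shared by every task ...
        self.shared_conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        # ... plus one private extractor per task, so each task learns a
        # unique representation alongside the shared one.
        self.private_convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
             for _ in range(n_tasks)])
        self.classifiers = nn.ModuleList(
            [nn.Linear(2 * n_filters, c) for c in n_classes])

    def forward(self, token_ids, task_id):
        x = self.embed(token_ids).transpose(1, 2)            # (batch, emb, seq)
        shared = torch.relu(self.shared_conv(x)).max(dim=2).values
        private = torch.relu(self.private_convs[task_id](x)).max(dim=2).values
        features = torch.cat([shared, private], dim=1)       # unique + shared
        return self.classifiers[task_id](features)

model = MultiTaskCNN()
logits = model(torch.randint(0, 10000, (8, 30)), task_id=0)  # one task's batch
```

Training would alternate batches across the corpus-specific tasks, so the shared extractor sees all corpora while each private extractor sees only its own.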
When Are Tree Structures Necessary for Deep Learning of Representations?
Recursive neural models, which use syntactic parse trees to recursively
generate representations bottom-up, are a popular architecture. But there have
not been rigorous evaluations showing for exactly which tasks this syntax-based
method is appropriate. In this paper we benchmark {\bf recursive} neural models
against sequential {\bf recurrent} neural models (simple recurrent and LSTM
models), enforcing apples-to-apples comparison as much as possible. We
investigate 4 tasks: (1) sentiment classification at the sentence level and
phrase level; (2) matching questions to answer-phrases; (3) discourse parsing;
(4) semantic relation extraction (e.g., {\em component-whole} between nouns).
Our goal is to understand better when, and why, recursive models can
outperform simpler models. We find that recursive models help mainly on tasks
(like semantic relation extraction) that require associating headwords across a
long distance, particularly on very long sequences. We then introduce a method
for allowing recurrent models to achieve similar performance: breaking long
sentences into clause-like units at punctuation and processing them separately
before combining. Our results thus help clarify the limitations of both classes of models and suggest directions for improving recurrent models.
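As a rough illustration of the clause-splitting strategy (not the authors' exact model), the sketch below breaks a sentence at punctuation, encodes each clause-like unit with an LSTM, and combines the clause vectors with a second LSTM; the hashing tokenizer and sizes are placeholders.

```python
import re
import torch
import torch.nn as nn

EMB, HID, VOCAB = 50, 64, 10000  # illustrative sizes

def clause_units(sentence):
    """Break a sentence into clause-like units at punctuation."""
    return [c.strip() for c in re.split(r"[,;:.]", sentence) if c.strip()]

def to_ids(clause):
    # Toy hashing tokenizer; a real system would use a learned vocabulary.
    return torch.tensor([[hash(w) % VOCAB for w in clause.split()]])

class ClauseLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.clause_lstm = nn.LSTM(EMB, HID, batch_first=True)   # within a clause
        self.combine_lstm = nn.LSTM(HID, HID, batch_first=True)  # across clauses

    def forward(self, clauses):
        # clauses: list of (1, len_i) LongTensors, one per clause-like unit
        reps = []
        for ids in clauses:
            _, (h, _) = self.clause_lstm(self.embed(ids))
            reps.append(h[-1])                 # final hidden state of the clause
        stacked = torch.stack(reps, dim=1)     # (1, n_clauses, HID)
        _, (h, _) = self.combine_lstm(stacked)
        return h[-1]                           # sentence-level representation

sent = "The design, which engineers praised, nevertheless failed in testing."
vec = ClauseLSTM()([to_ids(c) for c in clause_units(sent)])  # shape (1, HID)
```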
Parsing Argumentation Structures in Persuasive Essays
In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
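To make the joint-inference idea concrete, here is a toy integer linear program in PuLP that trades off hypothetical base-classifier scores for component types and argumentative relations under simple structural constraints. The scores, constraints, and three-component example are invented for illustration and are not the paper's model.

```python
import pulp

components = ["c0", "c1", "c2"]
pairs = [(a, b) for a in components for b in components if a != b]

# Hypothetical base-classifier scores (invented numbers for illustration).
type_score = {("c0", "claim"): 0.7, ("c0", "premise"): 0.3,
              ("c1", "claim"): 0.2, ("c1", "premise"): 0.8,
              ("c2", "claim"): 0.4, ("c2", "premise"): 0.6}
rel_score = {("c1", "c0"): 0.9, ("c2", "c0"): 0.6, ("c2", "c1"): 0.3,
             ("c0", "c1"): 0.1, ("c0", "c2"): 0.1, ("c1", "c2"): 0.2}

t = {k: pulp.LpVariable(f"type_{k[0]}_{k[1]}", cat="Binary") for k in type_score}
r = {p: pulp.LpVariable(f"rel_{p[0]}_{p[1]}", cat="Binary") for p in pairs}

prob = pulp.LpProblem("argument_structure", pulp.LpMaximize)
# Objective: agree with the base classifiers' scores as much as possible.
prob += (pulp.lpSum(type_score[k] * t[k] for k in t)
         + pulp.lpSum(rel_score[p] * r[p] for p in r))

for c in components:
    # Each component receives exactly one type.
    prob += t[(c, "claim")] + t[(c, "premise")] == 1
    # A premise supports exactly one target; a claim supports none.
    prob += pulp.lpSum(r[(c, d)] for d in components if d != c) == t[(c, "premise")]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([f"{a} supports {b}" for (a, b) in pairs if r[(a, b)].value() == 1])
```

The point of the global step is visible even at this scale: a relation is only selected if the type assignment it implies is also worth paying for, so locally plausible but structurally inconsistent decisions are pruned.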
Less is More: A Lightweight and Robust Neural Architecture for Discourse Parsing
Complex feature extractors are widely employed for text representation
building. However, these complex feature extractors make the NLP systems prone
to overfitting especially when the downstream training datasets are relatively
small, which is the case for several discourse parsing tasks. Thus, we propose an alternative lightweight neural architecture that removes multiple complex feature extractors and uses only learnable self-attention modules to indirectly exploit pretrained language models, maximally preserving their generalizability. Experiments on
three common discourse parsing tasks show that powered by recent pretrained
language models, the lightweight architecture consisting of only two
self-attention layers obtains much better generalizability and robustness.
Meanwhile, it achieves comparable or even better system performance with fewer learnable parameters and less processing time.
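A minimal sketch of the frozen-encoder-plus-two-self-attention-layers idea follows, assuming a Hugging Face BERT backbone; the model name, head count, and mean pooling are my assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class LightweightHead(nn.Module):
    def __init__(self, lm_name="bert-base-uncased", n_classes=4):
        super().__init__()
        self.lm = AutoModel.from_pretrained(lm_name)
        for p in self.lm.parameters():   # keep the pretrained LM frozen so
            p.requires_grad = False      # its generalizability is preserved
        d = self.lm.config.hidden_size
        # The only learnable feature extractors: two self-attention layers.
        self.attn1 = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
        self.attn2 = nn.MultiheadAttention(d, num_heads=8, batch_first=True)
        self.out = nn.Linear(d, n_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():            # the LM is consulted, never updated
            h = self.lm(input_ids=input_ids,
                        attention_mask=attention_mask).last_hidden_state
        pad = attention_mask == 0
        h, _ = self.attn1(h, h, h, key_padding_mask=pad)
        h, _ = self.attn2(h, h, h, key_padding_mask=pad)
        return self.out(h.mean(dim=1))   # simple pooled prediction

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["because the data was small, the model overfit"],
            return_tensors="pt")
logits = LightweightHead()(batch["input_ids"], batch["attention_mask"])
```

Only the two attention layers and the output projection are trained, which is what keeps the parameter count and overfitting risk low on small discourse datasets.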
Comparing Word Representations for Implicit Discourse Relation Classification
This paper presents a detailed comparative framework for assessing the usefulness of unsupervised word representations for identifying so-called implicit discourse relations. Specifically, we compare standard one-hot word pair representations against low-dimensional ones based on Brown clusters and word embeddings. We also consider various word vector combination schemes for deriving discourse segment representations from word vectors, and compare representations based either on all words or limited to head words. Our main finding is that denser representations systematically outperform sparser ones and give performance at or above the state of the art without the need for additional hand-crafted features.
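The contrast between sparse word-pair features and dense combined embeddings can be shown in a few lines of NumPy; the tiny vocabulary and 4-dimensional vectors below are toy stand-ins for Brown clusters or pretrained embeddings.

```python
import numpy as np

# Toy 4-dimensional "embeddings"; a real system would load pretrained vectors.
emb = {"because": np.array([0.1, 0.9, 0.0, 0.2]),
       "rain":    np.array([0.8, 0.1, 0.3, 0.0]),
       "wet":     np.array([0.7, 0.2, 0.4, 0.1])}

def dense_segment(words):
    """Dense segment representation: averaging is one combination scheme."""
    return np.mean([emb[w] for w in words if w in emb], axis=0)

def onehot_pairs(arg1, arg2, vocab):
    """Sparse baseline: one-hot indicator for every cross-segment word pair."""
    feat = np.zeros(len(vocab) ** 2)
    for w1 in arg1:
        for w2 in arg2:
            if w1 in vocab and w2 in vocab:
                feat[vocab[w1] * len(vocab) + vocab[w2]] = 1.0
    return feat

arg1, arg2 = ["because", "rain"], ["wet"]
vocab = {w: i for i, w in enumerate(emb)}
dense = np.concatenate([dense_segment(arg1), dense_segment(arg2)])  # 8 dims
sparse = onehot_pairs(arg1, arg2, vocab)         # |V|^2 dims, almost all zero
```

Even in this toy setting, the sparse pair space grows quadratically with the vocabulary while the dense representation stays fixed-size, which is the practical reason the denser representations scale better.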
Basic tasks of sentiment analysis
Subjectivity detection is the task of identifying objective and subjective
sentences. Objective sentences are those which do not exhibit any sentiment.
A sentiment analysis engine should therefore identify and filter out objective sentences before further analysis, e.g., polarity detection. In subjective sentences, opinions can often be expressed on one or multiple topics. Aspect extraction is a subtask of sentiment analysis that consists of identifying opinion targets in opinionated text, i.e., detecting the specific aspects of a product or service that the opinion holder is praising or complaining about.
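A toy end-to-end illustration of this two-stage pipeline, with invented lexicons standing in for trained subjectivity and polarity classifiers:

```python
# Hypothetical lexicons; real systems would use trained classifiers.
SUBJECTIVE = {"love", "hate", "great", "terrible", "disappointing"}
POSITIVE = {"love", "great"}
NEGATIVE = {"hate", "terrible", "disappointing"}

def is_subjective(sentence):
    """Stage 1: subjectivity detection filters out objective sentences."""
    return any(w in SUBJECTIVE for w in sentence.lower().split())

def polarity(sentence):
    """Stage 2: polarity detection runs only on subjective sentences."""
    words = set(sentence.lower().split())
    return "positive" if len(words & POSITIVE) >= len(words & NEGATIVE) else "negative"

doc = ["The phone has a 6.1-inch screen.",      # objective: skipped
       "I love the camera.",                    # subjective: analysed
       "The battery life is disappointing."]
print([(s, polarity(s)) for s in doc if is_subjective(s)])
```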