377 research outputs found
Less is More: A Lightweight and Robust Neural Architecture for Discourse Parsing
Complex feature extractors are widely employed for building text representations. However, these complex feature extractors make NLP systems prone to overfitting, especially when the downstream training datasets are relatively small, as is the case for several discourse parsing tasks. We therefore propose an alternative lightweight neural architecture that removes the multiple complex feature extractors and uses only learnable self-attention modules to indirectly exploit pretrained neural language models, in order to maximally preserve their generalizability. Experiments on three common discourse parsing tasks show that, powered by recent pretrained language models, the lightweight architecture consisting of only two self-attention layers obtains much better generalizability and robustness. Meanwhile, it achieves comparable or even better system performance with fewer learnable parameters and less processing time.
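The core idea above can be sketched as a small self-attention head stacked twice on top of frozen pretrained-LM token embeddings. This is an illustrative sketch only: the embedding size, random weights, and the use of a single attention head are assumptions, not the paper's exact configuration.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (seq_len, d) token embeddings, standing in for frozen
    pretrained-language-model outputs.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
d = 8                              # toy embedding size (assumption)
tokens = rng.normal(size=(5, d))   # placeholder for frozen LM embeddings

# Two stacked learnable self-attention layers, as in the lightweight parser;
# only these small projection matrices would be trained downstream.
h = tokens
for _ in range(2):
    w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
    h = self_attention(h, w_q, w_k, w_v)

print(h.shape)  # same sequence shape, refined representations: (5, 8)
```

Keeping the trainable part this small is what limits overfitting on small discourse corpora: the pretrained encoder stays fixed, so its generalization is preserved.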
HILDA: A Discourse Parser Using Support Vector Machine Classification
Discourse structures have a central role in several computational tasks, such as question answering or dialogue generation. In particular, the framework of Rhetorical Structure Theory (RST) offers a sound formalism for hierarchical text organization. In this article, we present HILDA, an implemented discourse parser based on RST and Support Vector Machine (SVM) classification. SVM classifiers are trained and applied to discourse segmentation and relation labeling. By combining labeling with a greedy bottom-up tree building approach, we are able to create accurate discourse trees in linear time complexity. Importantly, our parser can parse entire texts, whereas the publicly available parser SPADE (Soricut and Marcu 2003) is limited to sentence-level analysis. HILDA outperforms other discourse parsers for tree structure construction and discourse relation labeling. For the discourse parsing task, our system reaches 78.3% of the performance level of human annotators. Compared to a state-of-the-art rule-based discourse parser, our system achieves a performance increase of 11.6%.
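The greedy bottom-up tree building described above can be sketched as follows. The scoring function here is a hypothetical stand-in for HILDA's trained SVM classifiers, and this naive loop does not reproduce the paper's linear-time implementation; it only illustrates the merge strategy.

```python
def greedy_bottom_up(edus, score):
    """Greedily merge the highest-scoring pair of adjacent spans until a
    single tree covers all EDUs (elementary discourse units).

    `score(left, right)` stands in for HILDA's SVM classifiers, which
    judge how likely two adjacent spans are to be joined by a relation.
    """
    spans = list(edus)  # leaves of the future discourse tree
    while len(spans) > 1:
        # Pick the best adjacent pair to merge into a subtree.
        best = max(range(len(spans) - 1),
                   key=lambda i: score(spans[i], spans[i + 1]))
        spans[best:best + 2] = [(spans[best], spans[best + 1])]
    return spans[0]

def span_len(s):
    """Number of EDUs covered by a span (a string leaf or a pair)."""
    return 1 if isinstance(s, str) else span_len(s[0]) + span_len(s[1])

# Toy scorer (assumption): prefer merging short spans first, mimicking
# the tendency to build small subtrees before attaching them higher up.
tree = greedy_bottom_up(["e1", "e2", "e3", "e4"],
                        lambda l, r: -(span_len(l) + span_len(r)))
print(tree)  # (('e1', 'e2'), ('e3', 'e4'))
```

Because each merge commits immediately to the locally best pair, the parser never backtracks, which is what makes the greedy strategy fast at the cost of possible globally suboptimal trees.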
- …