Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation
Traditional NLP has long held (supervised) syntactic parsing to be necessary for
successful higher-level language understanding. The recent advent of end-to-end
neural language learning, self-supervised via language modeling (LM), and its
success on a wide range of language understanding tasks, however, call this
belief into question. In this work, we empirically investigate the usefulness of
supervised parsing for semantic language understanding in the context of
LM-pretrained transformer networks. Relying on the established fine-tuning
paradigm, we first couple a pretrained transformer with a biaffine parsing
head, aiming to infuse explicit syntactic knowledge from Universal Dependencies
(UD) treebanks into the transformer. We then fine-tune the model for language
understanding (LU) tasks and measure the effect of the intermediate parsing
training (IPT) on downstream LU performance. Results from both monolingual
English and zero-shot language transfer experiments (with intermediate
target-language parsing) show that explicit formalized syntax, injected into
transformers through intermediate supervised parsing, has a very limited and
inconsistent effect on downstream LU performance. Our results, coupled with our
analysis of transformers' representation spaces before and after intermediate
parsing, take a significant step towards answering an essential question: how
(un)availing is supervised parsing for high-level semantic language
understanding in the era of large neural models?
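
To make the described setup concrete (a dependency-parsing head placed on top of a pretrained transformer and trained on UD treebanks before LU fine-tuning), here is a minimal PyTorch sketch of a Dozat & Manning-style biaffine arc scorer. The class and function names (`BiaffineArcHead`, `parsing_loss`), the projection dimension, and the loss details are illustrative assumptions, not the authors' implementation; a complete parser would also score dependency relation labels with a second biaffine classifier.

```python
import torch
import torch.nn as nn

class BiaffineArcHead(nn.Module):
    """Biaffine arc scorer (Dozat & Manning 2017 style) attached on top of
    a pretrained transformer encoder. Names and sizes are illustrative."""

    def __init__(self, hidden_dim: int, arc_dim: int = 512):
        super().__init__()
        # Separate projections for tokens acting as heads vs. as dependents.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.GELU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.GELU())
        # Biaffine weight; the extra input row absorbs a per-head bias term.
        self.W = nn.Parameter(torch.empty(arc_dim + 1, arc_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim) contextual token embeddings.
        dep = self.dep_mlp(hidden)                    # (B, N, A)
        head = self.head_mlp(hidden)                  # (B, N, A)
        ones = dep.new_ones(dep.shape[:-1] + (1,))
        dep = torch.cat([dep, ones], dim=-1)          # (B, N, A+1)
        # scores[b, i, j]: score of token j being the syntactic head of token i.
        return torch.einsum("bia,ac,bjc->bij", dep, self.W, head)

def parsing_loss(arc_scores: torch.Tensor, gold_heads: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over candidate heads for each token. gold_heads holds
    UD head indices, with -100 marking padding/special tokens to ignore."""
    return nn.functional.cross_entropy(
        arc_scores.flatten(0, 1),   # (B*N, N) distributions over head positions
        gold_heads.flatten(),       # (B*N,) gold head indices
        ignore_index=-100,
    )
```

In the intermediate-parsing-training regime the abstract evaluates, a head of this kind would be trained jointly with the transformer's own parameters on UD treebanks, after which the parsing head is discarded and the (now syntax-exposed) encoder is fine-tuned on the downstream LU task.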