Slide, Constrain, Parse, Repeat: Synchronous Sliding Windows for Document AMR Parsing
The sliding window approach provides an elegant way to handle contexts of
sizes larger than the Transformer's input window, for tasks like language
modeling. Here we extend this approach to the sequence-to-sequence task of
document parsing. For this, we exploit recent progress in transition-based
parsing to implement a parser with synchronous sliding windows over source and
target. We develop an oracle and a parser for document-level AMR by expanding
on Structured-BART such that it leverages source-target alignments and
constrains decoding to guarantee synchronicity and consistency across
overlapping windows. We evaluate our oracle and parser on the Abstract
Meaning Representation (AMR) 3.0 corpus. On the Multi-Sentence
development set of AMR 3.0, we show that our transition oracle loses only 8%
of the gold cross-sentential links despite using a sliding window. In practice,
this approach also results in a high-quality document-level parser with
manageable memory requirements. Our proposed system performs on par with the
state-of-the-art pipeline approach to document-level AMR parsing on the
Multi-Sentence AMR 3.0 corpus while maintaining sentence-level parsing
performance.
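To make the window mechanics concrete, below is a minimal, self-contained Python sketch of the idea: overlapping source windows, with source-target alignment used to force the actions in the overlap to be repeated verbatim by the next window. Everything here (the toy parse_window, the (source_index, action) representation, the window and stride sizes) is an illustrative assumption, not the paper's actual Structured-BART implementation.

    def parse_window(src_tokens, offset, forced):
        """Toy stand-in for the constrained parser: emit one SHIFT-like
        action per source token, reproducing `forced` actions verbatim in
        the overlap so consecutive windows stay synchronous."""
        actions = list(forced)
        for i in range(len(forced), len(src_tokens)):
            actions.append((offset + i, f"SHIFT({src_tokens[i]})"))
        return actions

    def sliding_window_parse(tokens, window=8, stride=4):
        """Slide a fixed-size window over the document; the source index
        stored with each action (a stand-in for source-target alignment)
        tells us which actions fall in the overlap with the next window
        and must be forced during its decoding."""
        doc_actions, forced, start = [], [], 0
        while start < len(tokens):
            src = tokens[start:start + window]
            out = parse_window(src, start, forced)
            # Actions aligned past the stride boundary lie in the overlap
            # with the next window; constrain the next decode to repeat them.
            forced = [a for a in out if a[0] >= start + stride]
            doc_actions.extend(a for a in out if a[0] < start + stride)
            if start + window >= len(tokens):
                doc_actions.extend(forced)  # flush the final overlap
                break
            start += stride
        return doc_actions

    print(sliding_window_parse("the cat sat on the mat near the door today".split()))

In this toy setting the constraint is trivially satisfiable; the point is the control flow: each window re-decodes the overlap under a forced prefix, which is what guarantees consistency of the document-level action sequence across windows.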
AMR Parsing with Instruction Fine-tuned Pre-trained Language Models
Language models instruction fine-tuned on a collection of instruction-annotated
datasets (FLAN) have proven highly effective at improving model performance and
generalization to unseen tasks. However, most standard parsing tasks, including
abstract meaning representation (AMR), universal dependency (UD), and semantic
role labeling (SRL), have been excluded from the FLAN collections for both model
training and evaluation. In this paper, we take one such instruction fine-tuned
pre-trained language model, FLAN-T5, and fine-tune it for AMR parsing. Our
extensive experiments on various AMR parsing tasks including AMR2.0, AMR3.0 and
BioAMR indicate that fine-tuned FLAN-T5 models outperform previous
state-of-the-art models across all tasks. In addition, full fine-tuning followed
by parameter-efficient fine-tuning with LoRA further improves model performance,
setting new state-of-the-art Smatch scores on AMR2.0 (86.4), AMR3.0 (84.9) and
BioAMR (82.3)
- …
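As a rough illustration of the two-stage recipe the abstract describes (full fine-tuning followed by LoRA), here is a minimal Python sketch using the HuggingFace peft library. The local checkpoint path, LoRA rank, scaling factor, dropout, and target modules are all assumptions for illustration, not the authors' reported configuration.

    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model, TaskType

    # Assumed: a FLAN-T5 model already fully fine-tuned on AMR
    # sequences and saved locally (path is hypothetical).
    model = AutoModelForSeq2SeqLM.from_pretrained("./flan-t5-amr-full-ft")
    tokenizer = AutoTokenizer.from_pretrained("./flan-t5-amr-full-ft")

    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=16,                       # rank of the low-rank update (assumed)
        lora_alpha=32,              # scaling factor (assumed)
        lora_dropout=0.05,          # assumed
        target_modules=["q", "v"],  # T5 attention query/value projections
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA adapters now train

Because the frozen base weights already encode the fully fine-tuned parser, the LoRA stage only learns small low-rank corrections, which is what makes this second pass cheap relative to another round of full fine-tuning.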