Abstract Meaning Representation (AMR) parsing aims to extract an abstract
semantic graph from a given sentence. Sequence-to-sequence approaches,
which linearize the semantic graph into a sequence of nodes and edges and
generate the linearized graph directly, have achieved good performance.
However, we observed that these approaches suffer from structure loss
accumulation during the decoding process, leading to a much lower F1-score for
nodes and edges decoded later compared to those decoded earlier. To address
this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced
framework. RGL defines both default and reverse linearization orders of an AMR
graph, where most structures near the end of the default order appear near the
beginning of the reversed order and vice versa. RGL incorporates the reversed
linearization into the original AMR parser through a two-pass self-distillation
mechanism, which guides the model when it generates the default
linearizations. Our analysis shows that our proposed method significantly
mitigates the problem of structure loss accumulation, outperforming the
previous best AMR parsing model by 0.8 and 0.5 Smatch points on the AMR 2.0
and AMR 3.0 datasets, respectively. The code is available at
https://github.com/pkunlp-icler/AMR_reverse_graph_linearization.
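
As a rough illustration of the two orders (not the paper's actual algorithm: the toy graph, the node and role labels, and the rule of visiting children in the opposite order are assumptions for exposition), the following Python sketch contrasts a default depth-first linearization with a reversed one, so that structures emitted late in one order appear early in the other:

```python
# Minimal sketch (illustrative only, not the authors' implementation) of a
# default vs. reversed depth-first linearization of a toy AMR-like graph.
toy_amr = {            # head -> list of (role, child); labels are made up
    "w / want-01": [(":ARG0", "b / boy"), (":ARG1", "g / go-02")],
    "g / go-02":   [(":ARG0", "b")],   # re-entrant reference to the boy node
    "b / boy":     [],
}

def linearize(graph, root, reverse=False):
    """Depth-first linearization; reverse=True visits children right-to-left."""
    tokens = [root]
    for role, child in (reversed(graph.get(root, []))
                        if reverse else graph.get(root, [])):
        tokens.append(role)
        tokens.extend(linearize(graph, child, reverse))
    return tokens

print(" ".join(linearize(toy_amr, "w / want-01")))               # default order
print(" ".join(linearize(toy_amr, "w / want-01", reverse=True)))  # reversed order
```

In this toy example the :ARG1 subgraph, which comes last in the default order, is generated first in the reversed order, which is the property the two-pass self-distillation exploits.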