Translation-based AMR parsers have recently gained popularity due to their
simplicity and effectiveness. They predict linearized graphs as free text,
avoiding explicit structure modeling. However, this simplicity neglects
structural locality in AMR graphs and introduces unnecessary tokens to
represent coreferences. In this paper, we introduce new target forms of AMR
parsing and a novel model, CHAP, which is equipped with causal hierarchical
attention and a pointer mechanism, enabling the integration of graph structure
into the Transformer decoder. We empirically explore alternative
modeling options. Experiments show that our model outperforms baseline models
on four out of five benchmarks in the setting with no additional training data.
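To make the two named components concrete, below is a minimal, illustrative PyTorch sketch. It is not CHAP's actual formulation: the `parents` array, the ancestor-only attention rule, and the dot-product pointer scorer are all hypothetical stand-ins for how structure-aware causal masking and position-pointing coreference could look inside a Transformer decoder.

```python
# Illustrative sketch only (NOT the paper's exact mechanism), assuming a
# parent-index array over the linearized graph:
#  1) a causal mask restricted by graph structure ("causal hierarchical
#     attention"): each target token attends to itself and its ancestors
#     in the partially built graph, rather than to the full prefix;
#  2) a pointer head that scores earlier positions so re-entrant nodes
#     (coreferences) are referenced by position instead of re-generated.
import torch

def hierarchical_causal_mask(parents: list[int]) -> torch.Tensor:
    """Boolean mask of shape (T, T): True where attention is allowed.
    parents[i] is the index of token i's parent in the linearization,
    or -1 for the root (a hypothetical encoding, for illustration)."""
    T = len(parents)
    allowed = torch.zeros(T, T, dtype=torch.bool)
    for i in range(T):
        j = i
        while j != -1:
            allowed[i, j] = True  # self and every ancestor up the chain
            j = parents[j]
    return allowed

def pointer_scores(dec_states: torch.Tensor) -> torch.Tensor:
    """Dot-product scores over strictly earlier positions; the argmax per
    row is the position a coreferent node points back to.
    dec_states: (T, d) decoder hidden states."""
    scores = dec_states @ dec_states.T                          # (T, T)
    causal = torch.tril(torch.ones_like(scores, dtype=torch.bool), -1)
    return scores.masked_fill(~causal, float("-inf"))

# Toy usage: a 5-token linearization; token 4 shares a parent with token 1.
parents = [-1, 0, 1, 1, 0]
print(hierarchical_causal_mask(parents).int())

h = torch.randn(5, 8)
print(pointer_scores(h)[1:].argmax(-1))  # candidate antecedent per position
```

The sketch exists only to show where such a structure-restricted mask and pointer head would plug in; the paper's target forms and attention layout should be taken from the paper itself.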