Abstract Meaning Representation (AMR) highlights the core semantic
information of text in a graph structure. Recently, pre-trained language models
(PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation.
However, PLMs are typically pre-trained on textual data and are therefore
sub-optimal for modeling structural knowledge. To this end, we investigate
graph self-supervised training to improve the structure awareness of PLMs over
AMR graphs. In particular, we introduce two graph auto-encoding strategies for
graph-to-graph pre-training and four tasks to integrate text and graph
information during pre-training. We further design a unified framework to
bridge the gap between pre-training and fine-tuning tasks. Experiments on both
AMR parsing and AMR-to-text generation show the superiority of our model. To
our knowledge, we are the first to consider pre-training on semantic graphs.
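
To make the idea of graph auto-encoding over AMR concrete, the following is a minimal, hypothetical sketch of masked node reconstruction on a linearized (PENMAN-style) AMR graph. The toy token list, the mask_nodes helper, and the masking ratio are illustrative assumptions for exposition, not the paper's actual pre-training strategies.

    import random

    # A toy linearized AMR graph in PENMAN-style tokens for
    # "The boy wants to go." (illustrative example only).
    amr_tokens = ["(", "want-01", ":ARG0", "(", "boy", ")",
                  ":ARG1", "(", "go-02", ":ARG0", "boy", ")", ")"]

    MASK = "<mask>"
    # Structural tokens (brackets and relation labels) are kept intact;
    # only concept nodes are candidates for masking.
    STRUCTURE = {"(", ")"} | {t for t in amr_tokens if t.startswith(":")}

    def mask_nodes(tokens, ratio=0.35, seed=0):
        """Randomly mask concept nodes so a seq2seq PLM can be trained
        to reconstruct the original graph sequence (graph-to-graph
        denoising auto-encoding)."""
        rng = random.Random(seed)
        return [MASK if t not in STRUCTURE and rng.random() < ratio else t
                for t in tokens]

    source = mask_nodes(amr_tokens)  # encoder input: corrupted graph
    target = amr_tokens              # decoder target: original graph
    print(" ".join(source))

Under this sketch, the corrupted linearization serves as encoder input and the intact linearization as the decoder target, so the model must recover masked concepts from the surrounding graph structure.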