Abstract Meaning Representation (AMR) is a semantic parsing formalism that
aims to provide a semantic graph abstraction of a given text.
Current approaches are based on autoregressive language models such as BART or
T5, fine-tuned with teacher forcing to generate a linearized version of the
AMR graph from a sentence. In this paper, we present LeakDistill, a model and
method that modifies the Transformer architecture with structural adapters to
explicitly incorporate graph information into the learned representations and
improve AMR parsing performance. Our experiments
show that, by employing word-to-node alignments to embed graph structural
information into the encoder at training time, we can obtain state-of-the-art
AMR parsing through self-knowledge distillation, even without additional
data. We release the code at
\url{http://www.github.com/sapienzanlp/LeakDistill}.