Understanding and generating language with Abstract Meaning Representation
Abstract Meaning Representation (AMR) is a semantic representation for natural
language that encompasses annotations related to traditional tasks such as
named entity recognition (NER), semantic role labeling (SRL), word sense
disambiguation (WSD), and coreference resolution. AMR represents sentences
as graphs, where nodes represent concepts and edges represent semantic
relations between them.
Sentences are represented as graphs and not trees because nodes can have
multiple incoming edges, called reentrancies. This thesis investigates the impact
of reentrancies on parsing (from text to AMR) and generation (from AMR
to text). For the parsing task, we showed that it is possible to use techniques
from tree parsing and adapt them to deal with reentrancies. To better analyze
the quality of AMR parsers, we developed a set of fine-grained metrics
and found that state-of-the-art parsers predict reentrancies poorly. We therefore
provided a classification of the linguistic phenomena that cause reentrancies,
categorized the types of errors parsers make with respect to reentrancies, and
showed that correcting these errors can lead to significant improvements. For the generation
task, we showed that neural encoders with access to reentrancies outperform
those without, demonstrating that reentrancies also matter for generation.
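To make the notion of reentrancy concrete, the following minimal sketch (not from the thesis; the sentence and node names are illustrative) encodes the AMR of "The boy wants to eat" as a set of labeled edges. The node for "boy" is the ARG0 of both want-01 and eat-01, so it has two incoming edges, which is exactly what makes the structure a graph rather than a tree:

```python
from collections import Counter

# Illustrative toy AMR for "The boy wants to eat":
# edges are (source concept, relation, target concept) triples.
edges = [
    ("want-01", "ARG0", "boy"),
    ("want-01", "ARG1", "eat-01"),
    ("eat-01", "ARG0", "boy"),  # second incoming edge to "boy"
]

# A node is reentrant when it has more than one incoming edge.
incoming = Counter(target for _, _, target in edges)
reentrant = [node for node, count in incoming.items() if count > 1]
print(reentrant)  # -> ['boy']
```

A tree-based parser would have to duplicate the "boy" node (or drop one of its edges); handling it as a single shared node is the adaptation the parsing experiments address.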
This thesis also discusses the problem of using AMR for languages other
than English. Annotating new AMR datasets for other languages is an expensive
process and requires defining annotation guidelines for each new language.
It is therefore reasonable to ask whether we can share AMR annotations
across languages. We provided evidence that AMR datasets for English
can be successfully transferred to other languages: we trained parsers for Italian,
Spanish, German, and Chinese to investigate the cross-linguality of AMR.
We showed cases where translational divergences between languages pose a
problem and cases where they do not. In summary, this thesis demonstrates
the impact of reentrancies in AMR and provides insights into AMR for
languages that do not yet have AMR datasets.