Comparison by Conversion: Reverse-Engineering UCCA from Syntax and Lexical Semantics
Building robust natural language understanding systems will require a clear
characterization of whether and how various linguistic meaning representations
complement each other. To perform a systematic comparative analysis, we
evaluate the mapping between meaning representations from different frameworks
using two complementary methods: (i) a rule-based converter, and (ii) a
supervised delexicalized parser that parses to one framework using only
information from the other as features. We apply these methods to convert the
STREUSLE corpus (with syntactic and lexical semantic annotations) to UCCA (a
graph-structured full-sentence meaning representation). Both methods yield
surprisingly accurate target representations, close to fully supervised UCCA
parser quality---indicating that UCCA annotations are partially redundant with
STREUSLE annotations. Despite this substantial convergence between frameworks,
we find several important areas of divergence.

Comment: COLING 2020 camera-ready