
    A Transition-Based Directed Acyclic Graph Parser for UCCA

    We present the first parser for UCCA, a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures, and non-terminal nodes corresponding to complex semantic units. To our knowledge, the conjunction of these formal properties is not supported by any existing parser. Our transition-based parser, which uses a novel transition set and features based on bidirectional LSTMs, has value not just for UCCA parsing: its ability to handle more general graph structures can inform the development of parsers for other semantic DAG structures, and in languages that frequently use discontinuous structures. Comment: 16 pages; accepted as a long paper at ACL 2017.
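
    To make the structural challenge concrete, the sketch below shows a stack-and-buffer parser state whose output is a set of labelled edges rather than a tree: a node may receive edges from several parents (reentrancy), and fresh non-terminal nodes can be created for complex semantic units. The TransitionState class, the shift/node/edge operations, and the toy analysis of "John gave everything up" are illustrative assumptions only; they are not the transition set, features, or annotation scheme defined in the paper.

    # Minimal sketch (Python) of a transition state that can build DAGs.
    # Names and operations are assumptions for illustration, not the
    # parser described in the abstract.
    class TransitionState:
        def __init__(self, tokens):
            self.buffer = list(tokens)   # unread input tokens
            self.stack = []              # nodes currently being attached
            self.edges = set()           # (parent, label, child) triples
            self._fresh = 0              # counter for new non-terminal nodes

        def shift(self):
            """Move the next input token onto the stack."""
            self.stack.append(self.buffer.pop(0))

        def node(self):
            """Create a non-terminal node standing for a complex semantic unit."""
            self._fresh += 1
            unit = f"U{self._fresh}"
            self.stack.append(unit)
            return unit

        def edge(self, parent, label, child):
            # No single-head constraint: a child may gain several parents,
            # which is what yields DAG rather than tree structures.
            self.edges.add((parent, label, child))

    state = TransitionState(["John", "gave", "everything", "up"])
    for _ in range(4):
        state.shift()
    scene = state.node()                   # one unit spanning the scene
    state.edge(scene, "A", "John")         # participant
    state.edge(scene, "P", "gave")         # process, discontinuous with "up"
    state.edge(scene, "P", "up")
    state.edge(scene, "A", "everything")
    outer = state.node()                   # a second unit that shares "John"
    state.edge(outer, "A", "John")         # two parents for "John": reentrancy
    print(sorted(state.edges))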

    High-level Understanding of Visual Content in Learning Materials through Graph Neural Networks


    Probabilistic graph formalisms for meaning representations

    In recent years, many datasets have become available that represent natural language semantics as graphs. To use these datasets in natural language processing (NLP), we require probabilistic models of graphs. Finite-state models have been very successful for NLP tasks on strings and trees because they are probabilistic and composable. Are there equivalent models for graphs? In this thesis, we survey several graph formalisms, focusing on whether they are probabilistic and composable, and we contribute several new results. In particular, we study the directed acyclic graph automata languages (DAGAL), the monadic second-order graph languages (MSOGL), and the hyperedge replacement languages (HRL). We prove that DAGAL cannot be made probabilistic, we explain why MSOGL also most likely cannot be made probabilistic, and we review the fact that HRL are not composable. We then review a subfamily of HRL and MSOGL: the regular graph languages (RGL; Courcelle 1991), which have not been widely studied, and particularly have not been studied in an NLP context. Although Courcelle (1991) only sketches a proof, we present a full, more NLP-accessible proof that RGL are a subfamily of MSOGL. We prove that RGL are probabilistic and composable, and we provide a novel Earley-style parsing algorithm for them that runs in time linear in the size of the input graph. We compare RGL to two other new formalisms: the restricted DAG languages (RDL; Björklund et al. 2016) and the tree-like languages (TLL; Matheja et al. 2015). We show that RGL and RDL are incomparable; TLL and RDL are incomparable; and either RGL are incomparable to TLL, or RGL are contained within TLL. This thesis provides a clearer picture of this field from an NLP perspective, and suggests new theoretical and empirical research directions.
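
    A small, self-contained illustration of the "probabilistic" half of that question: a grammar-based formalism is probabilistic when each nonterminal's rule probabilities sum to one, so the grammar defines a distribution over derivations. The toy grammar, the is_probabilistic check, and the sampler below are assumptions for illustration only; they operate on strings and do not implement hyperedge replacement, regular graph languages, or the thesis's Earley-style parser.

    import random

    # A made-up probabilistic grammar: each nonterminal maps to a list of
    # (expansion, probability) pairs whose probabilities sum to one.
    GRAMMAR = {
        "S": [(["NP", "VP"], 1.0)],
        "NP": [(["John"], 0.6), (["Mary"], 0.4)],
        "VP": [(["sleeps"], 0.7), (["sees", "NP"], 0.3)],
    }

    def is_probabilistic(grammar, tol=1e-9):
        """True if every nonterminal's rule probabilities sum to one."""
        return all(abs(sum(p for _, p in rules) - 1.0) < tol
                   for rules in grammar.values())

    def sample(symbol="S"):
        """Sample a derivation top-down; unknown symbols are terminals."""
        if symbol not in GRAMMAR:
            return [symbol]
        rules = GRAMMAR[symbol]
        rhs = random.choices([r for r, _ in rules],
                             weights=[p for _, p in rules])[0]
        return [tok for sym in rhs for tok in sample(sym)]

    print(is_probabilistic(GRAMMAR))   # True: the grammar defines a distribution
    print(" ".join(sample()))          # e.g. "Mary sees John"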

    Towards a Catalogue of Linguistic Graph Banks

    Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although such graphs are formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.
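
    To give a sense of what such shared statistics might cover, the sketch below computes a few structural properties commonly reported for linguistic graphs (node and edge counts, roots, reentrant nodes, acyclicity) over a made-up edge list. It is an illustrative assumption only, not the proposed reference implementation.

    from collections import defaultdict

    # Toy labelled edge list; "John" has two parents, i.e. it is reentrant.
    EDGES = [("gave", "A", "John"), ("gave", "A", "everything"),
             ("gave", "P", "up"), ("wanted", "A", "John")]

    def graph_statistics(edges):
        """Report basic structural statistics for one labelled graph."""
        nodes = {n for src, _, tgt in edges for n in (src, tgt)}
        indeg = defaultdict(int)
        succ = defaultdict(set)
        for src, _, tgt in edges:
            indeg[tgt] += 1
            succ[src].add(tgt)

        def acyclic():
            seen, on_path = set(), set()
            def visit(n):
                if n in on_path:          # back edge: a cycle
                    return False
                if n in seen:
                    return True
                seen.add(n); on_path.add(n)
                ok = all(visit(m) for m in succ[n])
                on_path.discard(n)
                return ok
            return all(visit(n) for n in nodes)

        return {
            "nodes": len(nodes),
            "edges": len(edges),
            "roots": sorted(n for n in nodes if indeg[n] == 0),
            "reentrant": sorted(n for n in nodes if indeg[n] > 1),
            "acyclic": acyclic(),
        }

    print(graph_statistics(EDGES))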