A Transition-Based Directed Acyclic Graph Parser for UCCA
We present the first parser for UCCA, a cross-linguistically applicable
framework for semantic representation, which builds on extensive typological
work and supports rapid annotation. UCCA poses a challenge for existing parsing
techniques, as it exhibits reentrancy (resulting in DAG structures),
discontinuous structures and non-terminal nodes corresponding to complex
semantic units. To our knowledge, the conjunction of these formal properties is
not supported by any existing parser. Our transition-based parser, which uses a
novel transition set and features based on bidirectional LSTMs, has value not
just for UCCA parsing: its ability to handle more general graph structures can
inform the development of parsers for other semantic DAG structures, and in
languages that frequently use discontinuous structures.
Comment: 16 pages; accepted as a long paper at ACL 2017
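To make the formal challenge concrete, here is a minimal sketch of a transition system whose edge transition does not pop its child, so a node can later receive an edge from a second parent, yielding reentrancy and hence a DAG. The transition names and the toy derivation are illustrative only, not the paper's actual transition set or BiLSTM features.

```python
# Minimal sketch of a transition system that can build reentrant (DAG)
# structures. Transition names and semantics are simplified illustrations,
# not the paper's actual transition set.

class State:
    def __init__(self, tokens):
        self.buffer = list(tokens)   # unread input tokens
        self.stack = []              # nodes being attached
        self.edges = set()           # (parent, label, child) triples

    def shift(self):
        """Move the next token from the buffer onto the stack."""
        self.stack.append(self.buffer.pop(0))

    def left_edge(self, label):
        """Make the stack top a parent of the node below it, WITHOUT
        popping the child: the child can later gain another parent,
        which is exactly what produces reentrancy."""
        parent, child = self.stack[-1], self.stack[-2]
        self.edges.add((parent, label, child))

    def reduce(self):
        """Discard the stack top once it needs no further attachments."""
        self.stack.pop()

# Toy derivation: "John" becomes the A(rgument) of both predicates,
# so it ends up with two parents -- a DAG, not a tree.
s = State(["John", "came", "left"])
s.shift()            # stack: [John]
s.shift()            # stack: [John, came]
s.left_edge("A")     # came -A-> John
s.reduce()           # stack: [John]
s.shift()            # stack: [John, left]
s.left_edge("A")     # left -A-> John  (second parent for John)

parents_of_john = {p for (p, _, c) in s.edges if c == "John"}
print(sorted(parents_of_john))   # ['came', 'left'] -> reentrancy
```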
Probabilistic graph formalisms for meaning representations
In recent years, many datasets have become available that represent natural language
semantics as graphs. To use these datasets in natural language processing (NLP), we
require probabilistic models of graphs. Finite-state models have been very successful
for NLP tasks on strings and trees because they are probabilistic and composable. Are
there equivalent models for graphs? In this thesis, we survey several graph formalisms,
focusing on whether they are probabilistic and composable, and we contribute several
new results. In particular, we study the directed acyclic graph automata languages
(DAGAL), the monadic second-order graph languages (MSOGL), and the hyperedge
replacement languages (HRL). We prove that DAGAL cannot be made probabilistic,
we explain why MSOGL also most likely cannot be made probabilistic, and we review
the fact that HRL are not composable. We then review a subfamily of HRL and
MSOGL: the regular graph languages (RGL; Courcelle 1991), which have not been
widely studied, and particularly have not been studied in an NLP context. Although
Courcelle (1991) only sketches a proof, we present a full, more NLP-accessible proof
that RGL are a subfamily of MSOGL. We prove that RGL are probabilistic and composable,
and we provide a novel Earley-style parsing algorithm for them that runs in
time linear in the size of the input graph. We compare RGL to two other new formalisms:
the restricted DAG languages (RDL; Björklund et al. 2016) and the tree-like
languages (TLL; Matheja et al. 2015). We show that RGL and RDL are incomparable;
TLL and RDL are incomparable; and either RGL are incomparable to TLL, or RGL
are contained within TLL. This thesis provides a clearer picture of this field from an
NLP perspective, and suggests new theoretical and empirical research directions.
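As an illustration of what "probabilistic" means for a grammar over graphs, the sketch below defines a toy weighted graph grammar in which each nonterminal's rule weights sum to one, so the grammar induces a proper distribution over derivations. It is deliberately simple (it only builds tree-shaped graphs) and is not an implementation of RGL, HRL, or the thesis's Earley-style algorithm; all rule names and weights are invented.

```python
import random
from itertools import count

# Toy weighted graph grammar: "probabilistic" means that, for every
# nonterminal, the rule weights sum to 1, so the grammar defines a
# distribution over derivations (and hence over the graphs they build).
# Rules are invented for illustration; this is NOT an RGL/HRL implementation.
# Each rule: (weight, node_label, child_nonterminals).
GRAMMAR = {
    "PRED": [(0.6, "sleep", ["ENT"]),
             (0.4, "want", ["ENT", "PRED"])],   # recursion: nested predicates
    "ENT":  [(1.0, "person", [])],
}

def check_probabilistic(grammar):
    """The grammar is probabilistic iff each nonterminal's weights sum to 1."""
    for nt, rules in grammar.items():
        total = sum(w for w, _, _ in rules)
        assert abs(total - 1.0) < 1e-9, f"{nt}: weights sum to {total}"

def sample(grammar, nt, graph, ids):
    """Sample a derivation top-down; return the id of the node it creates."""
    rules = grammar[nt]
    _, label, children = random.choices(rules, weights=[w for w, _, _ in rules])[0]
    node = next(ids)
    graph["nodes"][node] = label
    for child_nt in children:
        graph["edges"].append((node, sample(grammar, child_nt, graph, ids)))
    return node

check_probabilistic(GRAMMAR)
g = {"nodes": {}, "edges": []}
sample(GRAMMAR, "PRED", g, count())
print(g)   # e.g. {'nodes': {0: 'want', 1: 'person', 2: 'sleep'}, 'edges': [(0, 1), (0, 2), ...]}
```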
Functional Distributional Semantics: Learning Linguistically Informed Representations from a Precisely Annotated Corpus
The aim of distributional semantics is to design computational techniques that can automatically learn the meanings of words from a body of text. The twin challenges are: how do we represent meaning, and how do we learn these representations? The current state of the art is to represent meanings as vectors – but vectors do not correspond to any traditional notion of meaning. In particular, there is no way to talk about truth, a crucial concept in logic and formal semantics.
In this thesis, I develop a framework for distributional semantics which answers this challenge. The meaning of a word is not represented as a vector, but as a function, mapping entities (objects in the world) to probabilities of truth (the probability that the word is true of the entity). Such a function can be interpreted both in the machine learning sense of a classifier, and in the formal semantic sense of a truth-conditional function. This simultaneously allows both the use of machine learning techniques to exploit large datasets, and also the use of formal semantic techniques to manipulate the learnt representations.
I define a probabilistic graphical model, which incorporates a probabilistic generalisation of model theory (allowing a strong connection with formal semantics), and which generates semantic dependency graphs (allowing it to be trained on a corpus). This graphical model provides a natural way to model logical inference, semantic composition, and context-dependent meanings, where Bayesian inference plays a crucial role.
I demonstrate the feasibility of this approach by training a model on WikiWoods, a parsed version of the English Wikipedia, and evaluating it on three tasks. The results indicate that the model can learn information not captured by vector space models.
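The core move, a word meaning as a truth-conditional classifier, can be sketched in a few lines. Everything below is invented for illustration (the feature vectors, weights, and the logistic form); the thesis learns such functions from a parsed corpus within a graphical model rather than hand-setting them.

```python
import math

# Sketch of the core idea: a word's meaning is a function from entities
# to probabilities of truth, i.e. a classifier. The entity features and
# weights below are made up for illustration.

def semantic_function(weights, bias):
    """Return a truth-conditional function: entity -> P(word is true of it)."""
    def prob_true(entity_features):
        score = sum(w * x for w, x in zip(weights, entity_features)) + bias
        return 1.0 / (1.0 + math.exp(-score))   # logistic link
    return prob_true

# Hypothetical 3-dimensional entity representations.
dog_fn = semantic_function(weights=[2.0, -1.0, 0.5], bias=-0.5)
fido   = [1.2, 0.1, 0.8]    # an entity likely to be a dog
table  = [-0.9, 1.5, 0.0]   # an entity unlikely to be a dog

print(f"P(dog(fido))  = {dog_fn(fido):.2f}")   # ~0.90
print(f"P(dog(table)) = {dog_fn(table):.2f}")  # ~0.02
```

Because the same function is both a classifier and a truth-conditional predicate, it can be trained with machine learning while still supporting formal-semantic operations such as inference over truth values.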
Towards a Catalogue of Linguistic Graph Banks
Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.
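A sketch of the kind of reference implementation envisaged: given a graph, report a few structural statistics (reentrancy, rootedness, acyclicity) that distinguish graph banks from tree banks. The statistic names are illustrative, not the catalogue's actual inventory.

```python
from collections import defaultdict

# Illustrative "common statistics" for a single graph from a graph bank.
# A graph is a tree iff it is acyclic, has one root, and no reentrancy.

def is_acyclic(nodes, edges):
    """Kahn's algorithm: the graph is a DAG iff every node can be sorted."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for s, t in edges:
        indeg[t] += 1
        succ[s].append(t)
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return seen == len(nodes)

def graph_stats(nodes, edges):
    """edges: iterable of (source, target) pairs over `nodes`."""
    indeg = defaultdict(int)
    for _, t in edges:
        indeg[t] += 1
    reentrant = [n for n in nodes if indeg[n] > 1]   # nodes with >1 parent
    roots = [n for n in nodes if indeg[n] == 0]
    return {
        "nodes": len(nodes),
        "edges": len(edges),
        "roots": len(roots),
        "reentrant_nodes": len(reentrant),
        "reentrancy_rate": len(reentrant) / max(len(nodes), 1),
        "acyclic": is_acyclic(nodes, edges),
    }

# Example: a small reentrant DAG (node "c" has two parents).
print(graph_stats(["a", "b", "c"], [("a", "c"), ("b", "c"), ("a", "b")]))
```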