Backpropagating through Structured Argmax using a SPIGOT
We introduce the structured projection of intermediate gradients optimization
technique (SPIGOT), a new method for backpropagating through neural networks
that include hard-decision structured predictions (e.g., parsing) in
intermediate layers. SPIGOT requires no marginal inference, unlike structured
attention networks (Kim et al., 2017) and some reinforcement learning-inspired
solutions (Yogatama et al., 2017). Like so-called straight-through estimators
(Hinton, 2012), SPIGOT defines gradient-like quantities associated with
intermediate nondifferentiable operations, allowing backpropagation before and
after them; SPIGOT's proxy aims to ensure that, after a parameter update, the
intermediate structure will remain well-formed.
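To make the idea concrete, here is a minimal PyTorch sketch of a SPIGOT-style proxy gradient for the unstructured (plain categorical) case. The paper itself handles structured argmax, so the simplex projection below stands in for its projection onto the feasible set, and the step size ETA is an illustrative assumption, not a tuned value.

```python
import torch

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u, _ = torch.sort(v, descending=True, dim=-1)
    cssv = u.cumsum(dim=-1) - 1.0
    ks = torch.arange(1, v.shape[-1] + 1, device=v.device, dtype=v.dtype)
    rho = ((u - cssv / ks) > 0).sum(dim=-1, keepdim=True)
    theta = cssv.gather(-1, rho - 1) / rho.to(v.dtype)
    return torch.clamp(v - theta, min=0.0)

class SpigotArgmax(torch.autograd.Function):
    """Hard argmax with a SPIGOT-style proxy gradient (toy, unstructured case)."""

    ETA = 1.0  # proxy update step size (an assumption for illustration)

    @staticmethod
    def forward(ctx, scores):
        # Forward pass: a hard one-hot decision, as in pipeline decoding.
        hard = torch.zeros_like(scores)
        hard.scatter_(-1, scores.argmax(dim=-1, keepdim=True), 1.0)
        ctx.save_for_backward(hard)
        return hard

    @staticmethod
    def backward(ctx, grad_output):
        # SPIGOT proxy: step the decision against the downstream gradient,
        # project it back onto the simplex so it stays well-formed, and
        # pass the difference back as the gradient for the scores.
        (hard,) = ctx.saved_tensors
        target = project_simplex(hard - SpigotArgmax.ETA * grad_output)
        return hard - target
```

Calling `SpigotArgmax.apply(scores)` in place of a softmax yields hard decisions in the forward pass while still providing a usable gradient signal during training.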
We experiment on two structured NLP pipelines: syntactic-then-semantic
dependency parsing, and semantic parsing followed by sentiment classification.
We show that training with SPIGOT leads to a larger improvement on the
downstream task than a modularly-trained pipeline, the straight-through
estimator, and structured attention, reaching a new state of the art on
semantic dependency parsing.
Comment: ACL 2018
Massive Choice, Ample Tasks (MaChAmp): A Toolkit for Multi-task Learning in NLP
Transfer learning, particularly approaches that combine multi-task learning
with pre-trained contextualized embeddings and fine-tuning, has advanced the
field of Natural Language Processing tremendously in recent years. In this
paper we present MaChAmp, a toolkit for easy fine-tuning of contextualized
embeddings in multi-task settings. The benefits of MaChAmp are its flexible
configuration options and its support for a variety of natural language
processing tasks in a uniform toolkit, from text classification and sequence
labeling to dependency parsing, masked language modeling, and text generation.
Comment: https://machamp-nlp.github.io
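The general recipe such toolkits implement, a shared pre-trained encoder fine-tuned jointly with one lightweight decoder head per task, can be sketched as follows. This is a generic illustration, not MaChAmp's actual API; the class and parameter names are hypothetical.

```python
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared contextualized encoder feeding one decoder head per task."""

    def __init__(self, encoder, hidden_dim, task_sizes):
        super().__init__()
        self.encoder = encoder  # e.g., a pre-trained transformer
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden_dim, n_labels)
             for task, n_labels in task_sizes.items()}
        )

    def forward(self, inputs, task):
        hidden = self.encoder(inputs)    # contextualized embeddings
        return self.heads[task](hidden)  # task-specific logits
```

Fine-tuning then interleaves batches from the different tasks, updating the shared encoder and the corresponding head.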
Linguistically-Informed Neural Architectures for Lexical, Syntactic and Semantic Tasks in Sanskrit
The primary focus of this thesis is to make Sanskrit manuscripts more
accessible to the end-users through natural language technologies. The
morphological richness, compounding, free word order, and low-resource
nature of Sanskrit pose significant challenges for developing deep learning
solutions. We identify four fundamental tasks that are crucial for developing
robust NLP technology for Sanskrit: word segmentation, dependency parsing,
compound type identification, and poetry analysis. The first task, Sanskrit
Word Segmentation (SWS), is a fundamental text processing step for all
downstream applications. It is challenging due to the sandhi phenomenon,
which modifies characters at word boundaries (a toy illustration follows
this paragraph). Similarly, the existing
dependency parsing approaches struggle with morphologically rich and
low-resource languages like Sanskrit. Compound type identification is also
challenging for Sanskrit due to the context-sensitive semantic relation between
components. All these challenges result in sub-optimal performance in NLP
applications like question answering and machine translation. Finally, Sanskrit
poetry has not been extensively studied in computational linguistics.
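To make the sandhi difficulty concrete, here is a toy illustration (not from the thesis): at word boundaries, characters fuse, so the surface string is not the concatenation of its words and a segmenter cannot rely on whitespace. The two rules below are standard external sandhi, transliterated; the splitter's inverse problem is recovering the word sequence from the fused form.

```python
def join_with_sandhi(w1, w2):
    """Apply two toy external-sandhi rules (illustrative, far from exhaustive)."""
    # Vowel coalescence: a/aa + a/aa -> aa, e.g. deva + ālaya -> devālaya
    if w1[-1] in ("a", "ā") and w2[0] in ("a", "ā"):
        return w1[:-1] + "ā" + w2[1:]
    # Visarga sandhi: -aḥ before a voiced consonant -> -o, e.g.
    # rāmaḥ + gacchati -> rāmo gacchati
    if w1.endswith("aḥ") and w2[0] in "gjdbnmyrlv":
        return w1[:-2] + "o " + w2
    return w1 + " " + w2

print(join_with_sandhi("deva", "ālaya"))      # devālaya
print(join_with_sandhi("rāmaḥ", "gacchati"))  # rāmo gacchati
```

Segmentation must invert this many-to-one mapping, which is why simple dictionary lookup or whitespace splitting fails for Sanskrit text.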
While addressing these challenges, this thesis makes various contributions:
(1) The thesis proposes linguistically-informed neural architectures for these
tasks. (2) We showcase the interpretability and multilingual extension of the
proposed systems. (3) Our proposed systems achieve state-of-the-art performance.
(4) Finally, we present a neural toolkit named SanskritShala, a web-based
application that provides real-time analysis of input for various NLP tasks.
Overall, this thesis contributes to making Sanskrit manuscripts more accessible
by developing robust NLP technology and releasing various resources, datasets,
and a web-based toolkit.
Comment: Ph.D. dissertation
Distributed Representations for Compositional Semantics
The mathematical representation of semantics is a key issue for Natural
Language Processing (NLP). Much research has been devoted to finding ways
of representing the semantics of individual words in vector spaces.
Distributional approaches, that is, distributed representations that exploit
co-occurrence statistics of large corpora, have proved popular and
successful across a number of tasks. However, natural language usually comes in
structures beyond the word level, with meaning arising not only from the
individual words but also from the structure that contains them at the phrasal
or sentential level. Modelling the compositional process by which the meaning of
an utterance arises from the meaning of its parts is an equally fundamental
task of NLP.
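As a minimal sketch of what composition means operationally, here are two standard composition functions from the literature: simple addition, and a single affine layer of the kind used as the building block of recursive neural composers. These are illustrative baselines under assumed toy vectors, not the specific models proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# Toy word vectors; in practice these are learned from co-occurrence
# statistics or by a neural model.
vocab = {w: rng.standard_normal(dim) for w in ("black", "cat")}

def compose_add(u, v):
    """Additive composition: the phrase vector is the sum of its parts."""
    return u + v

def compose_affine(u, v, W, b):
    """Learned composition: a nonlinearity over an affine map of the
    concatenated pair; stacking this recursively follows a parse tree."""
    return np.tanh(W @ np.concatenate([u, v]) + b)

W = rng.standard_normal((dim, 2 * dim)) * 0.1
b = np.zeros(dim)
black_cat = compose_affine(vocab["black"], vocab["cat"], W, b)
```

Additive composition ignores word order and syntax; learned composition functions like the affine map above are what allow phrase vectors to reflect structure.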
This dissertation explores methods for learning distributed semantic
representations and models for composing these into representations for larger
linguistic units. Our underlying hypothesis is that neural models are a
suitable vehicle for learning semantically rich representations and that such
representations in turn are suitable vehicles for solving important tasks in
natural language processing. The contribution of this thesis is a thorough
evaluation of our hypothesis, as part of which we introduce several new
approaches to representation learning and compositional semantics, as well as
multiple state-of-the-art models which apply distributed semantic
representations to various tasks in NLP.
Comment: DPhil Thesis, University of Oxford, Submitted and accepted in 2014