Findings of the E2E NLG Challenge
This paper summarises the experimental setup and results of the first shared
task on end-to-end (E2E) natural language generation (NLG) in spoken dialogue
systems. Recent end-to-end generation systems are promising since they reduce
the need for data annotation. However, they are currently limited to small,
delexicalised datasets. The E2E NLG shared task aims to assess whether these
novel approaches can generate better-quality output by learning from a dataset
containing higher lexical richness, syntactic complexity and diverse discourse
phenomena. We compare 62 systems submitted by 17 institutions, covering a wide
range of approaches, including machine learning architectures -- with the
majority implementing sequence-to-sequence models (seq2seq) -- as well as
systems based on grammatical rules and templates. Comment: Accepted to INLG 201
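The abstract notes that earlier end-to-end systems were limited to delexicalised datasets. As a hedged illustration of what delexicalisation means in this setting (the slot names, values, and placeholder format below are invented for the example, not taken from the E2E data), slot values from the meaning representation are replaced by placeholder tokens before training and restored after generation:

```python
# Illustrative sketch of delexicalisation: slot values from a meaning
# representation (MR) are swapped for placeholder tokens, shrinking the
# vocabulary a seq2seq model must learn, then restored afterwards.
# The MR format and placeholder naming here are hypothetical.

def delexicalise(text, mr):
    """Replace each slot value with a placeholder like NAME_SLOT."""
    for slot, value in mr.items():
        text = text.replace(value, slot.upper() + "_SLOT")
    return text

def relexicalise(text, mr):
    """Restore the original slot values after generation."""
    for slot, value in mr.items():
        text = text.replace(slot.upper() + "_SLOT", value)
    return text

mr = {"name": "The Eagle", "food": "French"}
sentence = "The Eagle serves French food."
delex = delexicalise(sentence, mr)
assert delex == "NAME_SLOT serves FOOD_SLOT food."
assert relexicalise(delex, mr) == sentence
```

The E2E dataset's point, per the abstract, is precisely that richer lexical variety makes this kind of simple placeholder substitution less sufficient.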
Dynamic Composition of Functions for Modular Learning
Compositionality is useful to reduce the complexity of machine learning models and increase their generalization capabilities, because new problems can be linked to the composition of existing solutions. Recent work has shown that compositional approaches can offer substantial benefits over a wide variety of tasks, from multi-task learning and visual question answering to natural language inference, among others. A key variant is functional compositionality, where a meta-learner composes different (trainable) functions into complex machine learning models. In this thesis, I generalize existing approaches to functional compositionality under the umbrella of the routing paradigm, where arbitrary trainable functions are 'stacked' to form complex machine learning models.
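The routing paradigm described above can be sketched minimally as follows. This is an assumption-laden toy: in the thesis the router is itself a trainable meta-learner, whereas here the routing plan is a fixed list, and the modules are trivial functions rather than neural networks.

```python
# Toy sketch of the routing paradigm: a router composes reusable function
# modules into a model. Here the "plan" stands in for a trainable router,
# and the modules stand in for trainable functions.

modules = {
    "double": lambda x: 2 * x,
    "increment": lambda x: x + 1,
    "square": lambda x: x * x,
}

def route(x, plan):
    """Apply modules in the order chosen by the routing plan."""
    for name in plan:
        x = modules[name](x)
    return x

# New problems reuse existing modules in new compositions:
assert route(3, ["double", "increment"]) == 7   # (3*2)+1
assert route(3, ["increment", "square"]) == 16  # (3+1)^2
```

The generalization benefit claimed in the abstract comes from the second assertion: the same trained modules are recombined for a task they were never jointly trained on.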
Posterior Regularization for Learning with Side Information and Weak Supervision
Supervised machine learning techniques have been very successful for a variety of tasks and domains including natural language processing, computer vision, and computational biology. Unfortunately, their use often requires creation of large problem-specific training corpora that can make these methods prohibitively expensive. At the same time, we often have access to external problem-specific information that we cannot always easily incorporate. We might know how to solve the problem in another domain (e.g. for a different language); we might have access to cheap but noisy training data; or a domain expert might be available who would be able to guide a human learner much more efficiently than by simply creating an IID training corpus. A key challenge for weakly supervised learning is then how to incorporate such kinds of auxiliary information arising from indirect supervision.
In this thesis, we present Posterior Regularization, a probabilistic framework for structured, weakly supervised learning. Posterior Regularization is applicable to probabilistic models with latent variables and exports a language for specifying constraints or preferences about posterior distributions of latent variables. We show that this language is powerful enough to specify realistic prior knowledge for a variety of applications in natural language processing. Additionally, because Posterior Regularization separates model complexity from the complexity of structural constraints, it can be used for structured problems with relatively little computational overhead. We apply Posterior Regularization to several problems in natural language processing including word alignment for machine translation, transfer of linguistic resources across languages and grammar induction. Additionally, we find that we can apply Posterior Regularization to the problem of multi-view learning, achieving particularly good results for transfer learning. We also explore the theoretical relationship between Posterior Regularization and other proposed frameworks for encoding this kind of prior knowledge, and show a close relationship to Constraint Driven Learning as well as to Generalized Expectation Constraints.
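The core computational step of Posterior Regularization is a KL projection of the model posterior onto a constraint set over expectations of latent variables. A hedged sketch for a single scalar constraint, assuming the standard dual form q(z) proportional to p(z)·exp(-lam·phi(z)) with lam >= 0 (the names `phi` and `b`, and the coarse grid search over the dual variable, are illustrative choices, not the thesis's implementation):

```python
import math

# Sketch of projecting a posterior p(z|x) onto Q = {q : E_q[phi(z)] <= b}
# by searching the dual variable lam in q(z) ∝ p(z) * exp(-lam * phi(z)).
# Single scalar constraint and a coarse grid search, for illustration only.

def project(p, phi, b, lam_grid=None):
    """Return a q in Q close (in KL) to p, via a coarse dual search."""
    if lam_grid is None:
        lam_grid = [i * 0.01 for i in range(2001)]  # lam in [0, 20]
    q = p
    for lam in lam_grid:
        w = [pi * math.exp(-lam * f) for pi, f in zip(p, phi)]
        z = sum(w)
        q = [wi / z for wi in w]
        if sum(qi * f for qi, f in zip(q, phi)) <= b:
            return q  # smallest grid lam satisfying the constraint
    return q

p = [0.7, 0.2, 0.1]       # model posterior over 3 latent states
phi = [1.0, 0.0, 0.0]     # feature: indicator of latent state 0
q = project(p, phi, b=0.5)  # prior knowledge: state 0 at most 50% likely
assert sum(qi * f for qi, f in zip(q, phi)) <= 0.5 + 1e-9
```

The "language for specifying constraints" in the abstract amounts to choosing the features `phi` and bounds `b`; the projected posterior `q` then replaces the raw posterior in the E-step of learning.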
A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
This paper introduces the Multi-Genre Natural Language Inference (MultiNLI)
corpus, a dataset designed for use in the development and evaluation of machine
learning models for sentence understanding. In addition to being one of the
largest corpora available for the task of NLI, at 433k examples, this corpus
improves upon available resources in its coverage: it offers data from ten
distinct genres of written and spoken English--making it possible to evaluate
systems on nearly the full complexity of the language--and it offers an
explicit setting for the evaluation of cross-genre domain adaptation. Comment: 10 pages, 1 figure, 5 tables. v2 corrects a misreported accuracy
number for the CBOW model in the 'matched' setting. v3 adds a discussion of
the difficulty of the corpus to the analysis section. v4 is the version that
was accepted to NAACL201
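The cross-genre evaluation setting the abstract mentions is commonly realized as separate "matched" (genres seen in training) and "mismatched" (unseen genres) accuracy scores. A hypothetical sketch, with invented genres, predictions, and labels standing in for real model output:

```python
# Hypothetical sketch of matched vs. mismatched evaluation: accuracy is
# reported separately for genres present in training and genres held out,
# exposing cross-genre domain-adaptation performance. Data is invented.

train_genres = {"fiction", "travel"}

examples = [
    {"genre": "fiction",   "gold": "entailment",    "pred": "entailment"},
    {"genre": "travel",    "gold": "neutral",       "pred": "contradiction"},
    {"genre": "telephone", "gold": "neutral",       "pred": "neutral"},
    {"genre": "telephone", "gold": "contradiction", "pred": "neutral"},
]

def accuracy(split):
    return sum(e["gold"] == e["pred"] for e in split) / len(split)

matched = [e for e in examples if e["genre"] in train_genres]
mismatched = [e for e in examples if e["genre"] not in train_genres]
assert accuracy(matched) == 0.5
assert accuracy(mismatched) == 0.5
```

A gap between the two scores signals that a model relies on genre-specific cues rather than genre-general sentence understanding.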
Model Performance as an Estimator of Language Complexity
Quantifying the complexity of a natural language is a difficult task on its own, and comparing two or more languages typically requires establishing a reference point and determining the biases and context of the languages being compared. I propose a new metric for quantifying the complexity of a language without bias, in a way that allows for easy comparison between languages. I use a variety of common machine learning solutions for tasks such as part-of-speech tagging and language modeling, then analyze the learning ability of these models as parameters are adjusted. I then use the evaluation metrics from these tasks to compare similar models trained on different languages. I find that the evaluation metrics accuracy and perplexity mimic the behavior of four metrics found in linguistics literature and can be used to compare relative complexities.
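Perplexity, one of the two evaluation metrics the abstract relies on, is the exponential of the model's average negative log-likelihood per token. A hedged sketch with an add-alpha smoothed unigram model as a toy stand-in for the language models the thesis actually trains:

```python
import math
from collections import Counter

# Toy sketch of perplexity as a cross-language complexity signal: a more
# predictable corpus yields lower perplexity. The add-alpha unigram model
# is an illustrative stand-in, not the thesis's model.

def unigram_perplexity(train_tokens, test_tokens, alpha=1.0):
    """Perplexity of an add-alpha smoothed unigram model on held-out text."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens) + alpha * len(vocab)
    nll = 0.0
    for tok in test_tokens:
        p = (counts[tok] + alpha) / total
        nll -= math.log(p)
    return math.exp(nll / len(test_tokens))

predictable = "a b a b a b".split()   # low-entropy toy "language"
varied = "a b c d e f".split()        # high-entropy toy "language"
assert unigram_perplexity(predictable, predictable) < \
       unigram_perplexity(varied, varied)
```

Comparing such scores across languages only makes sense with matched model capacity and data size, which is why the abstract emphasizes analyzing models as parameters are adjusted.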