Learning with Latent Language
The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
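The core idea of the abstract — treating natural-language strings as the parameter space and searching over descriptions to minimize the interpreter's loss — can be illustrated with a minimal, hypothetical sketch. The `interpreter_loss` function below is a toy keyword-based stand-in for the pretrained language-interpretation model, chosen only so the example runs; it is not the paper's actual model.

```python
# Hypothetical sketch: learn a concept by searching over description strings.
# `interpreter_loss` is a toy proxy for a pretrained interpretation model.

def interpreter_loss(description, examples):
    """Toy scorer: count training examples whose label disagrees with a
    keyword rule implied by the description's last word."""
    keyword = description.split()[-1]  # e.g. "pictures with a dog" -> "dog"
    return sum(1 for x, y in examples if (keyword in x) != y)

def fit_concept(candidate_descriptions, examples):
    """Search directly in the space of natural-language strings for the
    description minimizing the interpreter's loss on the training examples."""
    return min(candidate_descriptions,
               key=lambda d: interpreter_loss(d, examples))

examples = [("a photo of a dog", True),
            ("a photo of a cat", False),
            ("a dog on grass", True)]
candidates = ["pictures with a cat", "pictures with a dog"]
best = fit_concept(candidates, examples)  # the learned "parameter" is a string
```

Note that no language data is needed at concept-learning time, matching the abstract's claim: language only defines the hypothesis space being searched.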
Merging Weak and Active Supervision for Semantic Parsing
A semantic parser maps natural language commands (NLs) from users to
executable meaning representations (MRs), which are then executed in a given
environment to obtain the results the user wants. Fully supervised training of
such a parser requires NL/MR pairs annotated by domain experts, which are
expensive to collect. In contrast, weakly-supervised semantic parsers are learnt
only from pairs of NL and expected execution results, leaving the MRs latent.
While weak supervision is cheaper to acquire, learning from this input poses
difficulties: parsers must search a large space with a very weak learning
signal, and it is hard to avoid spurious MRs that achieve the correct
answer in the wrong way. These factors lead to a performance gap between
parsers trained in weakly- and fully-supervised settings. To bridge this gap, we
examine the intersection between weak supervision and active learning, which
allows the learner to actively select examples and query for manual annotations
as extra supervision to improve the model trained under weak supervision. We
study different active learning heuristics for selecting examples to query, and
various forms of extra supervision for such queries. We evaluate the
effectiveness of our method on two different datasets. Experiments on
WikiSQL show that by annotating only 1.8% of examples, we improve over a
state-of-the-art weakly-supervised baseline by 6.4%, achieving an accuracy of
79.0%, which is only 1.3% away from the model trained with full supervision.
Experiments on WikiTableQuestions with human annotators show that our method
can improve the performance with only 100 active queries, especially for
weakly-supervised parsers learnt from a cold start.
Comment: AAAI 2020 Main Track [Oral] (to appear)
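One family of active-learning heuristics the abstract alludes to — selecting which examples to query for manual annotation — can be sketched with a standard margin-based uncertainty criterion: query the examples whose parser is least certain, measured by the gap between its top two candidate MR scores. The data layout and function names below are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of a margin-based active-learning selection heuristic:
# query examples where the gap between the parser's top-2 MR scores is smallest.

def margin(scores):
    """Uncertainty signal: difference between the two highest parse scores."""
    top_two = sorted(scores, reverse=True)[:2]
    return top_two[0] - top_two[1]

def select_queries(pool, budget):
    """Pick the `budget` examples with the smallest top-2 score margin,
    i.e. those on which the weakly-supervised parser is least certain."""
    return sorted(pool, key=lambda ex: margin(ex["mr_scores"]))[:budget]

pool = [
    {"nl": "how many rows",  "mr_scores": [0.90, 0.05, 0.05]},
    {"nl": "largest city",   "mr_scores": [0.40, 0.38, 0.22]},
    {"nl": "total in 2010",  "mr_scores": [0.50, 0.30, 0.20]},
]
queries = select_queries(pool, budget=1)  # examples sent to human annotators
```

Annotations obtained for the selected queries (e.g. full MRs) then serve as the extra supervision that narrows the gap to the fully-supervised model.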