7,640 research outputs found
On Conditional and Compositional Language Model Differentiable Prompting
Prompts have been shown to be an effective method to adapt a frozen
Pretrained Language Model (PLM) to perform well on downstream tasks. Prompts
can be represented by a human-engineered word sequence or by a learned
continuous embedding. In this work, we investigate conditional and
compositional differentiable prompting. We propose a new model, Prompt
Production System (PRopS), which learns to transform task instructions or input
metadata into continuous prompts that elicit task-specific outputs from the
PLM. Our model uses a modular network structure based on our neural formulation
of Production Systems, which allows the model to learn discrete rules -- neural
functions that specialize in transforming particular prompt input
patterns, making it suitable for compositional transfer learning and few-shot
learning. We present extensive empirical and theoretical analysis and show that
PRopS consistently surpasses other PLM adaptation techniques, and often
improves upon fully fine-tuned models, on compositional generalization tasks,
controllable summarization and multilingual translation, while needing fewer
trainable parameters.
Comment: Accepted at International Joint Conference on Artificial Intelligence
(IJCAI) 202
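To make the idea concrete, the following is a minimal, hypothetical sketch of conditional prompt production: an instruction (or metadata) encoding is routed through a small set of rule modules whose soft mixture yields continuous prompt vectors for a frozen PLM. All names, dimensions, and the soft rule selection below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the PRopS code): map an instruction encoding to
# continuous prompt vectors by softly selecting among a few "rule" modules,
# in the spirit of a neural production system.
import torch
import torch.nn as nn


class ConditionalPromptProducer(nn.Module):
    def __init__(self, instr_dim=256, plm_dim=768, prompt_len=10, n_rules=4):
        super().__init__()
        # Each rule is a small neural function that can specialize for some input pattern.
        self.rules = nn.ModuleList(
            nn.Sequential(nn.Linear(instr_dim, plm_dim * prompt_len), nn.Tanh())
            for _ in range(n_rules)
        )
        # Scores decide which rule(s) fire for a given instruction encoding.
        self.rule_scorer = nn.Linear(instr_dim, n_rules)
        self.prompt_len, self.plm_dim = prompt_len, plm_dim

    def forward(self, instr_enc):                            # (batch, instr_dim)
        weights = self.rule_scorer(instr_enc).softmax(-1)    # (batch, n_rules)
        outs = torch.stack([r(instr_enc) for r in self.rules], dim=1)
        prompt = (weights.unsqueeze(-1) * outs).sum(1)       # soft mixture of rule outputs
        return prompt.view(-1, self.prompt_len, self.plm_dim)


# The produced prompt would be prepended to the frozen PLM's input embeddings;
# only the producer's parameters are trained.
producer = ConditionalPromptProducer()
instr_enc = torch.randn(2, 256)              # e.g. an encoded task instruction
prompt_embeds = producer(instr_enc)          # (2, 10, 768)
```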
Learning with Latent Language
The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
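The search step described above can be sketched as follows: given a handful of training examples and a pretrained interpreter, candidate natural-language descriptions are scored by the interpreter's loss and the best-scoring description is kept as the learned concept. The candidate proposal mechanism and the toy interpreter below are placeholders, not the paper's models.

```python
# Minimal sketch of "search in description space": pick the description whose
# interpreter minimizes loss on the few available training examples.
from typing import Callable, Iterable, List, Tuple


def fit_by_description_search(
    candidates: Iterable[str],                       # e.g. sampled from a proposal model
    train_set: List[Tuple[object, object]],          # few (input, label) examples
    interpreter_loss: Callable[[str, object, object], float],
) -> str:
    """Return the candidate description with the lowest total training loss."""
    def total_loss(desc: str) -> float:
        return sum(interpreter_loss(desc, x, y) for x, y in train_set)
    return min(candidates, key=total_loss)


# Toy usage with a stand-in interpreter that checks string containment.
examples = [("red square", 1), ("red circle", 1), ("blue circle", 0)]
toy_loss = lambda desc, x, y: 0.0 if (desc in x) == bool(y) else 1.0
best = fit_by_description_search(["red", "blue", "square"], examples, toy_loss)
print(best)   # "red" is the only description consistent with all toy examples
```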
The Fast and the Flexible: training neural networks to learn to follow instructions from small data
Learning to follow human instructions is a long-pursued goal in artificial
intelligence. The task becomes particularly challenging if no prior knowledge
of the employed language is assumed while relying only on a handful of examples
to learn from. Work in the past has relied on hand-coded components or manually
engineered features to provide strong inductive biases that make learning in
such situations possible. In contrast, here we seek to establish whether this
knowledge can be acquired automatically by a neural network system through a
two-phase training procedure: a (slow) offline learning stage, where the network
learns the general structure of the task, and a (fast) online adaptation phase,
where the network learns the language of a new speaker. Controlled
experiments show that when the network is exposed to familiar instructions that
contain novel words, the model adapts very efficiently to the new
vocabulary. Moreover, even for human speakers whose language usage can depart
significantly from our artificial training language, our network can still make
use of its automatically acquired inductive bias to learn to follow
instructions more effectively.
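The two-phase regime can be sketched roughly as below; freezing everything except the word embeddings during the fast online phase is an illustrative assumption, not necessarily the paper's exact adaptation scheme, and the model and data are toy stand-ins.

```python
# Hedged sketch of a two-phase procedure: a slow offline stage over many speakers,
# then fast online adaptation in which only the word embeddings are updated for a
# new speaker's vocabulary (an assumption for illustration).
import torch
import torch.nn as nn


class InstructionFollower(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128, n_actions=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.policy = nn.Linear(hidden, n_actions)

    def forward(self, tokens):                       # (batch, seq_len) of word ids
        _, h = self.encoder(self.embed(tokens))
        return self.policy(h[-1])                    # action logits


def train(model, batches, params, lr, steps):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        for tokens, actions in batches:
            opt.zero_grad()
            loss_fn(model(tokens), actions).backward()
            opt.step()


model = InstructionFollower()
offline_data = [(torch.randint(0, 1000, (8, 5)), torch.randint(0, 16, (8,)))]
speaker_data = [(torch.randint(0, 1000, (2, 5)), torch.randint(0, 16, (2,)))]

# Slow offline phase: learn the general task structure end to end.
train(model, offline_data, model.parameters(), lr=1e-3, steps=5)
# Fast online phase: adapt only the embeddings to the new speaker's words.
train(model, speaker_data, model.embed.parameters(), lr=1e-2, steps=20)
```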
FiLM: Visual Reasoning with a General Conditioning Layer
We introduce a general-purpose conditioning method for neural networks called
FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network
computation via a simple, feature-wise affine transformation based on
conditioning information. We show that FiLM layers are highly effective for
visual reasoning - answering image-related questions which require a
multi-step, high-level process - a task which has proven difficult for standard
deep learning methods that do not explicitly model reasoning. Specifically, we
show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error
for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are
robust to ablations and architectural modifications, and 4) generalize well to
challenging, new data from few examples or even zero-shot.
Comment: AAAI 2018. Code available at http://github.com/ethanjperez/film.
Extends arXiv:1707.0301
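The conditioning mechanism is simple enough to sketch directly: the conditioning input (e.g. an encoded question) is mapped to a per-channel scale gamma and shift beta that modulate the feature maps. The module below is a minimal sketch of that feature-wise affine transformation, not the released implementation.

```python
# Minimal FiLM layer sketch: conditioning information produces a per-channel
# scale (gamma) and shift (beta) applied to the network's feature maps.
import torch
import torch.nn as nn


class FiLM(nn.Module):
    def __init__(self, cond_dim, n_channels):
        super().__init__()
        # One affine pair (gamma, beta) per feature/channel.
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * n_channels)

    def forward(self, features, cond):
        # features: (batch, channels, H, W); cond: (batch, cond_dim), e.g. a question encoding
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]              # broadcast over spatial dimensions
        beta = beta[:, :, None, None]
        return gamma * features + beta


film = FiLM(cond_dim=128, n_channels=64)
feats = torch.randn(2, 64, 14, 14)                   # conv feature maps
question = torch.randn(2, 128)                       # conditioning vector
modulated = film(feats, question)                    # same shape as feats
```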
SCAN: Learning Hierarchical Compositional Visual Concepts
The seemingly infinite diversity of the natural world arises from a
relatively small set of coherent rules, such as the laws of physics or
chemistry. We conjecture that these rules give rise to regularities that can be
discovered through primarily unsupervised experiences and represented as
abstract concepts. If such representations are compositional and hierarchical,
they can be recombined into an exponentially large set of new concepts. This
paper describes SCAN (Symbol-Concept Association Network), a new framework for
learning such abstractions in the visual domain. SCAN learns concepts through
fast symbol association, grounding them in disentangled visual primitives that
are discovered in an unsupervised manner. Unlike state-of-the-art multimodal
generative model baselines, our approach requires very few pairings between
symbols and images and makes no assumptions about the form of symbol
representations. Once trained, SCAN is capable of multimodal bi-directional
inference, generating a diverse set of image samples from symbolic descriptions
and vice versa. It also allows for traversal and manipulation of the implicit
hierarchy of visual concepts through symbolic instructions and learnt logical
recombination operations. Such manipulations enable SCAN to break away from its
training data distribution and imagine novel visual concepts through
symbolically instructed recombination of previously learnt concepts.
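As a rough intuition for symbol-to-concept grounding, the toy sketch below associates each symbol with the latent dimensions it reliably constrains across its few paired images, assuming a pretrained disentangled visual encoder is available; this variance heuristic and the recombination-by-merging step are drastic simplifications for illustration, not the SCAN architecture.

```python
# Toy sketch of fast symbol association over disentangled latents (not SCAN itself):
# a symbol is linked to the latent dimensions that stay nearly constant across its
# paired images; recombining symbols merges their latent constraints.
import torch


def associate(symbol_to_images, encode, prior_var=1.0, threshold=0.5):
    """Map each symbol to {latent dim: mean value} for the dims it pins down."""
    concepts = {}
    for symbol, images in symbol_to_images.items():
        z = torch.stack([encode(img) for img in images])       # (n_images, latent_dim)
        informative = z.var(dim=0) < threshold * prior_var     # low variance => constrained dim
        concepts[symbol] = {int(d): float(z[:, d].mean()) for d in informative.nonzero().flatten()}
    return concepts


def recombine(concepts, *symbols):
    """Logical AND of concepts: merge the latent constraints of several symbols."""
    merged = {}
    for s in symbols:
        merged.update(concepts[s])
    return merged


# Toy usage: a stand-in "encoder" that just returns the image vector as latents.
images = {"red": [torch.tensor([1.0, float(i), -float(i)]) for i in range(4)]}
concepts = associate(images, encode=lambda img: img)
print(recombine(concepts, "red"))   # {0: 1.0}: only dimension 0 is constant across the pairs
```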
- …