Weakly Supervised Content Selection for Improved Image Captioning
Image captioning involves identifying semantic concepts in the scene and
describing them in fluent natural language. Recent approaches do not explicitly
model the semantic concepts and train the model only for the end goal of
caption generation. Such models lack interpretability and controllability,
primarily due to sub-optimal content selection. We address this problem by
breaking down the captioning task into two simpler, manageable and more
controllable tasks -- skeleton prediction and skeleton-based caption
generation. We approach the former as a weakly supervised task, using a simple
off-the-shelf language syntax parser and avoiding the need for additional human
annotations; the latter uses a supervised-learning approach. We investigate
three methods of conditioning the caption on the skeleton: in the encoder, in
the decoder, and in both. Our compositional model generates significantly
better-quality captions on out-of-domain test images, as judged by human
annotators.
Additionally, we demonstrate that the English skeleton transfers effectively
to other languages, including French, Italian, German, Spanish, and Hindi. The
compositional nature of this approach also points toward unpaired image
captioning, reducing the dependence on expensive image-caption pairs.
Furthermore, we investigate the use of skeletons as a knob to control certain
properties of the generated caption, such as length, content, and gender
expression.
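The weak-supervision step can be pictured concretely. A minimal sketch, assuming spaCy as the off-the-shelf syntax parser and a simple noun/verb filter as the skeleton rule (the paper's exact extraction rule may differ):

import spacy

# Off-the-shelf syntax parser; no additional human annotation needed.
nlp = spacy.load("en_core_web_sm")

def extract_skeleton(caption: str) -> list[str]:
    """Keep the main content words of a caption as its skeleton."""
    doc = nlp(caption)
    # The noun/verb filter is an illustrative choice, not the paper's exact rule.
    return [tok.lemma_ for tok in doc if tok.pos_ in {"NOUN", "PROPN", "VERB"}]

print(extract_skeleton("A brown dog is catching a frisbee in the park"))
# e.g. ['dog', 'catch', 'frisbee', 'park'] -- this becomes the target for the
# skeleton-prediction stage, while the full caption supervises the
# skeleton-to-caption generation stage.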
Continual Dialogue State Tracking via Example-Guided Question Answering
Dialogue systems are frequently updated to accommodate new services, but
naively updating them by continually training on data for new services results
in diminished performance on previously learnt services. Motivated by the insight
that dialogue state tracking (DST), a crucial component of dialogue systems
that estimates the user's goal as a conversation proceeds, is a simple natural
language understanding task, we propose reformulating it as a bundle of
granular example-guided question answering tasks to minimize the task shift
between services and thus benefit continual learning. Our approach alleviates
service-specific memorization and teaches a model to contextualize the given
question and example to extract the necessary information from the
conversation. We find that a model with just 60M parameters can achieve a
significant boost by learning to learn from in-context examples retrieved by a
retriever trained to identify turns with similar dialogue state changes.
Combining our method with dialogue-level memory replay, our approach attains
state-of-the-art performance on DST continual-learning metrics without relying
on any complex regularization or parameter-expansion methods.
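The reformulation amounts to prompt construction: one granular question per slot, preceded by a retrieved in-context example. A minimal sketch, assuming a hypothetical prompt format and using t5-small (roughly 60M parameters) as a stand-in for the paper's model:

from transformers import pipeline

# t5-small stands in for the paper's small model; outputs here are illustrative.
qa = pipeline("text2text-generation", model="t5-small")

def dst_as_qa(dialogue: str, slot: str, example: str) -> str:
    # One granular question per slot, preceded by a retrieved in-context
    # example whose turn exhibited a similar dialogue-state change.
    prompt = (
        f"Example:\n{example}\n\n"
        f"Dialogue:\n{dialogue}\n\n"
        f"Question: What is the value of '{slot}'? Answer 'none' if unmentioned."
    )
    return qa(prompt, max_new_tokens=16)[0]["generated_text"]

example = ("User: I need a cab to the airport at 5pm.\n"
           "Q: What is the value of 'taxi-leave_at'? A: 5pm")
print(dst_as_qa("User: Book me a taxi to the station around noon.",
                "taxi-leave_at", example))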
Localized Symbolic Knowledge Distillation for Visual Commonsense Models
Instruction-following vision-language (VL) models offer a flexible interface
that supports a broad range of multimodal tasks in a zero-shot fashion.
However, interfaces that operate on full images do not directly enable the user
to "point to" and access specific regions within images. This capability is
important not only to support reference-grounded VL benchmarks, but also for
practical applications that require precise within-image reasoning. We build
Localized Visual Commonsense models, which allow users to specify (multiple)
regions as input. We train our model by sampling localized commonsense
knowledge from a large language model (LLM): specifically, we prompt an LLM to
collect commonsense knowledge given a global literal image description and a
local literal region description automatically generated by a set of VL models.
With a separately trained critic model that selects high-quality examples, we
find that training on the localized commonsense corpus can successfully distill
existing VL models to support a reference-as-input interface. Empirical results
and human evaluations in a zero-shot setup demonstrate that our distillation
method produces VL models with more precise reasoning than a baseline that
passes a generated referring expression to an LLM.
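The sampling-and-filtering loop can be sketched as follows; query_llm and critic_score are hypothetical stand-ins for the paper's LLM prompt and critic model:

from typing import Callable

def sample_localized_knowledge(
    global_desc: str,
    region_descs: dict[int, str],   # region id -> literal VL-model description
    query_llm: Callable[[str], str],
    critic_score: Callable[[str], float],
    threshold: float = 0.5,
) -> list[str]:
    regions = "\n".join(f"[{i}] {d}" for i, d in region_descs.items())
    prompt = (f"Image: {global_desc}\nRegions:\n{regions}\n"
              "State a commonsense inference about region [1].")
    candidate = query_llm(prompt)
    # The critic keeps only high-quality samples for the distillation corpus.
    return [candidate] if critic_score(candidate) >= threshold else []

# Toy stand-ins so the sketch runs end to end:
corpus = sample_localized_knowledge(
    "a kitchen with a person at the stove",
    {1: "a person holding a pan", 2: "a lit gas burner"},
    query_llm=lambda p: "The person in [1] is probably cooking a meal.",
    critic_score=lambda s: 0.9,
)
print(corpus)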
GEMv2: Multilingual NLG benchmarking in a single line of code
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables comparison on an equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make it easier to follow best practices in model evaluation, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics Benchmark introduces a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data-card creation and rendering tools make it easier to add new datasets to the living benchmark.
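The "single line" refers to loading a documented benchmark split through shared infrastructure. A minimal sketch, assuming GEM datasets are fetched via the Hugging Face datasets hub and that the GEM/web_nlg_en identifier is available (check the hub for exact names):

from datasets import load_dataset

# One line to obtain a documented benchmark split; the identifier below is
# an assumption -- consult the hub for the exact dataset name.
data = load_dataset("GEM/web_nlg_en", split="validation")
print(data[0])  # inspect an input/reference pair before scoring model outputs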