Does Entity Abstraction Help Generative Transformers Reason?
We study the utility of incorporating entity type abstractions into
pre-trained Transformers and test these methods on four NLP tasks requiring
different forms of logical reasoning: (1) compositional language understanding
with text-based relational reasoning (CLUTRR), (2) abductive reasoning
(ProofWriter), (3) multi-hop question answering (HotpotQA), and (4)
conversational question answering (CoQA). We propose and empirically explore
three ways to add such abstraction: (i) as additional input embeddings, (ii) as
a separate sequence to encode, and (iii) as an auxiliary prediction task for
the model. Overall, our analysis demonstrates that models with abstract entity
knowledge perform better than those without it. The best abstraction-aware models
achieve an overall accuracy of 88.8% on CLUTRR and 91.8% on ProofWriter,
compared with 62.9% and 89.8% for the baseline model. However, for
HotpotQA and CoQA, we find that F1 scores improve by only 0.5% on average. Our
results suggest that the benefit of explicit abstraction is significant in
formally defined logical reasoning settings requiring many reasoning hops, but
indicate that it is less beneficial for NLP tasks with less formal logical
structure.
Comment: TMLR 2022; 28 pages; 9 tables; 1 figure
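As an illustration of option (i) above, a minimal sketch (not the authors' implementation) could sum a coarse entity-type embedding with each token embedding before the Transformer encoder; the vocabulary size, type inventory, and dimensions below are placeholder assumptions.

```python
# Minimal sketch of option (i): entity-type abstraction added as an extra
# input embedding.  Sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE, NUM_ENTITY_TYPES, HIDDEN = 30522, 8, 256

class AbstractionAwareEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)
        # One embedding per coarse entity type (e.g. PERSON, LOCATION, NONE).
        self.type_emb = nn.Embedding(NUM_ENTITY_TYPES, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids, entity_type_ids):
        # Sum token and entity-type embeddings, then encode as usual.
        x = self.tok_emb(token_ids) + self.type_emb(entity_type_ids)
        return self.encoder(x)

tokens = torch.randint(0, VOCAB_SIZE, (1, 12))
types = torch.randint(0, NUM_ENTITY_TYPES, (1, 12))
out = AbstractionAwareEncoder()(tokens, types)   # shape (1, 12, 256)
```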
Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Existing visual reasoning datasets, such as Visual Question Answering (VQA),
often suffer from biases conditioned on the question, image, or answer
distributions. The recently proposed CLEVR dataset addresses these limitations
and requires fine-grained reasoning, but it is synthetic and uses similar
objects and sentence structures throughout.
In this paper, we introduce a new inference task, Visual Entailment (VE),
consisting of image-sentence pairs in which the premise is defined by an image
rather than by a natural language sentence, as in traditional Textual
Entailment tasks. The goal of a trained VE model is to predict whether the image
semantically entails the text. To realize this task, we build a dataset SNLI-VE
based on the Stanford Natural Language Inference corpus and Flickr30k dataset.
We evaluate various existing VQA baselines and build a model, the Explainable
Visual Entailment (EVE) system, to address the VE task. EVE achieves up to 71%
accuracy and outperforms several other state-of-the-art VQA-based models.
Finally, we demonstrate the explainability of EVE through cross-modal attention
visualizations. The SNLI-VE dataset is publicly available at
https://github.com/necla-ml/SNLI-VE
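To make the task setup concrete, the following is a minimal sketch of a VE classifier, not the EVE architecture itself: a pooled image feature serves as the premise, a sentence encoding as the hypothesis, and the fused pair is classified into entailment, neutral, or contradiction. All feature dimensions are illustrative assumptions.

```python
# Minimal sketch of the VE setup (not the EVE model): fuse an image premise
# with a text hypothesis and predict one of three entailment labels.
import torch
import torch.nn as nn

IMG_DIM, TXT_DIM, HIDDEN, NUM_LABELS = 2048, 300, 512, 3

class SimpleVEClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(IMG_DIM, HIDDEN)
        self.txt_enc = nn.GRU(TXT_DIM, HIDDEN, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, NUM_LABELS))

    def forward(self, img_feat, hyp_word_vecs):
        # img_feat: (B, IMG_DIM) premise; hyp_word_vecs: (B, T, TXT_DIM) hypothesis.
        _, h = self.txt_enc(hyp_word_vecs)
        fused = torch.cat([self.img_proj(img_feat), h[-1]], dim=-1)
        return self.classifier(fused)   # logits over {entailment, neutral, contradiction}

logits = SimpleVEClassifier()(torch.randn(2, IMG_DIM), torch.randn(2, 7, TXT_DIM))
```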
A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems
We propose a set of compositional design patterns to describe a large variety
of systems that combine statistical techniques from machine learning with
symbolic techniques from knowledge representation. As in other areas of
computer science (knowledge engineering, software engineering, ontology
engineering, process mining and others), such design patterns help to
systematize the literature, clarify which combinations of techniques serve
which purposes, and encourage re-use of software components. We have validated
our set of compositional design patterns against a large body of recent
literature.
Comment: 12 pages, 55 references
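As a toy illustration of one such composition (a statistical "data-to-symbol" box feeding a symbolic "symbol-to-symbols" box), the sketch below is a hypothetical example and not taken from the paper; the classifier stub and rule base are invented for demonstration.

```python
# Toy hybrid pattern: a learned component maps raw data to a symbol,
# and a symbolic component reasons over that symbol.  Everything here
# is a placeholder assumption for illustration.
from typing import Callable, List

def hybrid_pipeline(classify: Callable[[bytes], str],
                    rules: dict) -> Callable[[bytes], List[str]]:
    """Compose an ML 'data -> symbol' box with a KR 'symbol -> symbols' box."""
    def run(raw: bytes) -> List[str]:
        symbol = classify(raw)          # statistical, learned step
        return rules.get(symbol, [])    # symbolic, rule-based step
    return run

# Toy usage: a stub classifier plus a tiny rule base.
pipeline = hybrid_pipeline(
    classify=lambda raw: "cat" if b"meow" in raw else "unknown",
    rules={"cat": ["mammal", "pet"]})
print(pipeline(b"...meow..."))   # ['mammal', 'pet']
```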
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns
visual concepts, words, and semantic parsing of sentences without explicit
supervision on any of them; instead, our model learns by simply looking at
images and reading paired questions and answers. Our model builds an
object-based scene representation and translates sentences into executable,
symbolic programs. To bridge the learning of two modules, we use a
neuro-symbolic reasoning module that executes these programs on the latent
scene representation. Analogous to human concept learning, the perception
module learns visual concepts based on the language description of the object
being referred to. Meanwhile, the learned visual concepts facilitate learning
new words and parsing new sentences. We use curriculum learning to guide the
search over the large compositional space of images and language. Extensive
experiments demonstrate the accuracy and efficiency of our model on learning
visual concepts, word representations, and semantic parsing of sentences.
Further, our method allows easy generalization to new object attributes,
compositions, language concepts, scenes and questions, and even new program
domains. It also empowers applications including visual question answering and
bidirectional image-text retrieval.
Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
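To illustrate the idea of executing symbolic programs on an object-based scene representation, the sketch below uses discrete, hand-written operators; NS-CL itself executes such programs with differentiable, probabilistic operators over latent object features. The scene, attributes, and example program are illustrative assumptions.

```python
# Toy execution of a symbolic program over an object-based scene.
# NS-CL's executor is differentiable; this discrete version only
# illustrates the program/representation split.
scene = [
    {"color": "red", "shape": "cube"},
    {"color": "blue", "shape": "sphere"},
    {"color": "red", "shape": "sphere"},
]

def run_program(program, objects):
    """Execute a small filter/count program over the scene objects."""
    selected = list(objects)
    for op, arg in program:
        if op == "filter":          # keep objects matching the concept
            attr, value = arg
            selected = [o for o in selected if o[attr] == value]
        elif op == "count":         # terminal op: how many objects remain
            return len(selected)
    return selected

# "How many red objects are there?" expressed as a two-step program.
print(run_program([("filter", ("color", "red")), ("count", None)], scene))  # 2
```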
Visual Entailment Task for Visually-Grounded Language Learning
We introduce a new inference task, Visual Entailment (VE), which differs from
traditional Textual Entailment (TE) in that the premise is defined by an image
rather than a natural language sentence. A novel
dataset SNLI-VE (publicly available at https://github.com/necla-ml/SNLI-VE) is
proposed for VE tasks based on the Stanford Natural Language Inference corpus
and Flickr30k. We introduce a differentiable architecture called the
Explainable Visual Entailment model (EVE) to tackle the VE problem. EVE and
several other state-of-the-art visual question answering (VQA) based models are
evaluated on the SNLI-VE dataset, facilitating grounded language understanding
and providing insights into how modern VQA-based models perform.
Comment: 4 pages, accepted at the Visually Grounded Interaction and Language
(ViGIL) workshop at NeurIPS 2018