SCAN: Learning Hierarchical Compositional Visual Concepts
The seemingly infinite diversity of the natural world arises from a
relatively small set of coherent rules, such as the laws of physics or
chemistry. We conjecture that these rules give rise to regularities that can be
discovered through primarily unsupervised experiences and represented as
abstract concepts. If such representations are compositional and hierarchical,
they can be recombined into an exponentially large set of new concepts. This
paper describes SCAN (Symbol-Concept Association Network), a new framework for
learning such abstractions in the visual domain. SCAN learns concepts through
fast symbol association, grounding them in disentangled visual primitives that
are discovered in an unsupervised manner. Unlike state-of-the-art multimodal
generative model baselines, our approach requires very few pairings between
symbols and images and makes no assumptions about the form of symbol
representations. Once trained, SCAN is capable of multimodal bi-directional
inference, generating a diverse set of image samples from symbolic descriptions
and vice versa. It also allows for traversal and manipulation of the implicit
hierarchy of visual concepts through symbolic instructions and learnt logical
recombination operations. Such manipulations enable SCAN to break away from its
training data distribution and imagine novel visual concepts through
symbolically instructed recombination of previously learnt concepts.
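To make the symbol-grounding mechanism concrete, here is a minimal PyTorch sketch of a SCAN-style training objective. It assumes a pretrained beta-VAE supplies the frozen, disentangled visual posterior q(z_x | x); the module names, dimensions, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a SCAN-style objective (illustrative, not the
# authors' code). A pretrained beta-VAE is assumed to supply the frozen
# visual posterior q(z_x | x) as (mu_x, logvar_x).
import torch
import torch.nn as nn
import torch.nn.functional as F

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

class ScanSymbolVAE(nn.Module):
    """Maps k-hot symbol vectors y to a Gaussian q(z_y | y) over the
    latent space shared with the visual beta-VAE."""
    def __init__(self, n_symbols, z_dim, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_symbols, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_symbols))

    def forward(self, y):
        h = self.enc(y)
        return self.mu(h), self.logvar(h)

def scan_loss(model, y, mu_x, logvar_x, beta=1.0, lam=10.0):
    mu_y, logvar_y = model(y)
    z = mu_y + torch.randn_like(mu_y) * (0.5 * logvar_y).exp()
    recon = F.binary_cross_entropy_with_logits(model.dec(z), y)
    zeros = torch.zeros_like(mu_y)
    kl_prior = kl_diag_gaussians(mu_y, logvar_y, zeros, zeros).mean()
    # Grounding term: pull the broad symbol posterior over the narrower
    # visual posterior, leaving unspecified factors free to vary.
    kl_ground = kl_diag_gaussians(mu_x, logvar_x, mu_y, logvar_y).mean()
    return recon + beta * kl_prior + lam * kl_ground
```

Under these assumptions, sampling z from q(z_y | y) and decoding it through the beta-VAE's visual decoder yields diverse images matching a symbolic description, since latent factors the symbol leaves unspecified are drawn from broad distributions.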
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns
visual concepts, words, and semantic parsing of sentences without explicit
supervision on any of them; instead, our model learns by simply looking at
images and reading paired questions and answers. Our model builds an
object-based scene representation and translates sentences into executable,
symbolic programs. To bridge the learning of two modules, we use a
neuro-symbolic reasoning module that executes these programs on the latent
scene representation. Analogous to human concept learning, the perception
module learns visual concepts based on the language description of the object
being referred to. Meanwhile, the learned visual concepts facilitate learning
new words and parsing new sentences. We use curriculum learning to guide the
searching over the large compositional space of images and language. Extensive
experiments demonstrate the accuracy and efficiency of our model on learning
visual concepts, word representations, and semantic parsing of sentences.
Further, our method allows easy generalization to new object attributes,
compositions, language concepts, scenes and questions, and even new program
domains. It also empowers applications including visual question answering and
bidirectional image-text retrieval.
Comment: ICLR 2019 (Oral). Project page: http://nscl.csail.mit.edu
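The following Python sketch illustrates the quasi-symbolic execution step; it is not the authors' implementation. Concepts live as embeddings, objects as feature vectors from the perception module, and a filter operation produces a soft, differentiable mask. The operator set, similarity measure, and temperature are assumptions for exposition.

```python
# Illustrative sketch of program execution over an object-based scene
# representation, in the spirit of NS-CL.
import torch
import torch.nn.functional as F

class MiniExecutor:
    def __init__(self, concept_embeddings, object_features, tau=0.1):
        self.concepts = concept_embeddings  # name -> (d,) embedding
        self.objects = object_features      # (n_objects, d) from perception
        self.tau = tau

    def filter(self, mask, concept_name):
        # Soft, differentiable membership of each object in the concept.
        emb = self.concepts[concept_name].unsqueeze(0)
        sim = F.cosine_similarity(self.objects, emb, dim=-1)
        return mask * torch.sigmoid(sim / self.tau)

    def count(self, mask):
        return mask.sum()

    def exist(self, mask):
        return mask.max()

# Usage: run the program Filter(red) -> Filter(cube) -> Exist on random
# stand-ins for object features and concept embeddings.
d = 64
executor = MiniExecutor(
    {"red": torch.randn(d), "cube": torch.randn(d)},
    torch.randn(5, d),
)
mask = torch.ones(5)
answer = executor.exist(executor.filter(executor.filter(mask, "red"), "cube"))
```

Because every operator here is differentiable, a loss on the final answer can back-propagate into both the concept embeddings and the perception features, which is what allows language supervision to shape the visual concepts.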
Learning Compositional Visual Concepts with Mutual Consistency
Compositionality of semantic concepts in image synthesis and analysis is
appealing as it can help in decomposing known and generatively recomposing
unknown data. For instance, we may learn concepts of changing illumination,
geometry or albedo of a scene, and try to recombine them to generate physically
meaningful, but unseen data for training and testing. In practice however we
often do not have samples from the joint concept space available: We may have
data on illumination change in one data set and on geometric change in another
one without complete overlap. We pose the following question: How can we learn
two or more concepts jointly from different data sets with mutual consistency
where we do not have samples from the full joint space? We present a novel
answer in this paper based on cyclic consistency over multiple concepts,
represented individually by generative adversarial networks (GANs). Our method,
ConceptGAN, can be understood as a drop-in for data augmentation to improve
resilience for real-world applications. Qualitative and quantitative
evaluations demonstrate its efficacy in generating semantically meaningful
images, as well as one-shot face verification as an example application.
Comment: 10 pages, 8 figures, 4 tables, CVPR 201
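A minimal sketch of the cyclic-consistency idea, under our simplified reading of the abstract: each concept c has a forward generator G_c and an inverse F_c, and cycles through the unobserved joint concept space must close. The network shapes are stand-ins, and the per-domain adversarial terms are omitted for brevity.

```python
# Sketch of cycle losses tying two concept GANs together without any
# samples from the joint concept space (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in image-to-image generator."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def concept_cycle_losses(x, G1, F1, G2, F2):
    # Inverse cycles within each concept (CycleGAN-style).
    loss_inv = F.l1_loss(F1(G1(x)), x) + F.l1_loss(F2(G2(x)), x)
    # The two paths into the joint corner of the concept space should
    # agree, even though no training samples exist there.
    loss_joint = F.l1_loss(G2(G1(x)), G1(G2(x)))
    return loss_inv + loss_joint

# Usage with a random stand-in batch.
G1, F1, G2, F2 = (TinyGenerator() for _ in range(4))
x = torch.randn(2, 3, 16, 16)
loss = concept_cycle_losses(x, G1, F1, G2, F2)
```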
Solving Bongard Problems with a Visual Language and Pragmatic Reasoning
More than 50 years ago, Bongard introduced 100 visual concept learning
problems as a testbed for intelligent vision systems. These problems are now
known as Bongard problems. Although they are well known in the cognitive
science and AI communities, only moderate progress has been made towards
building systems that can solve a substantial subset of them. In the system
presented here, visual features are extracted through image processing and then
translated into a symbolic visual vocabulary. We introduce a formal language
that allows representing complex visual concepts based on this vocabulary.
Using this language and Bayesian inference, complex visual concepts can be
induced from the examples provided in each Bongard problem. Unlike in other
concept learning problems, the examples from which concepts are induced in
Bongard problems are not random; instead, they are carefully chosen to
communicate the concept, which calls for pragmatic reasoning. Taking pragmatic
reasoning into account, we find good agreement between the concepts with high
posterior probability and the solutions formulated by Bongard himself. While
this approach is far from solving all Bongard problems, it solves the largest
fraction yet.
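As a toy illustration of Bayesian induction with a pragmatically motivated likelihood, the sketch below scores simple "feature == value" rules on a two-sided example set. The rule language and the strong-sampling likelihood are stand-ins for the paper's formal visual language and its pragmatic speaker model.

```python
# Toy Bayesian rule induction in the spirit of the solver above.
# Symbolic features extracted from the left (positive) and right
# (negative) panels of a toy Bongard problem.
left = [{"shape": "triangle", "filled": True},
        {"shape": "triangle", "filled": False}]
right = [{"shape": "circle", "filled": True},
         {"shape": "circle", "filled": False}]

values = {"shape": ["triangle", "circle"], "filled": [True, False]}
rules = [(f, v) for f in values for v in values[f]]

def holds(rule, example):
    feature, value = rule
    return example[feature] == value

def likelihood(rule):
    # A rule must cover all left examples and no right examples.
    if not all(holds(rule, ex) for ex in left):
        return 0.0
    if any(holds(rule, ex) for ex in right):
        return 0.0
    # Size principle: examples chosen to communicate a concept favour
    # specific rules over broad ones, a simple pragmatic assumption.
    extension = sum(holds(rule, ex) for ex in left + right)
    return (1.0 / extension) ** len(left)

prior = 1.0 / len(rules)
scores = {r: prior * likelihood(r) for r in rules}
z = sum(scores.values()) or 1.0
posterior = {r: s / z for r, s in scores.items()}
print(max(posterior, key=posterior.get))  # -> ('shape', 'triangle')
```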