Greedy Structure Learning of Hierarchical Compositional Models
In this work, we consider the problem of learning a hierarchical generative model of an object from a set of images which show examples of the object in the presence of variable background clutter. Existing approaches to this problem are limited by making strong a-priori assumptions about the object's geometric structure and require segmented training data for learning. In this paper, we propose a novel framework for learning hierarchical compositional models (HCMs) which do not suffer from the mentioned limitations. We present a generalized formulation of HCMs and describe a greedy structure learning framework that consists of two phases: bottom-up part learning and top-down model composition. Our framework integrates the foreground-background segmentation problem into the structure learning task via a background model. As a result, we can jointly optimize for the number of layers in the hierarchy, the number of parts per layer, and a foreground-background segmentation based on class labels only. We show that the learned HCMs are semantically meaningful and achieve competitive results when compared to other generative object models at object classification on a standard transfer learning dataset.
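The greedy scheme described above can be illustrated with a toy version of one bottom-up step, where candidate parts are kept only while their foreground evidence beats the background model's explanation. The function name, the score dictionaries, and the stopping rule below are illustrative assumptions, not the paper's exact objective:

```python
def greedy_part_selection(part_scores, bg_score, max_parts):
    """Toy analogue of one bottom-up structure-learning step: greedily
    add the part whose foreground log-evidence most exceeds what the
    background model assigns, and stop once the background explains the
    best remaining part at least as well."""
    chosen, remaining = [], dict(part_scores)
    while remaining and len(chosen) < max_parts:
        best = max(remaining, key=remaining.get)  # highest foreground evidence
        if remaining[best] <= bg_score:
            break  # background model explains it better: stop growing the layer
        chosen.append(best)
        del remaining[best]
    return chosen
```

For example, with foreground scores {'wheel': 3.0, 'window': 2.0, 'clutter': 0.2} and a background score of 0.5, only 'wheel' and 'window' survive; the joint optimization over layers and part counts then repeats this trade-off at every level.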
Do Multi-Sense Embeddings Improve Natural Language Understanding?
Learning a distinct representation for each sense of an ambiguous word could
lead to more powerful and fine-grained models of vector-space representations.
Yet while 'multi-sense' methods have been proposed and tested on artificial
word-similarity tasks, we don't know if they improve real natural language
understanding tasks. In this paper we introduce a multi-sense embedding model
based on Chinese Restaurant Processes that achieves state-of-the-art
performance on matching human word similarity judgments, and propose a
pipelined architecture for incorporating multi-sense embeddings into language
understanding.
We then test the performance of our model on part-of-speech tagging, named
entity recognition, sentiment analysis, semantic relation identification and
semantic relatedness, controlling for embedding dimensionality. We find that
multi-sense embeddings do improve performance on some tasks (part-of-speech
tagging, semantic relation identification, semantic relatedness) but not on
others (named entity recognition, various forms of sentiment analysis). We
discuss how these differences may be caused by the different role of word sense
information in each of the tasks. The results highlight the importance of
testing embedding models in real applications.
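The Chinese-Restaurant-Process idea behind such a sense model can be sketched as follows: each occurrence of an ambiguous word joins an existing sense cluster with probability proportional to the cluster's size times how well the cluster fits the occurrence's context vector, or opens a new sense with probability proportional to a concentration parameter alpha. This is a minimal sketch of the CRP mechanism; the similarity measure and all names are assumptions, not the paper's model:

```python
import numpy as np

def crp_senses(context_vecs, alpha=1.0, seed=0):
    """Assign each context vector to a sense cluster via a CRP: join
    sense k with probability proportional to count_k * fit_k, or open a
    new sense with probability proportional to alpha, where fit_k is
    exp(cosine similarity to the sense's mean context vector)."""
    rng = np.random.default_rng(seed)
    sums, counts, senses = [], [], []
    for v in context_vecs:
        weights = []
        for s, c in zip(sums, counts):
            mean = s / c
            cos = v @ mean / (np.linalg.norm(v) * np.linalg.norm(mean) + 1e-12)
            weights.append(c * np.exp(cos))
        weights.append(alpha)                      # the "new table" option
        probs = np.array(weights) / np.sum(weights)
        k = int(rng.choice(len(probs), p=probs))
        if k == len(sums):                         # open a new sense
            sums.append(v.astype(float).copy())
            counts.append(1)
        else:                                      # join an existing sense
            sums[k] = sums[k] + v
            counts[k] += 1
        senses.append(k)
    return senses
```

Each sense cluster would then receive its own embedding; the pipelined architecture feeds the per-occurrence sense assignment, rather than a single word vector, into the downstream task.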
Exploiting Compositionality to Explore a Large Space of Model Structures
The recent proliferation of richly structured probabilistic models raises the question of how to automatically determine an appropriate model for a dataset. We investigate this question for a space of matrix decomposition models which can express a variety of widely used models from unsupervised learning. To enable model selection, we organize these models into a context-free grammar which generates a wide variety of structures through the compositional application of a few simple rules. We use our grammar to generically and efficiently infer latent components and estimate predictive likelihood for nearly 2500 structures using a small toolbox of reusable algorithms. Using a greedy search over our grammar, we automatically choose the decomposition structure from raw data by evaluating only a small fraction of all models. The proposed method typically finds the correct structure for synthetic data and backs off gracefully to simpler models under heavy noise. It learns sensible structures for datasets as diverse as image patches, motion capture, 20 Questions, and U.S. Senate votes, all using exactly the same code.
United States. Army Research Office (ARO grant W911NF-08-1-0242)
American Society for Engineering Education. National Defense Science and Engineering Graduate Fellowship
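The greedy search over a compositional grammar can be sketched symbolically: structures are strings over component symbols, each production rewrites a leaf symbol G, and at every level only the single best-scoring expansion is kept. The production set and scoring function below are placeholders (the paper scores structures by predictive likelihood on held-out data):

```python
def expand(structure, productions):
    """All structures reachable by rewriting one 'G' leaf with one production."""
    return [structure[:i] + rhs + structure[i + 1:]
            for i, ch in enumerate(structure) if ch == 'G'
            for rhs in productions]

def greedy_search(score, productions, start='G', depth=3):
    """Keep the best candidate at each level; stop when no expansion improves,
    which is what lets the method back off gracefully to simpler models."""
    best, best_score = start, score(start)
    for _ in range(depth):
        candidates = expand(best, productions)
        if not candidates:
            break
        cand = max(candidates, key=score)
        if score(cand) <= best_score:
            break  # no expansion beats the simpler structure
        best, best_score = cand, score(cand)
    return best
```

With hypothetical productions ['(MG+G)', '(GM+G)', 'exp(G)'] and a toy score that rewards 'M' symbols but penalizes length, two greedy levels compose a nested two-component structure; a score that only penalizes length leaves the trivial structure 'G' untouched.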
Visual Concepts and Compositional Voting
It is very attractive to formulate vision in terms of pattern theory
\cite{Mumford2010pattern}, where patterns are defined hierarchically by
compositions of elementary building blocks. But applying pattern theory to real
world images is currently less successful than discriminative methods such as
deep networks. Deep networks, however, are black-boxes which are hard to
interpret and can easily be fooled by adding occluding objects. It is natural
to wonder whether by better understanding deep networks we can extract building
blocks which can be used to develop pattern theoretic models. This motivates us
to study the internal representations of a deep network using vehicle images
from the PASCAL3D+ dataset. We use clustering algorithms to study the
population activities of the features and extract a set of visual concepts
which we show are visually tight and correspond to semantic parts of vehicles.
To analyze this we annotate these vehicles by their semantic parts to create a
new dataset, VehicleSemanticParts, and evaluate visual concepts as unsupervised
part detectors. We show that visual concepts perform fairly well but are
outperformed by supervised discriminative methods such as Support Vector
Machines (SVM). We next give a more detailed analysis of visual concepts and
how they relate to semantic parts. Following this, we use the visual concepts
as building blocks for a simple pattern theoretic model, which we call
compositional voting. In this model several visual concepts combine to detect
semantic parts. We show that this approach is significantly better than
discriminative methods like SVM and deep networks trained specifically for
semantic part detection. Finally, we return to studying occlusion by creating
an annotated dataset with occlusion, called VehicleOcclusion, and show that
compositional voting outperforms even deep networks when the amount of
occlusion becomes large.
Comment: Accepted by Annals of Mathematical Sciences and Applications
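The voting idea reads naturally as evidence accumulation: each visual concept's response map casts a weighted vote for the part center from its expected spatial offset, and the votes are summed into a part-detection score map. A minimal sketch, where the offset-and-weight scheme is an illustrative simplification of the paper's model:

```python
import numpy as np

def compositional_vote(concept_maps, offsets, weights):
    """Accumulate evidence for a semantic part at every image location by
    letting each visual concept cast a weighted vote from its expected
    spatial offset relative to the part center.
    concept_maps: dict concept -> 2D response map (evidence per location)
    offsets:      dict concept -> (dy, dx) expected displacement of the
                  concept relative to the part center
    weights:      dict concept -> vote weight"""
    h, w = next(iter(concept_maps.values())).shape
    score = np.zeros((h, w))
    for name, resp in concept_maps.items():
        dy, dx = offsets[name]
        # shift the response so each concept votes at the implied part center
        shifted = np.roll(np.roll(resp, -dy, axis=0), -dx, axis=1)
        score += weights[name] * shifted
    return score
```

Because several concepts must agree before the score peaks, occluding one concept's evidence degrades the vote total gracefully instead of destroying the detection, which is the intuition behind the robustness result above.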
Building Program Vector Representations for Deep Learning
Deep learning has made significant breakthroughs in various fields of
artificial intelligence. Advantages of deep learning include the ability to
capture highly complicated features, weak involvement of human engineering,
etc. However, it is still virtually impossible to use deep learning to analyze
programs, since deep architectures cannot be trained effectively with pure
backpropagation. In this pioneering paper, we propose the "coding criterion" to
build program vector representations, which are the premise of deep learning
for program analysis. Our representation learning approach directly makes deep
learning a reality in this new field. We evaluate the learned vector
representations both qualitatively and quantitatively. We conclude, based on
the experiments, that the coding criterion is successful in building program
representations. To evaluate whether deep learning is beneficial for program
analysis, we feed the representations to deep neural networks, and achieve
higher accuracy in the program classification task than "shallow" methods, such
as logistic regression and the support vector machine. This result confirms the
feasibility of deep learning to analyze programs. It also gives primary
evidence of its success in this new field. We believe deep learning will become
an outstanding technique for program analysis in the near future.
Comment: This paper was submitted to ICSE'1
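The coding criterion can be sketched in a few lines: an AST node's vector should be reconstructable from its children's vectors through one shared learned layer, and training minimizes the squared reconstruction distance over all nodes. Averaging the children below is a simplification (the paper weights each child, e.g. by subtree size), and the names are illustrative:

```python
import numpy as np

def coding_loss(parent_vec, child_vecs, W, b):
    """Squared distance between a parent AST node's vector and the
    one-layer encoding of its (here: averaged) children's vectors.
    Summing this loss over all AST nodes and minimizing it with respect
    to both the node vectors and the shared parameters (W, b) yields
    the program vector representations."""
    encoded = np.tanh(W @ np.mean(child_vecs, axis=0) + b)
    return float(np.sum((parent_vec - encoded) ** 2))
```

The learned node vectors can then be fed into a deep classifier, which is how the program-classification comparison against logistic regression and SVMs is set up.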
Compositional Convolutional Neural Networks: A Deep Architecture with Innate Robustness to Partial Occlusion
Recent findings show that deep convolutional neural networks (DCNNs) do not
generalize well under partial occlusion. Inspired by the success of
compositional models at classifying partially occluded objects, we propose to
integrate compositional models and DCNNs into a unified deep model with innate
robustness to partial occlusion. We term this architecture Compositional
Convolutional Neural Network. In particular, we propose to replace the fully
connected classification head of a DCNN with a differentiable compositional
model. The generative nature of the compositional model enables it to localize
occluders and subsequently focus on the non-occluded parts of the object. We
conduct classification experiments on artificially occluded images as well as
real images of partially occluded objects from the MS-COCO dataset. The results
show that DCNNs do not classify occluded objects robustly, even when trained
with data that is strongly augmented with partial occlusions. Our proposed
model outperforms standard DCNNs by a large margin at classifying partially
occluded objects, even when it has not been exposed to occluded objects during
training. Additional experiments demonstrate that CompositionalNets can also
localize the occluders accurately, despite being trained with class labels
only. The code used in this work is publicly available.
Comment: CVPR 2020; Code is available at
https://github.com/AdamKortylewski/CompositionalNets; Supplementary material:
https://adamkortylewski.com/data/compnet_supp.pd
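The occlusion-robust head can be sketched as a competition between two explanations for each spatial feature vector: a class-specific part template, or a shared occluder model. The class score sums the better explanation over positions, and the positions where the occluder wins form an occlusion map. Everything below (the names, the dot-product fit, the scalar occluder likelihood) is an illustrative assumption rather than the paper's exact formulation:

```python
import numpy as np

def compositional_head(features, part_templates, occluder_ll):
    """Sketch of a generative classification head: each spatial feature
    vector is explained either by a class-specific part template or by a
    shared occluder model, and the class score sums the better of the two
    explanations over all positions.
    features:       (H, W, D) backbone feature vectors
    part_templates: dict class -> (H, W, D) expected part activations
    occluder_ll:    scalar log-likelihood under the occluder model"""
    scores, occ_maps = {}, {}
    for cls, tmpl in part_templates.items():
        part_ll = np.einsum('hwd,hwd->hw', features, tmpl)  # per-position fit
        explained = np.maximum(part_ll, occluder_ll)        # best explanation
        scores[cls] = float(explained.sum())
        occ_maps[cls] = part_ll < occluder_ll  # True where occluder wins
    return scores, occ_maps
```

Because occluded positions fall back to the occluder term instead of dragging the class score down, classification degrades gracefully under partial occlusion, and the boolean map localizes the occluder without any occlusion labels at training time.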