Learning Generative Models with Visual Attention
Attention has long been proposed by psychologists as important for effectively dealing with the enormous sensory stimulus available in the neocortex. Inspired by the visual attention models in computational neuroscience and the need for object-centric data in generative models, we describe a generative learning framework using attentional mechanisms. Attentional mechanisms can propagate signals from the region of interest in a scene to an aligned canonical representation, where generative modeling takes place. By ignoring background clutter, generative models can concentrate their resources on the object of interest. Our model is a proper graphical model in which a 2D similarity transformation is part of the top-down process. A ConvNet is employed to provide good initializations during posterior inference, which is based on Hamiltonian Monte Carlo. Upon learning images of faces, our model can robustly attend to the face regions of novel test subjects. More importantly, our model can learn generative models of new faces from a novel dataset of large images where the face locations are not known.
Comment: In the proceedings of Neural Information Processing Systems, 2014
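The attentional read the abstract alludes to can be sketched concretely: a 2D similarity transform (scale, rotation, translation) maps a canonical grid of coordinates into the scene, and bilinear sampling extracts an aligned window on which a generative model could then operate. The NumPy sketch below is an illustration under those assumptions, not the authors' implementation; the function name `similarity_read` and its parameterization are hypothetical.

```python
# Minimal sketch of an attentional "read": warp a region of interest
# into a canonical frame via a 2D similarity transform plus bilinear
# sampling. Illustrative only; not the paper's implementation.
import numpy as np

def similarity_read(image, scale, theta, tx, ty, out_size=24):
    """Sample an out_size x out_size canonical window from `image`
    under the similarity transform x' = s * R(theta) * x + t."""
    h, w = image.shape
    # Canonical grid over [-1, 1] x [-1, 1]
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_size),
                         np.linspace(-1, 1, out_size), indexing="ij")
    # Apply the similarity transform: rotate, scale, translate
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    u = scale * (cos_t * xs - sin_t * ys) + tx   # image x-coordinates
    v = scale * (sin_t * xs + cos_t * ys) + ty   # image y-coordinates
    # Map from [-1, 1] coordinates to pixel indices, clamped to the image
    u = np.clip((u + 1) * (w - 1) / 2, 0, w - 1)
    v = np.clip((v + 1) * (h - 1) / 2, 0, h - 1)
    # Bilinear interpolation
    u0 = np.clip(np.floor(u).astype(int), 0, w - 2)
    v0 = np.clip(np.floor(v).astype(int), 0, h - 2)
    du, dv = u - u0, v - v0
    top = image[v0, u0] * (1 - du) + image[v0, u0 + 1] * du
    bot = image[v0 + 1, u0] * (1 - du) + image[v0 + 1, u0 + 1] * du
    return top * (1 - dv) + bot * dv

# Example: attend to a rotated, zoomed-in region of a random "scene"
scene = np.random.rand(128, 128)
window = similarity_read(scene, scale=0.3, theta=np.pi / 8, tx=0.2, ty=-0.1)
print(window.shape)  # (24, 24)
```

Because the transform parameters are continuous, gradients (or HMC proposals, as in the paper) can move the window smoothly over the scene while the generative model only ever sees the aligned canonical patch.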
Finding any Waldo: zero-shot invariant and efficient visual search
Searching for a target object in a cluttered scene constitutes a fundamental
challenge in daily vision. Visual search must be selective enough to
discriminate the target from distractors, invariant to changes in the
appearance of the target, efficient to avoid exhaustive exploration of the
image, and must generalize to locate novel target objects with zero-shot
training. Previous work has focused on searching for perfect matches of a
target after extensive category-specific training. Here we show for the first
time that humans can efficiently and invariantly search for natural objects in
complex scenes. To gain insight into the mechanisms that guide visual search,
we propose a biologically inspired computational model that can locate targets
without exhaustive sampling and generalize to novel objects. The model provides
an approximation to the mechanisms integrating bottom-up and top-down signals
during search in natural scenes.
Comment: Number of figures: 6; number of supplementary figures: 1
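One way to read the integration of top-down and bottom-up signals is as target-modulated attention: the target's features are correlated against the scene's feature map, and fixations follow the peaks of the resulting attention map with inhibition of return, so the image is never scanned exhaustively. The sketch below is a hedged toy version of that idea, not the paper's model; raw pixels stand in for learned features, and `search`, `inhibit`, and the fixation budget are assumptions.

```python
# Toy target-modulated search: cross-correlate a target template with
# the scene to get an attention map, then fixate its peaks greedily
# with inhibition of return. Illustrative assumption, not the paper's model.
import numpy as np
from scipy.signal import correlate2d

def search(scene_feat, target_feat, max_fixations=5, inhibit=10):
    """Return a greedy sequence of fixations on the attention map."""
    attn = correlate2d(scene_feat - scene_feat.mean(),
                       target_feat - target_feat.mean(), mode="same")
    fixations = []
    for _ in range(max_fixations):
        y, x = np.unravel_index(np.argmax(attn), attn.shape)
        fixations.append((y, x))
        # Inhibition of return: suppress a neighborhood of the visited peak
        attn[max(0, y - inhibit):y + inhibit + 1,
             max(0, x - inhibit):x + inhibit + 1] = -np.inf
    return fixations

scene = np.random.rand(64, 64)
target = scene[20:28, 30:38].copy()  # plant the target inside the scene
print(search(scene, target)[0])      # first fixation lands near (24, 34)
```

Swapping the raw pixel arrays for feature maps from a pretrained network would give the template some invariance to appearance changes, which is the regime the paper studies.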
Generalization of form in visual pattern classification.
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob-code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
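The minimum-distance rule used to test the candidate representations is straightforward to state in code: map the test pattern and every learning signal through a representation, then assign the label of the nearest learning signal. The sketch below is an assumed illustration, with a raw pixelwise code and a crude Laplacian response standing in for the pyramid representations studied in the paper.

```python
# Minimum-distance classification under interchangeable representations.
# Assumed illustration: the representations here are simplified stand-ins.
import numpy as np

def laplacian(img):
    """Crude Laplacian response as one candidate representation."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4 * img[1:-1, 1:-1])
    return out

def min_distance_classify(test, learning_set, labels, rep=lambda x: x):
    """Assign the label of the learning signal closest to `test`
    under representation `rep` (default: raw pixelwise code)."""
    dists = [np.linalg.norm(rep(test) - rep(s)) for s in learning_set]
    return labels[int(np.argmin(dists))]

# Toy usage: two pattern "classes", one noisy probe of the second
rng = np.random.default_rng(0)
protos = [rng.random((16, 16)) for _ in range(2)]
probe = protos[1] + 0.05 * rng.standard_normal((16, 16))
print(min_distance_classify(probe, protos, ["A", "B"]))             # "B"
print(min_distance_classify(probe, protos, ["A", "B"], laplacian))  # "B"
```

The paper's test is then a matter of asking which choice of `rep` reproduces the human generalization pattern across the blob-only and high-thresholded probe sets.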
Grounding semantics in robots for Visual Question Answering
In this thesis I describe an operational implementation of an object detection and description system that is incorporated into an end-to-end Visual Question Answering system and evaluated on two visual question answering datasets for compositional language and elementary visual reasoning.
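As a rough illustration of the kind of composition described (entirely assumed; the thesis' actual system is not reproduced here), an object detector can ground a scene into symbolic facts against which simple compositional questions are answered:

```python
# Assumed toy pipeline: detections ground the scene into facts, and a
# question is answered against those facts. Not the thesis' system.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    color: str

def answer(question: str, detections: list) -> str:
    """Answer simple 'what color is the X?' questions from detections."""
    for det in detections:
        if det.label in question:
            return det.color
    return "unknown"

scene = [Detection("cube", "red"), Detection("sphere", "blue")]
print(answer("what color is the sphere?", scene))  # blue
```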