Human-Machine CRFs for Identifying Bottlenecks in Holistic Scene Understanding
Recent trends in image understanding have pushed for holistic scene
understanding models that jointly reason about various tasks such as object
detection, scene recognition, shape analysis, contextual reasoning, and local
appearance-based classifiers. In this work, we are interested in understanding
the roles of these different tasks in improved scene understanding, in
particular semantic segmentation, object detection and scene recognition.
Towards this goal, we "plug-in" human subjects for each of the various
components in a state-of-the-art conditional random field model. Comparisons
among various hybrid human-machine CRFs give us indications of how much "head
room" there is to improve scene understanding by focusing research efforts on
various individual tasks.
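The "plug-in" idea above can be illustrated with a toy pairwise CRF whose unary component is swappable between machine scores and human-provided responses. This is a minimal sketch under assumed data, not the paper's model; all names (`crf_energy`, the cost tables) and numeric values are hypothetical.

```python
# Minimal sketch (hypothetical): a pairwise CRF energy over image segments
# whose unary term can come either from a machine classifier or from
# "plugged-in" human answers, letting the two hybrids be compared.

def crf_energy(labels, unary, pairwise, edges):
    """Energy = sum of unary costs + sum of pairwise costs over edges."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(pairwise[labels[i]][labels[j]] for i, j in edges)
    return e

# Two sources for the same unary component (illustrative costs):
machine_unary = [{0: 0.2, 1: 1.5}, {0: 1.0, 1: 0.3}]  # classifier scores
human_unary   = [{0: 0.0, 1: 2.0}, {0: 1.2, 1: 0.1}]  # human responses
potts = {0: {0: 0.0, 1: 0.5}, 1: {0: 0.5, 1: 0.0}}    # smoothness prior
edges = [(0, 1)]

labels = [0, 1]
print(crf_energy(labels, machine_unary, potts, edges))  # machine component
print(crf_energy(labels, human_unary, potts, edges))    # human plugged in
```

Comparing the minimizers of the two energies over many components is the kind of "head room" analysis the abstract describes.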
Multi-Image Semantic Matching by Mining Consistent Features
This work proposes a multi-image matching method to estimate semantic
correspondences across multiple images. In contrast to the previous methods
that optimize all pairwise correspondences, the proposed method identifies and
matches only a sparse set of reliable features in the image collection. In this
way, the proposed method is able to prune non-repeatable features and is
highly scalable to handle thousands of images. We additionally propose a
low-rank constraint to ensure the geometric consistency of feature
correspondences over the whole image collection. Besides the competitive
performance on multi-graph matching and semantic flow benchmarks, we also
demonstrate the applicability of the proposed method for reconstructing
object-class models and discovering object-class landmarks from images without
using any annotation.

Comment: CVPR 201
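The geometric-consistency idea behind the low-rank constraint can be sketched as follows: if every pairwise correspondence matrix factors through a common "universe" of k features, X_ij = A_i A_j^T, then the stacked block matrix of all X_ij has rank at most k. This is an illustrative toy of that property, not the paper's optimization; the array names and sizes are assumptions.

```python
import numpy as np

# Toy illustration of cycle consistency via low rank: when all pairwise
# correspondences come from per-image maps A_i into a shared universe of
# k features, the stacked matrix X = [A_i @ A_j.T] has rank <= k.

rng = np.random.default_rng(0)
k, n_images, n_feats = 4, 3, 6

# A_i assigns each image feature to one universe feature (one-hot rows).
A = [np.eye(k)[rng.integers(0, k, n_feats)] for _ in range(n_images)]

# Stack all pairwise correspondence matrices into one block matrix.
X = np.block([[A[i] @ A[j].T for j in range(n_images)]
              for i in range(n_images)])

print(np.linalg.matrix_rank(X))  # at most k
```

Enforcing low rank on the stacked matrix thus couples all pairwise matchings and prunes correspondences that are inconsistent across the collection.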
Unsupervised learning of object landmarks by factorized spatial embeddings
Learning automatically the structure of object categories remains an
important open problem in computer vision. In this paper, we propose a novel
unsupervised approach that can discover and learn landmarks in object
categories, thus characterizing their structure. Our approach is based on
factorizing image deformations, as induced by a viewpoint change or an object
deformation, by learning a deep neural network that detects landmarks
consistently with such visual effects. Furthermore, we show that the learned
landmarks establish meaningful correspondences between different object
instances in a category without having to impose this requirement explicitly.
We assess the method qualitatively on a variety of object types, natural and
man-made. We also show that our unsupervised landmarks are highly predictive of
manually-annotated landmarks in face benchmark datasets, and can be used to
regress these with a high degree of accuracy.

Comment: To be published in ICCV 201
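The factorization principle above amounts to an equivariance constraint: if a known deformation g warps the image, the detected landmark coordinates should move by the same g. The following is a toy sketch of that constraint, not the paper's deep network; `detect` is a hypothetical stand-in detector and the points are made up.

```python
import numpy as np

# Toy sketch of equivariance-based landmark learning: penalize the
# mismatch between detecting on a deformed image and deforming the
# detections. A detector trained with this loss fires consistently
# under viewpoint changes and object deformations.

def detect(points):
    """Stand-in 'detector': identity on 2-D points (hypothetical)."""
    return points

def g(points, shift=np.array([2.0, -1.0])):
    """A known deformation: here a simple translation."""
    return points + shift

landmarks = np.array([[10.0, 5.0], [3.0, 7.0]])

# Equivariance loss: detect on the deformed input vs. deform the detections.
loss = np.sum((detect(g(landmarks)) - g(detect(landmarks))) ** 2)
print(loss)  # 0.0 for a perfectly equivariant detector
```

A real detector is a network whose heatmap outputs replace `detect`, and g ranges over sampled warps or frame pairs; the loss is zero only when landmarks track the deformation.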