A Graph Theoretic Approach for Object Shape Representation in Compositional Hierarchies Using a Hybrid Generative-Descriptive Model
A graph theoretic approach is proposed for object shape representation in a
hierarchical compositional architecture called Compositional Hierarchy of Parts
(CHOP). In the proposed approach, vocabulary learning is performed using a
hybrid generative-descriptive model. First, statistical relationships between
parts are learned using a Minimum Conditional Entropy Clustering algorithm.
Then, selection of descriptive parts is defined as a frequent subgraph
discovery problem, and solved using a Minimum Description Length (MDL)
principle. Finally, part compositions are constructed by compressing the
internal data representation with discovered substructures. Shape
representation and computational complexity properties of the proposed approach
and algorithms are examined using six benchmark two-dimensional shape image
datasets. Experiments show that CHOP can employ part shareability and indexing
mechanisms for fast inference of part compositions using learned shape
vocabularies. Additionally, CHOP provides better shape retrieval performance
than the state-of-the-art shape retrieval methods.
Comment: Paper: 17 pages. 13th European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III, pp. 566-581. Supplementary material can be downloaded from http://link.springer.com/content/esm/chp:10.1007/978-3-319-10578-9_37/file/MediaObjects/978-3-319-10578-9_37_MOESM1_ESM.pd
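The MDL-based selection of descriptive parts mentioned in the abstract can be illustrated with a minimal sketch. The encoding cost below (bits proportional to edge count) and the helper names are illustrative assumptions, not CHOP's actual implementation: the idea is only that a substructure is preferred when replacing its occurrences with placeholder nodes minimizes the total description length of the substructure plus the compressed graph.

```python
# Toy MDL score for frequent-subgraph selection (illustrative, not CHOP's code).
# A graph is summarized by its edge count; compressing it with substructure S
# replaces each occurrence of S with a single placeholder node.

def description_length(num_edges: int) -> float:
    # Assumed encoding cost: bits proportional to edge count.
    return float(num_edges)

def mdl_value(graph_edges: int, sub_edges: int, occurrences: int) -> float:
    # DL(S) + DL(G | S): cost of the substructure plus the compressed graph.
    compressed = graph_edges - occurrences * sub_edges + occurrences
    return description_length(sub_edges) + description_length(compressed)

def best_substructure(graph_edges, candidates):
    # Pick the candidate (sub_edges, occurrences) minimizing total DL.
    return min(candidates, key=lambda c: mdl_value(graph_edges, *c))

# A 3-edge substructure occurring 10 times compresses a 100-edge graph
# better than a 5-edge substructure occurring twice.
print(best_substructure(100, [(3, 10), (5, 2)]))  # → (3, 10)
```

In this toy scoring, frequent small parts win precisely because they buy the most compression, which matches the intuition behind using MDL to pick shareable parts.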
Visual Question Answering: A Survey of Methods and Datasets
Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
Comment: 25 pages
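The "common feature space" approach the survey describes can be sketched in a few lines. Real VQA systems use a CNN to encode the image and an RNN to encode the question; in this hedged sketch both encoders are stand-ins, and the dimensions and random projections are assumptions chosen only to show the fusion step.

```python
import numpy as np

# Sketch of mapping an image feature and a question feature into a shared
# space and fusing them. The encoders themselves (CNN/RNN) are omitted;
# W_img and W_txt stand in for learned projection layers.

rng = np.random.default_rng(0)
D_IMG, D_TXT, D_COMMON = 512, 300, 256  # dimensions are assumptions

W_img = rng.standard_normal((D_COMMON, D_IMG)) / np.sqrt(D_IMG)
W_txt = rng.standard_normal((D_COMMON, D_TXT)) / np.sqrt(D_TXT)

def fuse(image_feat: np.ndarray, question_feat: np.ndarray) -> np.ndarray:
    # Project each modality into the shared space, then combine
    # elementwise (one common choice; concatenation is another).
    return np.tanh(W_img @ image_feat) * np.tanh(W_txt @ question_feat)

joint = fuse(rng.standard_normal(D_IMG), rng.standard_normal(D_TXT))
print(joint.shape)  # → (256,)
```

The fused vector would then feed an answer classifier; the survey's taxonomy largely turns on how this combination step is realized.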
Hybrid image representation methods for automatic image annotation: a survey
In most automatic image annotation systems, images are represented with low-level features using either global or local methods. Global methods use the entire image as a unit. Local methods divide images either into blocks, adopting fixed-size sub-image blocks as sub-units, or into regions, using segmented regions as sub-units. In contrast to typical automatic image annotation methods that use either global or local features exclusively, several recent methods incorporate both kinds of information, on the premise that combining the two levels of features is beneficial for annotating images. In this paper, we provide a survey of automatic image annotation techniques from one perspective: feature extraction. To complement existing surveys in the literature, we focus on the emerging hybrid methods that combine both global and local features for image representation.
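The hybrid representation described above can be sketched as a global descriptor for the whole image concatenated with local descriptors from fixed-size blocks. The grayscale-histogram descriptor and the 2x2 block grid below are illustrative choices, not a method from the surveyed literature.

```python
import numpy as np

# Hybrid image representation sketch: one global histogram plus one
# histogram per fixed-size block, concatenated into a single vector.

def gray_histogram(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)  # L1-normalize

def hybrid_features(image: np.ndarray) -> np.ndarray:
    h, w = image.shape
    global_feat = gray_histogram(image)  # global: whole image as one unit
    local_feats = [gray_histogram(image[i:i + h // 2, j:j + w // 2])
                   for i in (0, h // 2) for j in (0, w // 2)]  # local: 2x2 blocks
    return np.concatenate([global_feat] + local_feats)

img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
print(hybrid_features(img).shape)  # → (40,): 8 global + 4 blocks x 8 bins
```

Region-based local methods would replace the fixed 2x2 grid with segmented regions, but the concatenation of global and local descriptors stays the same.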