One-Shot Fine-Grained Instance Retrieval
Fine-Grained Visual Categorization (FGVC) has achieved significant progress
recently. However, the number of fine-grained species can be huge and is
dynamically increasing in real scenarios, making it difficult to recognize
unseen objects under the current FGVC framework. This raises the open issue of
performing large-scale fine-grained identification without a complete training
set. To address this issue, we propose a retrieval task named One-Shot
Fine-Grained Instance Retrieval (OSFGIR). "One-Shot" denotes the ability to
identify unseen objects through a fine-grained retrieval task assisted by an
incomplete auxiliary training set. This paper first presents a detailed
description of the OSFGIR task and our collected OSFGIR-378K dataset. Next, we
propose the Convolutional and Normalization Networks (CN-Nets) learned on the
auxiliary dataset to generate a concise and discriminative representation.
Finally, we present a coarse-to-fine retrieval framework consisting of three
components: coarse retrieval, fine-grained retrieval, and query expansion. The
framework progressively retrieves images with similar semantics and then
performs fine-grained identification. Experiments show that our OSFGIR
framework achieves significantly better accuracy and efficiency than existing
FGVC and image retrieval methods, and thus could be a better solution for
large-scale fine-grained object identification.
Comment: Accepted by MM2017, 9 pages, 7 figures
Fine-grained Image Classification by Exploring Bipartite-Graph Labels
Given a food image, can a fine-grained object recognition engine tell "which
restaurant, which dish" the food belongs to? Such ultra-fine-grained image
recognition is key to many applications such as search by image, but it is
very challenging because it needs to discern subtle differences between classes
while dealing with the scarcity of training data. Fortunately, the ultra-fine
granularity naturally brings rich relationships among object classes. This
paper proposes a novel approach to exploit the rich relationships through
bipartite-graph labels (BGL). We show how to model BGL within an overall
convolutional neural network, and the resulting system can be optimized through
back-propagation. We also show that it is computationally efficient in
inference thanks to the bipartite structure. To facilitate the study, we
construct a new food benchmark dataset, which consists of 37,885 food images
collected from 6 restaurants, covering 975 menus in total. Experimental results
on this new food dataset and three others demonstrate that BGL advances previous works
in fine-grained object recognition. An online demo is available at
http://www.f-zhou.com/fg_demo/
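The efficiency claim rests on the bipartite structure: if each fine class links to exactly one node per coarse label set (e.g., one restaurant, one dish type), the joint score decomposes along the graph edges, so inference stays linear in the number of fine classes instead of enumerating label combinations. The sketch below illustrates this idea; the names and the single-link assumption are hypothetical, not the paper's exact BGL formulation:

```python
import numpy as np

def bgl_scores(fine_logits, coarse_logits_list, graphs):
    """Sketch of scoring with bipartite-graph labels (BGL).

    fine_logits:        (K,) logits for K fine-grained classes.
    coarse_logits_list: list of (C_i,) logit arrays, one per coarse label
                        set (e.g. restaurant, dish type).
    graphs:             list of length-K integer arrays; graphs[i][k] is
                        the coarse class in set i linked to fine class k.
    """
    joint = fine_logits.copy()
    for coarse_logits, link in zip(coarse_logits_list, graphs):
        joint = joint + coarse_logits[link]  # add the linked coarse logit
    # Softmax over K joint scores costs O(K * num_graphs) thanks to the
    # bipartite structure, rather than scoring every label combination.
    e = np.exp(joint - joint.max())
    return e / e.sum()
```

For example, with two restaurants and four dishes where dishes 0-1 belong to restaurant 0 and dishes 2-3 to restaurant 1, a high restaurant-1 logit boosts dishes 2 and 3 jointly.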
Fine-grained Discriminative Localization via Saliency-guided Faster R-CNN
Discriminative localization is essential for the fine-grained image
classification task, which aims to recognize hundreds of subcategories within
the same basic-level category. The key differences among subcategories are
subtle and local, reflected in the discriminative regions of objects. Existing
methods generally adopt a two-stage learning framework: The first stage is to
localize the discriminative regions of objects, and the second is to encode the
discriminative features for training classifiers. However, these methods
generally have two limitations: (1) Separation of the two-stage learning is
time-consuming. (2) Dependence on object and part annotations for
discriminative localization learning leads to labor-intensive labeling.
It is highly challenging to address these two important limitations
simultaneously. Existing methods only focus on one of them. Therefore, this
paper proposes the discriminative localization approach via saliency-guided
Faster R-CNN to address the above two limitations at the same time, and our
main novelties and advantages are: (1) An end-to-end network based on Faster R-CNN
is designed to simultaneously localize discriminative regions and encode
discriminative features, which accelerates classification speed. (2)
Saliency-guided localization learning is proposed to localize the
discriminative region automatically, avoiding labor-intensive labeling. Both
are jointly employed to simultaneously accelerate classification speed and
eliminate dependence on object and part annotations. Compared with
state-of-the-art methods on the widely used CUB-200-2011 dataset, our approach
achieves both the best classification accuracy and the best efficiency.
Comment: 9 pages, to appear in ACM MM 201
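The core of saliency-guided localization learning is replacing manual object/part boxes with supervision derived from a saliency map. A minimal way to picture this is a helper that converts a saliency map into a pseudo ground-truth box for the detector; the thresholding heuristic below is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

def saliency_to_box(saliency, thresh_ratio=0.5):
    """Hypothetical helper: derive a pseudo ground-truth box from a 2-D
    saliency map, so discriminative localization can be trained without
    manual object/part annotations.

    Returns (x_min, y_min, x_max, y_max) covering all pixels whose
    saliency is at least thresh_ratio of the map's maximum.
    """
    mask = saliency >= thresh_ratio * saliency.max()
    ys, xs = np.where(mask)  # row/column indices of salient pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

A box produced this way could then stand in for the ground-truth region when training a Faster R-CNN-style detector end to end with the classifier.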
Microgenesis, immediate experience and visual processes in reading
The concept of microgenesis refers to the development on a brief present-time scale of a percept, a thought, an object of imagination, or an expression. It defines the occurrence of immediate experience as dynamic unfolding and differentiation in which the ‘germ’ of the final experience is already embodied in the early stages of its development. Immediate experience typically concerns the focal experience of an object that is thematized as a ‘figure’ in the global field of consciousness; this can involve a percept, thought, object of imagination, or expression (verbal and/or gestural). Yet, whatever its modality or content, focal experience is postulated to develop and stabilize through dynamic differentiation and unfolding. Such a microgenetic description of immediate experience substantiates a phenomenological and genetic theory of cognition where any process of perception, thought, expression or imagination is primarily a process of genetic differentiation and development, rather than one of detection (of a stimulus array or information), transformation, and integration (of multiple primitive components) as theories of cognitivist kind have contended.
My purpose in this essay is to provide an overview of the main constructs of microgenetic theory, to outline its potential avenues of future development in the field of cognitive science, and to illustrate an application of the theory to research, using visual processes in reading as an example.