Class Specific Object Recognition using Kernel Gibbs Distributions
Feature selection is crucial for effective object recognition. The subject has been investigated extensively in the literature, with approaches ranging from heuristic choices to statistical methods to the integration of multiple cues. All of these techniques produce a single feature representation shared by every object class considered. In this paper we take a completely different approach, using class-specific features. Our method is a probabilistic classifier that allows us to use separate feature vectors, selected specifically for each class. We obtain this result by extending previous work on Class Specific Classifiers and Kernel Gibbs distributions. The resulting method, which we call the Kernel-Class Specific Classifier, learns a different kernel for each object class. We present experiments of increasing difficulty, showing the power of our approach.
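The idea of scoring each class with its own kernel can be illustrated with a toy sketch. This is not the paper's learning procedure: the mean-similarity scoring rule and the per-class RBF bandwidths below are assumptions chosen for clarity (in general each class could use an entirely different kernel family).

```python
import numpy as np

def rbf_kernel(gamma):
    """Return an RBF kernel function with bandwidth parameter gamma."""
    def k(x, y):
        return np.exp(-gamma * np.sum((x - y) ** 2))
    return k

def predict(x, class_data, class_kernels):
    """Score x under each class using that class's own kernel; pick the best.

    Scoring by mean kernel similarity to the class's training samples is a
    simplification of the probabilistic classifier described in the abstract.
    """
    scores = {}
    for label, samples in class_data.items():
        k = class_kernels[label]
        scores[label] = np.mean([k(x, s) for s in samples])
    return max(scores, key=scores.get)

# Two toy classes with different statistics, each assigned its own kernel.
rng = np.random.default_rng(0)
class_data = {
    "cup":  rng.normal(0.0, 0.3, size=(20, 2)),
    "book": rng.normal(2.0, 0.3, size=(20, 2)),
}
class_kernels = {"cup": rbf_kernel(1.0), "book": rbf_kernel(0.25)}

print(predict(np.array([0.1, -0.2]), class_data, class_kernels))  # cup
print(predict(np.array([2.1, 1.9]), class_data, class_kernels))   # book
```

Because the kernel is chosen per class, each class can be represented in the feature space that suits it best, which is the core of the class-specific view taken by the paper.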
BOLD Features to Detect Texture-less Objects
Object detection in images withstanding significant clutter and occlusion is still a challenging task whenever the object surface is characterized by poor informative content. We propose to tackle this problem by a compact and distinctive representation of groups of neighboring line segments aggregated over limited spatial supports and invariant to rotation, translation and scale changes. Peculiarly, our proposal allows for leveraging on the inherent strengths of descriptor-based approaches, i.e. robustness to occlusion and clutter and scalability with respect to the size of the model library, also when dealing with scarcely textured objects.
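The invariance claim can be sketched concretely. The toy descriptor below (an assumption for illustration, not the authors' implementation) summarizes a group of line segments by the pairwise angles between each segment and the line joining the two segments' midpoints; relative angles survive rotation, translation, and uniform scaling. The segment format and histogram binning are choices made for this sketch.

```python
import numpy as np

def angle_between(u, v):
    """Unsigned angle in [0, pi/2] between undirected 2-D directions u, v."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(abs(cosang), 0.0, 1.0))

def segment_pair_descriptor(segments, n_bins=9):
    """2-D angle histogram over segment pairs. segments: (N, 4) endpoints."""
    segs = np.asarray(segments, dtype=float)
    dirs = segs[:, 2:] - segs[:, :2]          # segment direction vectors
    mids = 0.5 * (segs[:, 2:] + segs[:, :2])  # segment midpoints
    hist = np.zeros((n_bins, n_bins))
    n = len(segs)
    for i in range(n):
        for j in range(i + 1, n):
            join = mids[j] - mids[i]          # line joining the midpoints
            a = angle_between(dirs[i], join)
            b = angle_between(dirs[j], join)
            ia = min(int(a / (np.pi / 2) * n_bins), n_bins - 1)
            ib = min(int(b / (np.pi / 2) * n_bins), n_bins - 1)
            hist[ia, ib] += 1
    total = hist.sum()
    return hist / total if total else hist

# The descriptor of a shape matches that of its rotated, scaled, shifted copy.
square = [(0, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 1), (0, 1, 0, 0)]
def transform(seg, s=3.0, t=0.7):
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    p1 = s * R @ np.array(seg[:2]) + 5.0
    p2 = s * R @ np.array(seg[2:]) + 5.0
    return (*p1, *p2)
moved = [transform(s) for s in square]
print(np.allclose(segment_pair_descriptor(square),
                  segment_pair_descriptor(moved)))  # True
```

Describing geometry through relative angles rather than absolute positions is what lets such a representation work on texture-less surfaces, where appearance-based descriptors have little to latch onto.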
Recognizing and segmenting objects in clutter
When viewing a cluttered scene, observers may not be able to segment whole objects prior to recognition. Instead, they may segment and recognize these objects in a piecemeal way. Here we test whether observers can use the appearance of one object part to predict the location and appearance of other object parts. During several training sessions, observers studied an object against a blank background. They then viewed this object against a background of clutter that camouflaged some parts of the object while leaving other parts salient. The observer’s task was to find the camouflaged part. We varied the symmetry of the salient part with the expectation that as this symmetry decreased, the information about the camouflaged part’s location and appearance would increase and this would facilitate search. Our results suggest that observers can use the salient part to predict the location, but not the appearance, of the camouflaged part.
Contour and texture for visual recognition of object categories
The recognition of categories of objects in images has become a central topic in computer vision. Automatic visual recognition systems are rapidly becoming central to applications such as image search, robotics, vehicle safety systems, and image editing. This work addresses three sub-problems of recognition: image classification, object detection, and semantic segmentation. The task of classification is to determine whether an object of a particular category is present or not. Object detection aims to localize any objects of the category. Semantic segmentation is a more complete form of image understanding, whereby an image is partitioned into coherent regions that are assigned meaningful class labels. This thesis proposes novel discriminative learning approaches to these problems.

Our primary contributions are threefold. Firstly, we demonstrate that the contours (the outline and interior edges) of an object are, alone, sufficient for accurate visual recognition. Secondly, we propose two powerful new feature types: (i) a learned codebook of contour fragments matched with an improved oriented chamfer distance, and (ii) a set of texture-based features that simultaneously exploit local appearance, approximate shape, and appearance context. The efficacy of these new feature types is evaluated on a wide variety of datasets. Thirdly, we show how, in combination, these two largely orthogonal feature types can substantially improve recognition performance above that achieved by either alone.
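The oriented chamfer matching mentioned for the contour-fragment features can be sketched in a few lines. In this simplified version (the weighting term `lam` and the brute-force nearest-neighbour search are assumptions, not the thesis's exact formulation), the distance from a template edge map to an image edge map averages, over template points, the distance to the nearest image edge point plus a penalty on their orientation difference.

```python
import numpy as np

def oriented_chamfer(template_pts, template_ori, image_pts, image_ori, lam=0.5):
    """Points: (N, 2) arrays; orientations: edge angles in radians, modulo pi."""
    # Pairwise distances from every template point to every image point.
    d = np.linalg.norm(template_pts[:, None, :] - image_pts[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)
    spatial = d[np.arange(len(template_pts)), nearest]
    # Orientation difference between matched edges, folded to [0, pi/2]
    # because edge orientations are undirected.
    dtheta = np.abs(template_ori - image_ori[nearest]) % np.pi
    dtheta = np.minimum(dtheta, np.pi - dtheta)
    return float(np.mean(spatial + lam * dtheta))

# A template matches a slightly shifted copy of itself better than the same
# points carrying perpendicular edge orientations.
t = np.array([[0., 0.], [1., 0.], [2., 0.]])
ori = np.zeros(3)                       # horizontal edges
shifted = t + np.array([0.1, 0.1])
rotated_ori = ori + np.pi / 2           # same points, vertical edges
print(oriented_chamfer(t, ori, shifted, ori) <
      oriented_chamfer(t, ori, shifted, rotated_ori))  # True
```

The orientation term is what distinguishes this from plain chamfer matching: two contours must agree on where their edges are and on which way those edges run, which sharply reduces accidental matches in cluttered edge maps.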