Hyperfeatures - Multilevel Local Coding for Visual Recognition
Histograms of local appearance descriptors are a popular representation for visual recognition. They are highly discriminant and have good resistance to local occlusions and to geometric and photometric variations, but they are not able to exploit spatial co-occurrence statistics of features at scales larger than their local input patches. We present a new multilevel visual representation, `hyperfeatures', that is designed to remedy this. The basis of the work is the familiar notion that to detect object parts, in practice it often suffices to detect co-occurrences of more local object fragments – a process that can be formalized as comparison (vector quantization) of image patches against a codebook of known fragments, followed by local aggregation of the resulting codebook membership vectors to detect co-occurrences. This process converts collections of local image descriptor vectors into slightly less local histogram vectors – higher-level but spatially coarser descriptors. Our central observation is that it can therefore be iterated, and that doing so captures and codes ever larger assemblies of object parts and increasingly abstract or `semantic' image properties. This repeated nonlinear `folding' is essentially different from that of hierarchical models such as Convolutional Neural Networks and HMAX, being based on repeated comparison to local prototypes and accumulation of co-occurrence statistics rather than on repeated convolution and rectification. We formulate the hyperfeatures model and study its performance under several different image coding methods including clustering-based Vector Quantization, Gaussian Mixtures, and combinations of these with Latent Dirichlet Allocation. We find that the resulting high-level features provide improved performance in several object image and texture image classification tasks.
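One level of the coding process described above can be sketched as follows: quantize each local descriptor against a codebook, then pool the one-hot membership vectors over small spatial windows into co-occurrence histograms, which form the (coarser) descriptor grid for the next level. This is a minimal illustrative sketch, not the paper's implementation; the window size, pooling scheme, and codebook here are assumptions.

```python
import numpy as np

def vq_assign(descriptors, codebook):
    """Hard vector quantization: one-hot membership of each descriptor
    to its nearest codebook centre."""
    # descriptors: (n, d), codebook: (k, d)
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    onehot = np.zeros((len(descriptors), len(codebook)))
    onehot[np.arange(len(descriptors)), d2.argmin(1)] = 1.0
    return onehot

def hyperfeature_level(desc_grid, codebook, window=2):
    """One coding level: quantize each descriptor on the grid, then pool
    membership vectors over window x window neighbourhoods. The pooled
    histograms are a spatially coarser descriptor grid, so the function
    can be applied again with a new codebook."""
    h, w, d = desc_grid.shape
    memberships = vq_assign(desc_grid.reshape(-1, d), codebook).reshape(h, w, -1)
    out_h, out_w = h - window + 1, w - window + 1
    out = np.zeros((out_h, out_w, len(codebook)))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = memberships[i:i + window, j:j + window].sum((0, 1))
    return out  # coarser grid of local co-occurrence histograms
```

Because the output is again a grid of local descriptor vectors, the same function can be iterated with a codebook learned at each level, which is the central observation of the paper.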
Rotation-Invariant Restricted Boltzmann Machine Using Shared Gradient Filters
Finding suitable features has been an essential problem in computer vision.
We focus on Restricted Boltzmann Machines (RBMs), which, despite their
versatility, cannot accommodate transformations that may occur in the scene. As
a result, several approaches have been proposed that consider a set of
transformations, which are used to either augment the training set or transform
the actual learned filters. In this paper, we propose the Explicit
Rotation-Invariant Restricted Boltzmann Machine, which exploits prior
information coming from the dominant orientation of images. Our model extends
the standard RBM, by adding a suitable number of weight matrices, associated
with each dominant gradient. We show that our approach is able to learn
rotation-invariant features, comparing it with the classic formulation of RBM
on the MNIST benchmark dataset. Overall, by requiring fewer hidden units, our
method learns compact features that are robust to rotations.
Comment: 8 pages, 3 figures, 1 table
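The idea of routing an image through orientation-specific weights can be sketched as below: estimate the dominant gradient orientation from a weighted gradient histogram, then use the weight matrix associated with that orientation bin. This is a hedged sketch under assumed parameterization (the class name, binning, and routing scheme are illustrative, not the paper's exact formulation).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dominant_orientation_bin(image, n_bins):
    """Estimate the dominant gradient orientation of an image as the peak of
    a magnitude-weighted orientation histogram over [0, pi)."""
    gy, gx = np.gradient(image.astype(float))
    angles = np.arctan2(gy, gx) % np.pi
    mags = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=n_bins, range=(0.0, np.pi), weights=mags)
    return int(hist.argmax())

class OrientationRBM:
    """Illustrative sketch: an RBM-like model holding one weight matrix per
    dominant-orientation bin, so each visible vector is scored against weights
    matched to its orientation and the hidden units see orientation-normalized
    input. Biases and training (e.g. contrastive divergence) are omitted."""
    def __init__(self, n_visible, n_hidden, n_orientations, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, (n_orientations, n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)

    def hidden_probs(self, v, orientation_bin):
        # Route the input through the weight matrix of its dominant orientation.
        return sigmoid(v @ self.W[orientation_bin] + self.b_h)
```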
Compute Less to Get More: Using ORC to Improve Sparse Filtering
Sparse Filtering is a popular feature learning algorithm for image
classification pipelines. In this paper, we connect the performance of Sparse
Filtering with spectral properties of the corresponding feature matrices. This
connection provides new insights into Sparse Filtering; in particular, it
suggests early stopping of Sparse Filtering. We therefore introduce the Optimal
Roundness Criterion (ORC), a novel stopping criterion for Sparse Filtering. We
show that this stopping criterion is related to pre-processing procedures
such as Statistical Whitening and demonstrate that it can make image
classification with Sparse Filtering considerably faster and more accurate.
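For context, the Sparse Filtering objective whose optimization the ORC would stop early can be sketched as follows (in the style of the original Sparse Filtering algorithm: soft-absolute features, row then column L2 normalization, L1 penalty). The ORC itself is defined via spectral properties of the feature matrix and is not reproduced here.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Evaluate the Sparse Filtering objective for weights W (k x d) on data
    X (d x n): soft-absolute features, L2-normalize each feature row across
    examples, then each example column across features, and sum (L1 norm).
    Training minimizes this value; a stopping criterion such as the ORC
    decides when to halt that minimization."""
    F = np.sqrt((W @ X) ** 2 + eps)                     # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)    # normalize feature rows
    F = F / np.linalg.norm(F, axis=0, keepdims=True)    # normalize example columns
    return F.sum()                                      # L1 sparsity penalty
```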
Multi-Level Visual Alphabets
A central debate in visual perception theory is the argument for indirect versus direct perception; i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based representation that combines both approaches. The previously developed Visual Alphabet method is extended with a hierarchy of representations, each level feeding into the next one, but based on features that are not abstract but directly relevant to the task at hand. Explorative benchmark experiments are carried out on face images to investigate and explain the impact of key parameters such as pattern size, number of prototypes, and the distance measures used. Results show that adding a middle layer improves results by encoding the spatial co-occurrence of lower-level pattern prototypes.
Object Edge Contour Localisation Based on HexBinary Feature Matching
This paper addresses the issue of localising object
edge contours in cluttered backgrounds to support robotics
tasks such as grasping and manipulation and also to improve
the potential perceptual capabilities of robot vision systems. Our
approach is based on coarse-to-fine matching of a new recursively
constructed hierarchical, dense, edge-localised descriptor,
the HexBinary, based on the HexHog descriptor structure first
proposed in [1]. Since Binary String image descriptors [2]–
[5] require much lower computational resources, but provide
similar or even better matching performance than Histogram
of Oriented Gradients (HoG) descriptors, we have replaced
the HoG base descriptor fields used in HexHog with Binary
Strings generated from first and second order polar derivative
approximations. The ALOI [6] dataset is used to evaluate
the HexBinary descriptors which we demonstrate to achieve
a superior performance to that of HexHoG [1] for pose
refinement. Validation of our object contour localisation
system shows promising results, correctly labelling ~86% of edgel positions and mis-labelling only ~3%.
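The computational advantage of binary-string descriptors comes from matching with XOR plus popcount rather than floating-point distance over histogram fields. A minimal sketch of such matching (illustrative; not the HexBinary construction itself):

```python
import numpy as np

def hamming_distance(a, b):
    """Hamming distance between two packed binary descriptors (uint8 arrays):
    XOR the bytes, then count set bits. This cheap operation is why
    binary-string descriptors match faster than float-valued HoG fields."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptor(query, candidates):
    """Return the index of the candidate descriptor closest to the query
    under Hamming distance (brute-force nearest neighbour)."""
    dists = [hamming_distance(query, c) for c in candidates]
    return int(np.argmin(dists))
```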
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category, for example, model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are categorized in two ways in this survey: top-down versus bottom-up methods, and generative versus discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and describes the error measurement methods that are frequently used.