Using basic image features for texture classification
Representing texture images statistically as histograms over a discrete vocabulary of local features has proven widely effective for texture classification tasks. Images are described locally by vectors of, for example, responses to some filter bank, and a visual vocabulary is defined as a partition of this descriptor-response space, typically based on clustering. In this paper, we investigate the performance of an approach which represents textures as histograms over a visual vocabulary which is defined geometrically, based on the Basic Image Features (BIFs) of Griffin and Lillholm (Proc. SPIE 6492(09):1-11, 2007), rather than by clustering. BIFs provide a natural mathematical quantisation of a filter-response space into qualitatively distinct types of local image structure. We also extend our approach to deal with intra-class variations in scale. Our algorithm is simple: there is no need for a pre-training step to learn a visual dictionary, as in methods based on clustering, and no tuning of parameters is required to deal with different datasets. We have tested our implementation on three popular and challenging texture datasets and find that it produces consistently good classification results on each, including what we believe to be the best reported for the KTH-TIPS database and the equal best reported for the UIUCTex database.
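To make the pipeline concrete, the following is a minimal Python sketch of the histogram construction described above, using scale-normalised Gaussian derivatives and the seven-class BIF quantisation of Griffin and Lillholm as we read it; the flatness threshold `eps` and the scale `sigma` are illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bif_histogram(img, sigma=2.0, eps=0.1):
    """Sketch: histogram a greyscale image over the 7 Basic Image Features.

    `eps` (flatness threshold) and `sigma` (scale) are illustrative values.
    """
    # Scale-normalised Gaussian derivatives up to second order.
    s   = gaussian_filter(img, sigma)
    sx  = sigma * gaussian_filter(img, sigma, order=(0, 1))
    sy  = sigma * gaussian_filter(img, sigma, order=(1, 0))
    sxx = sigma**2 * gaussian_filter(img, sigma, order=(0, 2))
    syy = sigma**2 * gaussian_filter(img, sigma, order=(2, 0))
    sxy = sigma**2 * gaussian_filter(img, sigma, order=(1, 1))

    lam = sxx + syy                                 # scaled Laplacian
    gam = np.sqrt((sxx - syy) ** 2 + 4 * sxy ** 2)  # eigenvalue spread

    # One score per qualitative structure type; the per-pixel argmax picks
    # the BIF class: flat, slope, dark blob, bright blob, dark line,
    # bright line, saddle (constants as in the BIF papers, to our reading).
    scores = np.stack([
        eps * s,
        2 * np.sqrt(sx ** 2 + sy ** 2),
        lam,
        -lam,
        (gam + lam) / np.sqrt(2),
        (gam - lam) / np.sqrt(2),
        gam,
    ])
    labels = scores.argmax(axis=0)
    hist = np.bincount(labels.ravel(), minlength=7).astype(float)
    return hist / hist.sum()
```

Note there is no dictionary to learn: the seven classes are fixed by the geometry of the filter-response space, which is what removes the pre-training step mentioned above.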
Nonparametric Bayesian Texture Learning and Synthesis
We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) in which the hidden states correspond to the cluster labels of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet Process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes more irregular. The HDP makes use of a Dirichlet process prior, which favors regular textures by penalizing model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and its spatial layout jointly and automatically. The HDP-2DHMM yields a compact representation of textures which allows fast texture synthesis with rendering quality comparable to state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. These preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems.
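As a toy illustration of the generative intuition (not the paper's inference procedure), the sketch below samples a grid of texton labels from fixed horizontal and vertical transition matrices. In the actual model, the number of textons and the transition matrices are learned nonparametrically via the HDP; here the texton count `K` is fixed and the neighbour distributions are combined by a simple product heuristic.

```python
import numpy as np

def sample_texton_grid(h_trans, v_trans, height, width, rng=None):
    """Toy sketch: sample a grid of texton labels whose spatial layout is
    governed by row-stochastic horizontal and vertical transition matrices
    (each of shape K x K), approximating the 2DHMM generative intuition."""
    rng = rng if rng is not None else np.random.default_rng(0)
    K = h_trans.shape[0]
    grid = np.zeros((height, width), dtype=int)
    grid[0, 0] = rng.integers(K)
    for i in range(height):
        for j in range(width):
            if i == 0 and j == 0:
                continue
            # Combine predictive distributions of the left and upper
            # neighbours (a simplification; the real model does joint
            # inference over the whole grid).
            p = np.ones(K)
            if j > 0:
                p *= h_trans[grid[i, j - 1]]
            if i > 0:
                p *= v_trans[grid[i - 1, j]]
            grid[i, j] = rng.choice(K, p=p / p.sum())
    return grid
```

Rendering a texture from such a grid would then amount to placing each texton's appearance model at its grid cell, which is where the compactness of the representation pays off.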
A survey of exemplar-based texture synthesis
Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; a random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure that stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective shortcomings. Recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
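For concreteness, here is a heavily simplified Python sketch of the patch re-arrangement idea, in the spirit of image quilting: each new block is the candidate patch from the sample whose overlap with already-synthesised pixels has the smallest sum of squared differences. The minimum-error boundary cut and the other refinements of published methods are omitted; this is our illustration, not any surveyed method verbatim.

```python
import numpy as np

def quilt(sample, out_size, patch=32, overlap=8, rng=None):
    """Simplified 'copy-paste' synthesis: tile an out_size x out_size image
    with sample patches, choosing each patch by SSD over the overlap with
    previously placed pixels. Works for greyscale or colour arrays."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H, W = sample.shape[:2]
    step = patch - overlap
    out = np.zeros((out_size, out_size) + sample.shape[2:], sample.dtype)
    # Pre-extract every candidate patch from the sample.
    cands = np.array([sample[i:i + patch, j:j + patch]
                      for i in range(H - patch + 1)
                      for j in range(W - patch + 1)])
    candsf = cands.astype(float)
    for y in range(0, out_size - patch + 1, step):
        for x in range(0, out_size - patch + 1, step):
            if y == 0 and x == 0:
                best = cands[rng.integers(len(cands))]  # free choice at start
            else:
                region = out[y:y + patch, x:x + patch].astype(float)
                # Overlap mask: top rows if a block sits above, left columns
                # if one sits to the left.
                mask = np.zeros((patch, patch), dtype=bool)
                if y > 0:
                    mask[:overlap, :] = True
                if x > 0:
                    mask[:, :overlap] = True
                diff = (candsf - region) ** 2
                err = diff.reshape(len(cands), patch, patch, -1)[:, mask].sum(axis=(1, 2))
                best = cands[err.argmin()]
            out[y:y + patch, x:x + patch] = best
    return out
```

Even this crude version reproduces local structure well; it is exactly the multi-scale organization discussed above (global layout at coarse scales) that such purely local copying fails to capture.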
Reconstructive Sparse Code Transfer for Contour Detection and Semantic Labeling
We frame the task of predicting a semantic labeling as a sparse
reconstruction procedure that applies a target-specific learned transfer
function to a generic deep sparse code representation of an image. This
strategy partitions training into two distinct stages. First, in an
unsupervised manner, we learn a set of generic dictionaries optimized for
sparse coding of image patches. We train a multilayer representation via
recursive sparse dictionary learning on pooled codes output by earlier layers.
Second, we encode all training images with the generic dictionaries and learn a
transfer function that optimizes reconstruction of patches extracted from
annotated ground-truth given the sparse codes of their corresponding image
patches. At test time, we encode a novel image using the generic dictionaries
and then reconstruct using the transfer function. The output reconstruction is
a semantic labeling of the test image.
Applying this strategy to the task of contour detection, we demonstrate
performance competitive with state-of-the-art systems. Unlike almost all prior
work, our approach obviates the need for any form of hand-designed features or
filters. To illustrate general applicability, we also show initial results on
semantic part labeling of human faces.
The effectiveness of our approach opens new avenues for research on deep
sparse representations. Our classifiers utilize this representation in a novel
manner. Rather than acting on nodes in the deepest layer, they attach to nodes
along a slice through multiple layers of the network in order to make
predictions about local patches. Our flexible combination of a generatively
learned sparse representation with discriminatively trained transfer
classifiers extends the notion of sparse reconstruction to encompass arbitrary
semantic labeling tasks.
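The transfer step can be illustrated by the simplest possible instance of the idea: a ridge-regression map from generic sparse codes to vectorised ground-truth label patches. This linear sketch is our reduction for illustration; the paper's transfer functions and the multilayer slice through the network are more elaborate.

```python
import numpy as np

def learn_transfer(Z, Y, lam=1e-3):
    """Sketch of reconstructive transfer: fit a linear map W minimising
    ||Y - W Z||^2 + lam ||W||^2, where Z (d x n) holds one generic sparse
    code per column and Y (m x n) the matching vectorised label patches."""
    d = Z.shape[0]
    A = Z @ Z.T + lam * np.eye(d)          # regularised Gram matrix
    W = np.linalg.solve(A, Z @ Y.T).T      # closed-form ridge solution
    return W

def predict(W, z):
    """Reconstruct a label patch from a test patch's sparse code."""
    return W @ z
```

At test time, the same generic dictionaries encode the novel image, and `predict` turns each code into a patch of the output labeling, mirroring the two-stage split described above.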
Discrimination of Textures Using Texton Patterns
Textural patterns can often be used to recognize familiar objects in an image or to retrieve images with similar texture from a database. Texture patterns can provide significant and abundant texture and shape information. One recent and important texture feature, the texton, represents the various patterns of an image and is useful in texture analysis. The present paper is an extension of our previous paper [1]. It divides the 3 × 3 neighbourhood into two different 2 × 2 neighbourhood grids, each consisting of four pixels. On these 2 × 2 grids, shape descriptor indexes (SDI) are evaluated separately and added to form a Total Shape Descriptor Index (TSDI) image. By deriving textons on the TSDI image, a Total Texton Shape Matrix (TTSM) image is formed, and Grey Level Co-occurrence Matrix (GLCM) parameters are derived from it for efficient texture discrimination. The experimental results show the efficacy of the present method.
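The final step, deriving GLCM parameters, can be sketched with scikit-image (recent versions, where the functions are named `graycomatrix`/`graycoprops`). The sketch below computes the features from any 2D 8-bit integer image; the paper computes them from its TTSM image, and the distances, angles, and properties chosen here are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img, levels=256):
    """Sketch: standard GLCM parameters of an 8-bit image, averaged over
    four orientations at distance 1 (illustrative settings)."""
    glcm = graycomatrix(img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p).mean()
            for p in ("contrast", "correlation", "energy", "homogeneity")}
```

Feeding the resulting feature vectors to any standard classifier then gives the texture discrimination step the abstract refers to.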