944 research outputs found

    Influence of color spaces over texture characterization

    Images are generally represented in the RGB color space, the model used by most cameras and computer displays. Nevertheless, representing color images in this space has important drawbacks for image analysis. For example, it is a non-uniform space: measured color differences are not proportional to the differences humans perceive. By contrast, the HSI color space is closer to human color perception, and the CIE Lab color space was defined to be approximately uniform. In this work, the influence of the color space on color texture characterization is studied by comparing the Lab, HSI, and RGB color spaces. Their effectiveness is analyzed with respect to two texture characterization methods: DFT features and co-occurrence matrices. The results show that incorporating color information into texture analysis improves the characterization significantly. Moreover, the Lab and HSI color spaces outperform RGB.
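    The RGB-to-HSI conversion mentioned above can be sketched with the standard textbook formulation; the abstract does not specify which variant the paper uses, so this is only an illustration.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (components in [0, 1]) to the HSI model.

    Standard textbook formulation: intensity is the channel average,
    saturation measures distance from grey, and hue is an angle
    measured from the red axis.
    """
    i = (r + g + b) / 3.0                          # intensity
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i  # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                                   # achromatic pixel: hue undefined
        h = 0.0
    else:
        h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        if b > g:                                  # reflect into the [180, 360) range
            h = 360.0 - h
    return h, s, i
```

    For example, pure red maps to hue 0 with full saturation, and pure green maps to hue 120.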

    Hierarchical indexing for region based image retrieval

    Region-based image retrieval has been an active research area. In this study we developed an improved region-based image retrieval system. The system applies image segmentation to divide an image into discrete regions, which, if the segmentation is ideal, correspond to objects. The focus of this research is to improve the capture of regions so as to enhance indexing and retrieval performance, and to provide a better similarity-distance computation. For image segmentation, we developed a modified k-means clustering algorithm in which a hierarchical clustering algorithm generates the initial number of clusters and the cluster centers. In addition, during similarity-distance computation we introduce an object weight based on each object's uniqueness, so objects that are not unique, such as trees and sky, carry less weight. The experimental evaluation uses the same 1000-image COREL color database as FuzzyClub, IRM, and Geometric Histogram, and performance is compared against these systems. Relative to these existing techniques, our study demonstrates the following advantages: (i) an improvement in image segmentation accuracy using the modified k-means algorithm, and (ii) an improvement in retrieval accuracy as a result of a better similarity-distance computation that considers the importance and uniqueness of objects in an image.
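    The initialization idea — let hierarchical clustering pick the starting centers for k-means — can be sketched as below. This is a minimal centroid-linkage version for illustration, not the authors' exact algorithm.

```python
import math

def _dist(a, b):
    return math.dist(a, b)  # Euclidean distance (Python 3.8+)

def _centroid(cluster):
    return tuple(sum(c) / len(cluster) for c in zip(*cluster))

def hierarchical_seeds(points, k):
    """Agglomerative clustering down to k clusters; the surviving
    centroids seed k-means instead of random initialization."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        cent = [_centroid(cl) for cl in clusters]
        # merge the pair of clusters whose centroids are closest
        i, j = min(((i, j) for i in range(len(cent))
                    for j in range(i + 1, len(cent))),
                   key=lambda ij: _dist(cent[ij[0]], cent[ij[1]]))
        clusters[i] += clusters.pop(j)
    return [_centroid(cl) for cl in clusters]

def kmeans(points, k, iters=20):
    """Lloyd's algorithm seeded by hierarchical clustering."""
    centers = hierarchical_seeds(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: _dist(p, centers[c]))].append(p)
        centers = [_centroid(g) if g else centers[idx]
                   for idx, g in enumerate(groups)]
    return centers
```

    On well-separated data this converges to the obvious cluster centroids without the sensitivity to random seeding that plain k-means has.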

    Evaluating color texture descriptors under large variations of controlled lighting conditions

    The recognition of color texture under varying lighting conditions is still an open issue. Several features have been proposed for this purpose, ranging from traditional statistical descriptors to features extracted with neural networks. Still, it is not completely clear under what circumstances one feature performs better than the others. In this paper we report an extensive comparison of old and new texture features, with and without a color normalization step, with a particular focus on how they are affected by small and large variations in the lighting conditions. The evaluation is performed on a new texture database including 68 samples of raw food acquired under 46 conditions that present single and combined variations of light color, direction, and intensity. The database allows systematic investigation of the robustness of texture descriptors across a large range of imaging conditions. (Comment: Submitted to the Journal of the Optical Society of America.)
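    One common choice for the color normalization step mentioned above is grey-world normalization, which rescales each channel so its mean matches the global mean, discounting the illuminant color. The paper compares several normalizers; this sketch shows only this one.

```python
def grey_world(image):
    """Grey-world color normalization.

    `image` is a list of (r, g, b) pixels with float components.
    Each channel is scaled so that all three channel means become
    equal to the overall mean, neutralizing a global color cast.
    """
    n = len(image)
    means = [sum(p[c] for p in image) / n for c in range(3)]
    grey = sum(means) / 3.0
    gains = [grey / m if m > 0 else 1.0 for m in means]  # guard empty channels
    return [tuple(p[c] * gains[c] for c in range(3)) for p in image]
```

    After normalization, an image shot under reddish light and the same scene under bluish light map to (approximately) the same pixel statistics.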

    Interpretable Transformations with Encoder-Decoder Networks

    Deep feature spaces have the capacity to encode complex transformations of their input data. However, understanding the relative feature-space relationship between two transformed encoded images is difficult. For instance, what is the relative feature-space relationship between two rotated images? What is decoded when we interpolate in feature space? Ideally, we want to disentangle confounding factors, such as pose, appearance, and illumination, from object identity. Disentangling these is difficult because they interact in very nonlinear ways. We propose a simple method to construct a deep feature space with explicitly disentangled representations of several known transformations. A person or algorithm can then manipulate the disentangled representation, for example, to re-render an image with explicit control over parameterized degrees of freedom. The feature space is constructed using a transforming encoder-decoder network with a custom feature transform layer acting on the hidden representations. We demonstrate the advantages of explicit disentangling on a variety of datasets and transformations, and as an aid for traditional tasks, such as classification. (Comment: Accepted at ICCV 2017.)
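    The "feature transform layer" idea can be illustrated with the simplest case: a 2-D rotation applied to consecutive pairs of hidden dimensions, so that rotating the input corresponds to rotating the code. The pairing scheme and names below are illustrative, not the paper's exact layer.

```python
import math

def rotate_feature_pairs(z, theta):
    """Rotate consecutive (x, y) pairs of a feature vector by theta.

    If an encoder is trained so that input rotation produces exactly
    this structured motion in feature space, a decoder can re-render
    the input with theta as an explicit, interpretable control.
    """
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for i in range(0, len(z) - 1, 2):
        x, y = z[i], z[i + 1]
        out += [c * x - s * y, s * x + c * y]
    if len(z) % 2:              # leave an unpaired (invariant) dimension alone
        out.append(z[-1])
    return out
```

    Applying the transform with theta and then -theta recovers the original code, which is what makes the representation manipulable rather than merely invariant.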

    Describing Textures in the Wild

    Patterns and textures are defining characteristics of many natural objects: a shirt can be striped, the wings of a butterfly can be veined, and the skin of an animal can be scaly. Aiming at supporting this analytical dimension in image understanding, we address the challenging problem of describing textures with semantic attributes. We identify a rich vocabulary of forty-seven texture terms and use them to describe a large dataset of patterns collected in the wild. The resulting Describable Textures Dataset (DTD) is the basis for seeking the best texture representation for recognizing describable texture attributes in images. We port the Improved Fisher Vector (IFV) from object recognition to texture recognition and show that, surprisingly, it outperforms specialized texture descriptors not only on our problem but also on established material recognition datasets. We also show that the describable attributes are excellent texture descriptors, transferring between datasets and tasks; in particular, combined with IFV, they significantly outperform the state-of-the-art by more than 8 percent on both the FMD and KTHTIPS-2b benchmarks. We also demonstrate that they produce intuitive descriptions of materials and Internet images. (Comment: 13 pages, 12 figures; fixed misplaced affiliation.)

    Multilayer Complex Network Descriptors for Color-Texture Characterization

    A new method based on complex networks is proposed for color-texture analysis. The proposal consists of modeling the image as a multilayer complex network where each color channel is a layer and each pixel (in each color channel) is represented as a network vertex. The network's dynamic evolution is assessed using a set of modeling parameters (radii and thresholds), and new characterization techniques are introduced to capture information regarding within- and between-channel spatial interactions. An automatic and adaptive approach for threshold selection is also proposed. We conduct classification experiments on 5 well-known datasets: Vistex, Usptex, Outex13, CUReT, and MBT. Results are compared with various literature methods, including deep convolutional neural networks with pre-trained architectures. The proposed method achieved the highest overall performance over the 5 datasets, with 97.7% mean accuracy against the 97.0% achieved by the ResNet convolutional neural network with 50 layers. (Comment: 20 pages, 7 figures, and 4 tables.)
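    The single-layer building block of this construction — pixels as vertices, with an edge when two pixels are within a radius and their intensity difference is below a threshold — can be sketched as below. The cross-channel (between-layer) edges and the descriptor statistics are omitted, and the function name is illustrative.

```python
def network_degrees(channel, radius, threshold):
    """Model one color channel as a graph and return each pixel's degree.

    Two pixels are linked if their Euclidean distance on the grid is at
    most `radius` and their intensity difference is at most `threshold`.
    `channel` is a 2-D list of floats in [0, 1]. Degree statistics over
    varying (radius, threshold) pairs are typical network descriptors.
    """
    h, w = len(channel), len(channel[0])
    deg = [[0] * w for _ in range(h)]
    pixels = [(y, x) for y in range(h) for x in range(w)]
    for a, (y1, x1) in enumerate(pixels):
        for y2, x2 in pixels[a + 1:]:
            close = ((y1 - y2) ** 2 + (x1 - x2) ** 2) ** 0.5 <= radius
            similar = abs(channel[y1][x1] - channel[y2][x2]) <= threshold
            if close and similar:
                deg[y1][x1] += 1
                deg[y2][x2] += 1
    return deg
```

    Uniform regions yield high degrees (many similar neighbors), while edges and fine texture yield low ones, so the degree distribution varies with texture.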

    Color-to-Grayscale: Does the Method Matter in Image Recognition?

    In image recognition it is often assumed that the method used to convert color images to grayscale has little impact on recognition performance. We compare thirteen different grayscale algorithms with four types of image descriptors and demonstrate that this assumption is wrong: not all color-to-grayscale algorithms work equally well, even when using descriptors that are robust to changes in illumination. The methods are tested using a modern descriptor-based image recognition framework on face, object, and texture datasets with relatively few training instances. We identify a simple method that generally works best for face and object recognition, and two that work well for recognizing textures.
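    Three of the common conversion families such a comparison covers can be sketched as follows; the paper tests thirteen algorithms, and the method names here are illustrative.

```python
def to_gray(pixel, method="luminance"):
    """Convert an (r, g, b) pixel with components in [0, 1] to grayscale.

    Three common conversions that can produce visibly different images:
    an unweighted mean, perceptually weighted luminance, and HSL lightness.
    """
    r, g, b = pixel
    if method == "average":       # unweighted channel mean
        return (r + g + b) / 3.0
    if method == "luminance":     # ITU-R BT.601 perceptual weights
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "lightness":     # midpoint of extreme channels (HSL L)
        return (max(r, g, b) + min(r, g, b)) / 2.0
    raise ValueError(f"unknown method: {method}")
```

    A saturated red pixel, for instance, maps to three different gray levels under these three methods, which is exactly why the choice can shift descriptor statistics downstream.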