
    Natural Scene Image Modeling using Color and Texture Visterms.

    This paper presents a novel approach for visual scene representation, combining the use of quantized color and texture local invariant features (referred to here as visterms) computed over interest point regions. In particular, we investigate different ways to fuse local texture and color information in order to provide a better visterm representation. We develop and test our methods on the task of image classification using a 6-class natural scene database. We perform classification based on the bag-of-visterms (BOV) representation (a histogram of quantized local descriptors), extracted from both texture and color features. We investigate two different fusion approaches at the feature level: fusing local descriptors together to create one representation of joint texture-color visterms, or concatenating the histogram representations of color and texture, each obtained independently from its own local features. On our classification task we show that the appropriate use of color improves the results with respect to a texture-only representation.
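    The two fusion strategies described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the descriptor dimensions, vocabulary size, and random "vocabularies" (normally learned with k-means) are all placeholder assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def quantize(descriptors, centroids):
        # Assign each local descriptor to its nearest visual word (visterm).
        d = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    def bov_histogram(descriptors, centroids):
        # Bag-of-visterms: normalized histogram of quantized local descriptors.
        words = quantize(descriptors, centroids)
        h = np.bincount(words, minlength=len(centroids)).astype(float)
        return h / h.sum()

    # Toy data: 200 interest points with 8-dim texture and 3-dim color
    # descriptors (dimensions chosen only for illustration).
    texture = rng.normal(size=(200, 8))
    color = rng.normal(size=(200, 3))

    # Early fusion: joint texture-color descriptors, one shared vocabulary.
    joint = np.hstack([texture, color])
    joint_vocab = joint[rng.choice(len(joint), 16, replace=False)]
    h_early = bov_histogram(joint, joint_vocab)               # 16 bins

    # Late fusion: independent vocabularies, concatenated histograms.
    tex_vocab = texture[rng.choice(len(texture), 16, replace=False)]
    col_vocab = color[rng.choice(len(color), 16, replace=False)]
    h_late = np.concatenate([bov_histogram(texture, tex_vocab),
                             bov_histogram(color, col_vocab)])  # 32 bins
    ```

    Early fusion yields one histogram over joint words, while late fusion yields a longer vector of two independent histograms; either can feed a standard classifier.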

    BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis

    Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowdsourced images and videos in crisis management systems can aid in these situations by providing more information than verbal/textual descriptions. Due to the usually high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video that use color-based models. However, they are not adequate for still-image processing, because they can suffer from high false-positive rates. These methods also rely on parameters with little physical meaning, which makes fine tuning a difficult task. In this context, we propose a novel fire detection method for still images that combines classification based on color features with texture classification on superpixel regions. Our method uses a reduced number of parameters compared to previous works, easing the process of fine tuning. Results show the effectiveness of our method in reducing false positives while its precision remains comparable with state-of-the-art methods. (8 pages; Proceedings of the 28th SIBGRAPI Conference on Graphics, Patterns and Images, IEEE Press.)
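    As a point of reference for the color-based models the abstract contrasts against, a classic pixel-level color rule can be sketched as follows. This is a generic heuristic, not the BoWFire classifier itself, and the `r_min` threshold is an illustrative value.

    ```python
    import numpy as np

    def fire_color_mask(rgb, r_min=180):
        # Common color rule in fire detectors (not the exact BoWFire method):
        # fire pixels tend to satisfy R > G > B with a sufficiently bright
        # red channel. r_min = 180 is only an illustrative threshold.
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        return (r > r_min) & (r > g) & (g > b)

    img = np.zeros((4, 4, 3), dtype=np.uint8)
    img[0, 0] = (220, 120, 40)   # flame-like pixel
    img[1, 1] = (40, 120, 220)   # sky-like pixel
    mask = fire_color_mask(img)
    ```

    Rules like this fire on many red/orange non-fire objects, which is exactly the false-positive problem that combining color with texture analysis over superpixels aims to reduce.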

    Implementation of a Comparison of Edge Detection on Digital Images Using the Roberts, Sobel, Prewitt, and Canny Methods

    The field of digital image processing, and segmentation in particular, has become a widely discussed topic. Segmentation aims to divide an image into non-overlapping parts or regions with similar characteristics, such as color, shape, texture, and intensity. The segmentation process is generally divided into three groups: classification-based segmentation, edge-based segmentation, and region-based segmentation. Edge detection is a systematic process used to detect pixels in a digital image where the brightness level changes sharply along a line or curve. The purpose of this study is to compare edge detection methods using image objects. This research applies the Roberts, Prewitt, Sobel, and Canny methods to detect the number of white pixels in each image. The tool used in this research is Simulink in MATLAB, where the parameters of each algorithm are compared. The total number of white pixels is then calculated for each edge detection method.
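    The white-pixel-count comparison described above can be sketched in plain numpy. This is a minimal re-creation under stated assumptions (the study itself uses Simulink/MATLAB): two of the four operators are shown, and the 0.5 threshold and the synthetic step-edge image are placeholders.

    ```python
    import numpy as np

    def convolve2d(img, k):
        # Minimal 'valid'-mode 2-D correlation for small kernels.
        kh, kw = k.shape
        out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = (img[i:i+kh, j:j+kw] * k).sum()
        return out

    def edge_white_pixels(img, gx, gy, thresh):
        # Gradient magnitude, then count pixels above threshold ("white").
        mag = np.hypot(convolve2d(img, gx), convolve2d(img, gy))
        return int((mag > thresh).sum())

    # Kernels for two of the four compared operators.
    roberts_x = np.array([[1., 0.], [0., -1.]])
    roberts_y = np.array([[0., 1.], [-1., 0.]])
    sobel_x = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.T

    # Synthetic image: dark left half, bright right half -> one vertical edge.
    img = np.zeros((8, 8))
    img[:, 4:] = 1.0
    n_roberts = edge_white_pixels(img, roberts_x, roberts_y, 0.5)  # 7
    n_sobel = edge_white_pixels(img, sobel_x, sobel_y, 0.5)        # 12
    ```

    The counts differ (7 vs. 12 here) because the 3x3 Sobel kernels respond across a wider band around the edge than the 2x2 Roberts kernels, which is the kind of per-operator difference the study measures.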

    Fast unsupervised multiresolution color image segmentation using adaptive gradient thresholding and progressive region growing

    In this thesis, we propose a fast unsupervised multiresolution color image segmentation algorithm which takes advantage of gradient information in an adaptive and progressive framework. This gradient-based segmentation method is initialized by a vector gradient calculation on the full-resolution input image in the CIE L*a*b* color space. The resultant edge map is used to adaptively generate thresholds for classifying regions of varying gradient densities at different levels of the input image pyramid, obtained through a dyadic wavelet decomposition scheme. At each level, the classification obtained by a progressively thresholded growth procedure is combined with an entropy-based texture model in a statistical merging procedure to obtain an interim segmentation. Utilizing an association of a gradient-quantized confidence map and non-linear spatial filtering techniques, regions of high confidence are passed from one level to another until the full-resolution segmentation is achieved. Evaluation of our results on several hundred images using the Normalized Probabilistic Rand (NPR) index shows that our algorithm outperforms state-of-the-art segmentation techniques and is much more computationally efficient than its single-scale counterpart, with comparable segmentation quality.
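    The adaptive thresholding step described above can be illustrated in isolation. This sketch covers only that one step, not the full multiresolution pipeline: the percentile-based rule and the `pct=80` value are assumptions standing in for the thesis's edge-map-driven threshold generation.

    ```python
    import numpy as np

    def adaptive_gradient_threshold(gray, pct=80.0):
        # Derive a per-image threshold from the gradient-magnitude
        # distribution, so images with dense gradients automatically get
        # higher thresholds. pct is an illustrative parameter, not the
        # thesis's actual rule.
        gy, gx = np.gradient(gray.astype(float))
        mag = np.hypot(gx, gy)
        t = np.percentile(mag, pct)
        low = mag <= t          # low-gradient pixels seed region growing
        return t, low

    rng = np.random.default_rng(1)
    gray = rng.random((32, 32))          # stand-in for a grayscale image
    t, seeds = adaptive_gradient_threshold(gray)
    ```

    Seeding growth from low-gradient (homogeneous) pixels and deferring high-gradient pixels to later, higher-confidence decisions is the general idea behind progressive region growing.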

    Perceptual-based textures for scene labeling: a bottom-up and a top-down approach

    Due to the semantic gap, the automatic interpretation of digital images is a very challenging task. Both segmentation and classification are intricate because of the high variation in the data. Therefore, the application of appropriate features is of the utmost importance. This paper presents biologically inspired texture features for material classification and for interpreting outdoor scenery images. Experiments show that the presented texture features obtain the best classification results for material recognition compared to other well-known texture features, with an average classification rate of 93.0%. For scene analysis, both a bottom-up and a top-down strategy are employed to bridge the semantic gap. First, images are segmented into regions based on the perceptual texture; next, a semantic label is calculated for these regions. Since this emerging interpretation is still error prone, domain knowledge is incorporated to achieve a more accurate description of the depicted scene. By applying both strategies, 91.9% of the pixels from outdoor scenery images obtained a correct label.
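    The interplay of the two strategies can be caricatured in a few lines: a bottom-up classifier proposes per-region labels, and a top-down domain rule corrects implausible ones. The labels, confidences, and the "sky appears above the horizon" rule are hypothetical examples, not the paper's actual knowledge base.

    ```python
    # Bottom-up output: region -> (label, confidence). Values are made up.
    bottom_up = {"region_a": ("sky", 0.9), "region_b": ("sky", 0.4)}
    centroid_row = {"region_a": 10, "region_b": 100}  # larger = lower in image

    def apply_domain_rule(labels, rows, horizon=50):
        # Top-down step: low-confidence "sky" below an assumed horizon line
        # is relabeled as vegetation. Purely illustrative domain knowledge.
        out = {}
        for region, (lab, conf) in labels.items():
            if lab == "sky" and conf < 0.5 and rows[region] > horizon:
                out[region] = ("vegetation", conf)
            else:
                out[region] = (lab, conf)
        return out

    refined = apply_domain_rule(bottom_up, centroid_row)
    ```

    The high-confidence sky region survives, while the dubious one low in the image is relabeled; richer systems encode many such spatial and co-occurrence constraints.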