
    Analysis of GLCM Parameters for Textures Classification on UMD Database Images

    Texture analysis is one of the most important techniques used in image processing for many purposes, including image classification. Texture characterizes the regions of a given gray-level image and reflects their relevant information. Several analysis methods have been invented and developed to deal with texture in recent years, each with its own way of extracting features from the texture. These methods can be divided into two main approaches: statistical methods and processing methods. The Gray Level Co-occurrence Matrix (GLCM), which has been a popular texture-analysis method for three decades, is the most widely used statistical method for extracting features from texture. In addition to the GLCM, a number of Haralick feature equations are used in this study to compute values that serve as discriminative features among different images. Many GLCM parameters should be taken into consideration to increase the discrimination between images belonging to different classes, and this study aims to evaluate those parameters. A neural network, a supervised learning method, is used as the classifier. Finally, the image database for this study is prepared from the UMD (University of Maryland) database.
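
    As a rough illustration of the pipeline this abstract describes, the sketch below computes a compact GLCM and a few Haralick-style features with scikit-image. It is a minimal sketch under assumptions: the paper names no implementation, and the 64-level quantization, distance, and angle settings are illustrative choices, not the study's exact parameters.

        # Minimal GLCM/Haralick feature sketch (scikit-image assumed;
        # parameter choices are illustrative, not the paper's).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(image, levels=64):
            # Quantize the 8-bit image down to `levels` gray levels.
            img = (image // (256 // levels)).astype(np.uint8)
            # One co-occurrence matrix per angle at a pixel distance of 1.
            glcm = graycomatrix(img, distances=[1],
                                angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                                levels=levels, symmetric=True, normed=True)
            # A few Haralick-style properties, averaged over the four angles.
            return {prop: graycoprops(glcm, prop).mean()
                    for prop in ('contrast', 'homogeneity', 'energy', 'correlation')}

    One such feature vector per image could then be fed to the neural-network classifier the abstract mentions.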

    GLCM-based chi-square histogram distance for automatic detection of defects on patterned textures

    The chi-square histogram distance is one of the distance measures that can be used to quantify the dissimilarity between two histograms. Motivated by the fact that texture discrimination in the human visual system is based on second-order statistics, we make use of the gray-level co-occurrence matrix (GLCM), which captures second-order statistics, and propose a new machine-vision algorithm for automatic defect detection on patterned textures. Input defective images are split into several periodic blocks, and GLCMs are computed after quantizing the gray levels from 0-255 to 0-63 to keep the GLCM compact and to reduce computation time. The dissimilarity matrix derived from the chi-square distances between the GLCMs is subjected to hierarchical clustering to automatically identify defective and defect-free blocks. The effectiveness of the proposed method is demonstrated through experiments on defective real-fabric images of two major wallpaper groups (pmm and p4m). Comment: IJCVR, Vol. 2, No. 4, 2011, pp. 302-31
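
    A minimal sketch of the distance and clustering steps is given below, assuming NumPy and SciPy; the periodic-block splitting and per-block GLCM computation are omitted, and the function names are hypothetical.

        # Chi-square GLCM dissimilarity + hierarchical clustering sketch
        # (SciPy assumed; block extraction omitted for brevity).
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        def chi_square(h1, h2, eps=1e-10):
            # Chi-square distance between two normalized GLCMs (flattened).
            h1, h2 = h1.ravel(), h2.ravel()
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        def cluster_blocks(glcms, n_clusters=2):
            # Pairwise chi-square distances between the per-block GLCMs.
            n = len(glcms)
            d = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    d[i, j] = d[j, i] = chi_square(glcms[i], glcms[j])
            # Agglomerative clustering separates defective from defect-free blocks.
            z = linkage(squareform(d), method='average')
            return fcluster(z, n_clusters, criterion='maxclust')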

    LANDSAT-D investigations in snow hydrology

    Work undertaken during the contract and its results are described. Many of the results from this investigation are available in the journal or conference-proceedings literature, whether published, accepted for publication, or submitted for publication; for these, the reference and abstract are given. Results that have not yet been submitted separately for publication are described in detail. Accomplishments during the contract period are summarized as follows: (1) analysis of the snow reflectance characteristics of the LANDSAT Thematic Mapper, including spectral suitability, dynamic range, and spectral resolution; (2) development of a variety of atmospheric models for use with LANDSAT Thematic Mapper data, including a simple but fast two-stream approximation for inhomogeneous atmospheres over irregular surfaces and a doubling model for calculating the angular distribution of spectral radiance at any level in a plane-parallel atmosphere; (3) incorporation of digital elevation data into the atmospheric models and into the analysis of the satellite data; and (4) textural analysis of the spatial distribution of snow cover.

    A dynamic texture based approach to recognition of facial actions and their temporal models

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images (MHIs) and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domains. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed approach achieved an average event recognition accuracy of 89.2 percent with the MHI method and 94.3 percent with the FFD method. The generalization performance of the FFD method was tested using the Cohn-Kanade database. Finally, we also explored performance on spontaneous expressions in the Sensitive Artificial Listener data set.
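
    The paper's extended Motion History Images are not reproduced here, but the standard MHI recurrence they build on is compact enough to sketch; the following NumPy-only version is an illustration under assumptions, with the threshold and decay values chosen arbitrarily.

        # Standard MHI update (a sketch of the classic recurrence, not the
        # paper's extended variant; parameter values are arbitrary).
        import numpy as np

        def update_mhi(mhi, frame, prev_frame, tau=255, delta=1, threshold=30):
            # Mark pixels whose intensity changed by more than `threshold`.
            diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
            motion = diff > threshold
            # Moving pixels get the maximal timestamp `tau`; elsewhere the
            # history decays by `delta` toward zero.
            decayed = np.maximum(mhi.astype(np.int16) - delta, 0)
            return np.where(motion, tau, decayed).astype(np.uint8)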

    The Measurement of Bone Quality in Medical Images Using Statistical Textural Features

    Mineral density and bone architecture properties are the main measures of bone quality. Dual-energy X-ray Absorptiometry (DXA) is the traditional clinical measurement technique for bone mineral density, but it is insensitive to architectural information. Image analysis of the architectural properties of bone can be used to predict bone quality. This study investigates statistical parameters extracted from two-dimensional projection images of DXA scans, exploring their link with architectural properties and their correlation with a bone's mechanical properties. In this research, features extracted from the Gray Level Co-occurrence Matrix (GLCM) of a 2D image are compared with features extracted from semivariogram analysis in order to estimate bone micro-architectural and mechanical properties. Data analysis was conducted on 13 trabecular bones of different strengths (with an in-plane spatial resolution of about 50 μm). Ground-truth data for bone volume fraction (BV/TV), bone strength, and elasticity were available for the dataset, based on complex 3D analysis and mechanical tests. Correlation between the statistical parameters and the biomechanical test results was studied using regression analysis. The results showed that the cluster-shade parameter extracted from the GLCM was strongly correlated with the microstructure of the trabecular bone and also somewhat related to the mechanical properties. Additionally, a parameter called 'sill', obtained by the semivariogram method, was found to be highly associated with the mechanical properties of the bone and slightly related to its microarchitectural properties.
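
    Since cluster shade is not among the properties that common GLCM libraries expose, a direct computation may make the feature concrete; the sketch below assumes the standard textbook definition applied to a normalized GLCM.

        # Cluster shade of a normalized GLCM p (p.sum() == 1), computed
        # from the usual definition; a sketch, not the paper's exact code.
        import numpy as np

        def cluster_shade(p):
            i, j = np.indices(p.shape)
            mu_i = np.sum(i * p)  # mean row index
            mu_j = np.sum(j * p)  # mean column index
            return np.sum(((i + j - mu_i - mu_j) ** 3) * p)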

    Visual and Contextual Modeling for the Detection of Repeated Mild Traumatic Brain Injury.

    Currently, there is a lack of computational methods for the evaluation of mild traumatic brain injury (mTBI) from magnetic resonance imaging (MRI). Further, the development of automated analyses has been hindered by the subtle nature of mTBI abnormalities, which appear as low-contrast MR regions. This paper proposes an approach that detects mTBI lesions by combining high-level context with low-level visual information. The contextual model estimates the progression of the disease using subject information, such as the time since injury and knowledge about the location of mTBI. The visual model uses texture features in MRI along with a probabilistic support vector machine to maximize the discrimination in unimodal MR images. These two models are fused to obtain a final estimate of the locations of the mTBI lesion. The models are tested on a dataset from a novel rodent model of repeated mTBI. The experimental results demonstrate that the fusion of contextual and visual textural features outperforms other state-of-the-art approaches. Clinically, our approach has the potential to benefit clinicians by speeding diagnosis and patients by improving clinical care.
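
    As a hedged sketch of the fusion idea, the code below scores texture features with a probabilistic (Platt-scaled) SVM from scikit-learn and weights the result by a contextual prior; the product fusion rule and the prior itself are simplifications, not the paper's actual models.

        # Probabilistic-SVM visual score fused with a contextual prior
        # (a simplification of the paper's models; labels assumed in {0, 1}).
        import numpy as np
        from sklearn.svm import SVC

        def fuse_scores(train_feats, train_labels, test_feats, context_prior):
            # Platt scaling gives P(lesion | texture) per test patch.
            clf = SVC(kernel='rbf', probability=True).fit(train_feats, train_labels)
            p_visual = clf.predict_proba(test_feats)[:, 1]
            # Simple product fusion with the contextual prior, renormalized.
            fused = p_visual * context_prior
            return fused / (fused.max() + 1e-10)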

    Visual quality of printed surfaces: Study of homogeneity

    This paper introduces a homogeneity assessment method for printed versions of uniform color images. This attribute has been specifically selected as one of the relevant indicators of printing quality. The method relies on image-processing algorithms applied to a scanned image of the printed surface, in particular the computation of gray-level co-occurrence matrices and of an objective homogeneity attribute inspired by Haralick's parameters. The viewing distance is also taken into account when computing the homogeneity index: the scanned image is resized and filtered in order to keep only the level of detail visible to a standard human observer at short and long distances. The combination of the homogeneity scores obtained on the high- and low-resolution images provides a homogeneity index, which can be computed for any printed version of a uniform digital image. We tested the method on several hard copies of the same image and compared the scores to empirical evaluations carried out by non-expert observers, who were asked to sort the samples and place them on a metric scale. Our experiments show a good match between the observers' sorting and the score computed by our algorithm.
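
    A rough sketch of the two-resolution idea: score GLCM homogeneity on the scanned print at full and reduced scale (the latter simulating a longer viewing distance) and combine the two scores. The scikit-image functions, the rescale factor, and the plain averaging used here are assumptions, not the paper's exact index.

        # Two-scale homogeneity sketch (scikit-image assumed; the 0.25
        # rescale factor and the mean combination are illustrative).
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from skimage.transform import rescale

        def homogeneity_index(scan, levels=64):
            def h(img):
                q = np.clip(img, 0, 255).astype(np.uint8) // (256 // levels)
                glcm = graycomatrix(q, [1], [0, np.pi/2], levels=levels,
                                    symmetric=True, normed=True)
                return graycoprops(glcm, 'homogeneity').mean()
            near = h(scan)  # detail visible at a short viewing distance
            far = h(rescale(scan, 0.25, anti_aliasing=True) * 255)  # long distance
            return 0.5 * (near + far)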