
    Combined statistical and model based texture features for improved image classification

This paper aims to improve the accuracy of texture classification by extracting texture features with five different texture measures and classifying the patterns with a naive Bayesian classifier. Three statistical and two model-based methods are used to extract texture features from eight different texture images, and their accuracy is ranked after using each method individually and in pairs. Accuracy improved to 97.01% when the two model-based methods, Gaussian Markov random field (GMRF) and fractional Brownian motion (fBm), were used together, compared with the highest accuracy achieved by any of the five methods alone; this pairing also classified better than the statistical methods. Combining GMRF with the statistical methods, grey level co-occurrence matrix (GLCM) and run-length matrix (RLM), improved the overall accuracy to 96.94% and 96.55%, respectively.
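A minimal illustration (not the paper's code) of this kind of feature pairing: GLCM statistics combined with a crude fBm-style roughness proxy, fed to a Gaussian naive Bayes classifier. The patch format, the chosen GLCM properties, and the roughness estimate are assumptions; a proper GMRF or fBm parameter fit would replace the proxy.

```python
# Sketch: statistical (GLCM) + model-based (rough fBm-style) texture features
# with a naive Bayes classifier. Illustrative assumptions throughout.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def glcm_features(patch):
    # 8-bit patch assumed; one distance, four angles, properties averaged
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy")]

def roughness_feature(patch):
    # Crude fBm-style roughness proxy: compare mean absolute increments at lags 1 and 2
    p = patch.astype(float)
    d1 = np.abs(np.diff(p, axis=0)).mean()
    d2 = np.abs(p[2:] - p[:-2]).mean()
    return [np.log(d2 + 1e-9) - np.log(d1 + 1e-9)]   # Hurst-like slope indicator

def features(patch):
    return glcm_features(patch) + roughness_feature(patch)

def train_and_score(train_patches, train_labels, test_patches, test_labels):
    # patches: 2D uint8 arrays; labels: texture class per patch (user-supplied)
    clf = GaussianNB()
    clf.fit([features(p) for p in train_patches], train_labels)
    return clf.score([features(p) for p in test_patches], test_labels)
```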

Modern technologies of restructured meat products

Semi-variogram estimators and distortion measures of signal spectra are utilized in this paper for image texture retrieval. On the complete Brodatz database, most high retrieval rates reported so far rely on multiple features and combinations of multiple algorithms, while classification using a single feature remains a challenge for retrieving diverse texture images. The semi-variogram, which is theoretically sound and a cornerstone of spatial statistics, spans the range between pure randomness and complete determinism, and can therefore serve as a useful tool for both structural and statistical analysis of texture images. Spectral distortion measures derived from the theory of linear predictive coding, in turn, provide a rigorous mathematical model for signal-based similarity matching and have proven useful in many practical pattern classification systems. Experimental results obtained by testing the proposed approach on the complete Brodatz database and the UIUC texture database suggest its effectiveness as a single-feature-based dissimilarity measure for real-time texture retrieval.
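A minimal sketch (not the paper's implementation) of an empirical image semi-variogram and a single-feature dissimilarity built from it; the horizontal-only lags and the L1 distance between variograms are arbitrary illustrative choices.

```python
# Empirical semi-variogram: gamma(h) = 0.5 * mean squared difference of
# pixel pairs separated by lag h (horizontal direction only, for brevity).
import numpy as np

def semivariogram(img, max_lag=32):
    img = img.astype(float)
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        diff = img[:, h:] - img[:, :-h]          # all horizontal pairs at lag h
        gamma[h - 1] = 0.5 * np.mean(diff ** 2)
    return gamma

def variogram_distance(img_a, img_b, max_lag=32):
    # Simple dissimilarity between two textures: L1 distance of their variograms
    return np.abs(semivariogram(img_a, max_lag) - semivariogram(img_b, max_lag)).sum()
```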

    Comparative performance analysis of texture characterization models in DIRSIG

The analysis and quantitative measurement of image texture is a complex and intriguing problem that has recently received considerable attention from the diverse fields of computer graphics, human vision, biomedical imaging, computer science, and remote sensing. In particular, textural feature quantification and extraction are crucial tasks for each of these disciplines, and numerous techniques have accordingly been developed to segment or classify images based on texture, as well as to synthesize textures. However, validation and performance analysis of these texture characterization models has been largely qualitative, based on visual inspection of synthetic textures to judge their degree of similarity to the original sample texture imagery. In this work, four fundamentally different texture modeling algorithms have been implemented as necessary into the Digital Imaging and Remote Sensing Synthetic Image Generation (DIRSIG) model. Two of the models tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis approach and a spectral end-member fractional abundance map approach, respectively. A detailed validation and comparative performance analysis of each model was then carried out on several texturally significant regions of a pair of corresponding real and synthetic DIRSIG images with differing spatial and spectral resolutions. The quantitative assessment of each model utilized a set of four performance metrics derived from spatial Gray Level Co-occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, mean filter (MF) spatial metrics, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. In combination, these performance measures attempt to determine which texture characterization model best captures the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that accurately accounts for these complexities can eventually be implemented in a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the existing texture models achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
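One GLCM-based way to compare a real and a synthetic patch is sketched below, purely as an illustration of the kind of spatial metric the abstract describes; the property set, offsets, and Euclidean distance are assumptions, not the study's metrics.

```python
# Sketch: distance between GLCM statistics of a real and a synthetic patch
# covering the same region. Smaller values mean closer spatial texture statistics.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_stats(patch):
    # 8-bit patch assumed; three offsets, two directions
    glcm = graycomatrix(patch, distances=[1, 2, 4], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.array([graycoprops(glcm, p).mean() for p in props])

def glcm_discrepancy(real_patch, synthetic_patch):
    return np.linalg.norm(glcm_stats(real_patch) - glcm_stats(synthetic_patch))
```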

    A Self-Organizing Neural System for Learning to Recognize Textured Scenes

A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that change gradually across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, the class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data and is benchmarked on the classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers. (Defense Advanced Research Projects Agency; Office of Naval Research N00014-95-1-0409, N00014-95-1-0657.)
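To give a flavor of the match/search behavior described above (a poor prototype match triggering a search and, if needed, a new category), here is a bare-bones fuzzy ART learner; it is not ARTEX, omits FACADE preprocessing, category naming, and probabilities, and its parameter values are placeholders.

```python
# Sketch of fuzzy ART category learning: choice function, vigilance test,
# prototype update on resonance, and creation of a new category on mismatch.
import numpy as np

class FuzzyART:
    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = vigilance, alpha, beta
        self.weights = []                       # one prototype per learned category

    def _code(self, x):
        x = np.asarray(x, dtype=float)          # features assumed scaled to [0, 1]
        return np.concatenate([x, 1.0 - x])     # complement coding

    def learn(self, x):
        i = self._code(x)
        # Rank existing categories by the choice (Weber law) function
        order = sorted(range(len(self.weights)),
                       key=lambda j: -np.minimum(i, self.weights[j]).sum()
                                      / (self.alpha + self.weights[j].sum()))
        for j in order:
            w = self.weights[j]
            if np.minimum(i, w).sum() / i.sum() >= self.rho:   # vigilance test
                # Resonance: move the prototype toward the fuzzy AND of input and prototype
                self.weights[j] = self.beta * np.minimum(i, w) + (1 - self.beta) * w
                return j
        self.weights.append(i.copy())            # poor match everywhere: new category
        return len(self.weights) - 1
```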

    Fuzzy sets on 2D spaces for fineness representation

The analysis of the perceptual properties of texture plays a fundamental role in tasks such as semantic description of images, content-based image retrieval using linguistic queries, and the design of expert systems based on low-level visual features. In this paper, we propose a methodology to model texture properties by means of fuzzy sets defined on two-dimensional spaces. In particular, we focus on the fineness property, considered the most important feature for human visual interpretation. In our approach, pairwise combinations of fineness measures are used as the reference set, which improves the ability to capture the presence of this property. To obtain the membership functions, we propose to learn the relationship between the computational values given by the measures and the human perception of fineness. The performance of each fuzzy set is analyzed and tested against human assessments, allowing us to evaluate the goodness of each model and to identify the most suitable combination of measures for representing the presence of fineness.
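As a hypothetical sketch of learning a membership function from human assessments, the following fits a logistic surface over a pair of fineness measures; the logistic form, the optimizer, and the rescaling of human scores to [0, 1] are assumptions rather than the paper's method.

```python
# Sketch: fit a 2D fuzzy membership function for "fineness" from a pair of
# computational measures and mean human assessments in [0, 1].
import numpy as np
from scipy.optimize import curve_fit

def membership(xy, a, b, c):
    m1, m2 = xy
    return 1.0 / (1.0 + np.exp(-(a * m1 + b * m2 + c)))   # logistic surface

def fit_fineness_membership(measure1, measure2, human_scores):
    # measure1, measure2: computational fineness measures per texture image
    # human_scores: mean perceived fineness per image, rescaled to [0, 1]
    params, _ = curve_fit(membership,
                          (np.asarray(measure1, float), np.asarray(measure2, float)),
                          np.asarray(human_scores, float),
                          p0=(1.0, 1.0, 0.0), maxfev=10000)
    return lambda m1, m2: membership((m1, m2), *params)   # learned membership function
```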

    Ensemble Classifications of Wavelets based GLCM Texture Feature from MR Human Head Scan Brain Slices Analysis

This paper presents an automatic image analysis of multi-model views of MR brain images using ensemble classification of wavelet-based GLCM texture features. First, the input MR image is pre-processed for enhancement. The pre-processed image is then decomposed into frequency sub-band images using the 2D stationary and discrete wavelet transforms. GLCM texture features are extracted from the low-frequency sub-band images of the 2D discrete and stationary wavelet transforms. The extracted texture features are given as input to ensemble classifiers, Gentle Boost and Bagged Tree, to recognize the appropriate image samples. Image abnormality is then extracted from the abnormal image samples recognized by the classifiers using multi-level Otsu thresholding. Finally, the performance of the two ensemble classifiers is analyzed using sensitivity, specificity, accuracy, and MCC measures for the two different wavelet-based GLCM texture features. The proposed feature extraction technique achieves a maximum accuracy of 90.70% with an MCC value of 0.78.
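A rough sketch of such a pipeline, under stated substitutions: PyWavelets for the 2D DWT, GLCM properties from the approximation band, and a bagged decision-tree ensemble standing in for both classifiers (Gentle Boost has no direct scikit-learn equivalent); enhancement and the Otsu-based abnormality extraction are omitted.

```python
# Sketch: DWT low-frequency band -> GLCM features -> bagged-tree ensemble.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import BaggingClassifier

def wavelet_glcm_features(slice_img, wavelet="db2"):
    cA, _ = pywt.dwt2(slice_img.astype(float), wavelet)   # keep approximation band
    band = (255 * (cA - cA.min()) / (np.ptp(cA) + 1e-9)).astype(np.uint8)
    glcm = graycomatrix(band, distances=[1], angles=[0, np.pi/2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return [graycoprops(glcm, p).mean() for p in props]

def train_brain_slice_classifier(slices, labels):
    # slices: list of 2D MR slice arrays; labels: normal / abnormal per slice
    X = [wavelet_glcm_features(s) for s in slices]
    clf = BaggingClassifier(n_estimators=50)   # default base estimator is a decision tree
    return clf.fit(X, labels)
```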

    Performance Analysis of Improved Methodology for Incorporation of Spatial/Spectral Variability in Synthetic Hyperspectral Imagery

Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid algorithm development by providing a wide range of ground-truthed images with varying illumination, atmospheric, viewing, and scene conditions. One limitation of synthetic data is that the background variability is often too bland: it does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis approach and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric, which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative, based on visual inspection of synthetic textures to judge their degree of similarity to the original sample texture imagery. In combination, the quantitative measures used in this study will attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that accurately accounts for these complexities can eventually be implemented in a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
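As an illustration of the spectral clutter side, the sketch below computes one common form of signal-to-clutter ratio, the Mahalanobis distance of a target spectrum from background clutter statistics; this is an assumed definition, not necessarily the one used in the study.

```python
# Sketch: Mahalanobis-style signal-to-clutter ratio for a hyperspectral target
# spectrum against background clutter pixels (assumes an invertible covariance).
import numpy as np

def signal_to_clutter_ratio(target_spectrum, background_pixels):
    # background_pixels: (n_pixels, n_bands) array drawn from the clutter region
    mu = background_pixels.mean(axis=0)
    cov = np.cov(background_pixels, rowvar=False)
    d = np.asarray(target_spectrum, dtype=float) - mu
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```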

    A similarity-based approach to perceptual feature validation

Which object properties matter most in human perception may well vary according to sensory modality, an important consideration for the design of multimodal interfaces. In this study, we present a similarity-based method for comparing the perceptual importance of object properties across modalities and show how it can also be used to perceptually validate computational measures of object properties. Similarity measures for a set of three-dimensional (3D) objects varying in shape and texture were gathered from humans in two modalities (vision and touch) and derived from a set of standard 2D and 3D computational measures (image and mesh subtraction, object perimeter, curvature, Gabor jet filter responses, and the Visual Difference Predictor (VDP)). Multidimensional scaling (MDS) was then performed on the similarity data to recover configurations of the stimuli in 2D perceptual/computational spaces. These two dimensions corresponded to the two dimensions of variation in the stimulus set: shape and texture. In the human visual space, shape strongly dominated texture. In the human haptic space, shape and texture were weighted roughly equally. Weights varied considerably across subjects in the haptic experiment, indicating that different strategies were used. Maps derived from shape-dominated computational measures provided good fits to the human visual map. No single computational measure provided a satisfactory fit to the map derived from mean human haptic data, though good fits were found for individual subjects; a combination of measures with individually adjusted weights may be required to model the human haptic similarity judgments. Our method provides a high-level approach to perceptual validation, which can be applied in both unimodal and multimodal interface design.
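A minimal sketch of the MDS step, assuming a precomputed symmetric dissimilarity matrix (e.g., averaged human ratings converted to dissimilarities); metric MDS with two components stands in for whatever MDS variant the study used.

```python
# Sketch: recover a 2D perceptual configuration from pairwise dissimilarities.
import numpy as np
from sklearn.manifold import MDS

def perceptual_map(dissimilarity_matrix, seed=0):
    # dissimilarity_matrix: symmetric (n_stimuli, n_stimuli) array, zero diagonal
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(np.asarray(dissimilarity_matrix, dtype=float))
```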