Recognition and Classification of Ancient Dwellings based on Elastic Grid and GLCM
A rectangle algorithm is designed to extract ancient dwellings from village satellite images according to their pixel features and shape features. Objects that remain unrecognized are then distinguished by further extracting their texture features. To obtain standardized samples, three pre-processing operations, rotation, scaling, and clipping, are designed to unify the objects' sizes and orientations.
Texture features extraction based on GLCM for face retrieval system
Texture features play an important role in most image retrieval techniques in obtaining highly accurate results. In this work, a face image retrieval method based on texture analysis and statistical features is proposed. Texture features can be extracted using the GLCM tool. The GLCM calculation method here involves two phases. First, several image processing techniques are combined to isolate the main object of the face image (the center of the face image); the gray-level co-occurrence matrix (GLCM) is then computed for the gray face image, and second-order statistical texture features are extracted from it. In the second phase, facial matches are retrieved by finding the minimum distance between the texture features of an unknown face image and the texture features of the face images stored in the database system. The experimental results show that the proposed method achieves a high degree of accuracy in face image retrieval.
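As a sketch of the first phase, a GLCM for a small quantized gray image can be computed and reduced to second-order statistics in a few lines. This is a pure-Python illustration; the level count and the single (dx, dy) offset are arbitrary choices, not parameters taken from the paper:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    pairs = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in m]

def second_order_features(p):
    """Energy, contrast, and homogeneity of a normalized GLCM."""
    n = len(p)
    energy = sum(v * v for row in p for v in row)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return energy, contrast, homogeneity

# A perfectly uniform patch concentrates all co-occurrences in one cell:
flat = [[0, 0, 0], [0, 0, 0]]
print(second_order_features(glcm(flat)))  # (1.0, 0.0, 1.0)
```

For real images one would typically accumulate several offsets (0, 45, 90, 135 degrees) and average the resulting features to reduce directional bias.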
Feature Extraction of Images Texture Based on Co-occurrence Matrix
There are many techniques for extracting object properties from an image. In this research, a co-occurrence matrix has been adopted for feature extraction of English letters. English letters of size 14 in the Times New Roman font were stored as images, preprocessed by truncation to remove all blank area, and then filtered to make them noise-free. The energy, contrast, correlation, and homogeneity properties of the co-occurrence matrix were calculated for the stored character images. Further character models with different sizes and fonts were added so that the database covers a wide range of character images for character recognition and classification. The applied technique shows that a combination of features can be extracted as new properties for letter images and gives good results.
The experimental results of the proposed algorithm show that the energy and homogeneity features give higher recognition rates than the remaining properties.
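The recognition step implied above, assigning an unknown character image to the stored model whose feature vector is closest, can be sketched as a minimum-distance search. The labels and (energy, homogeneity) values below are hypothetical, purely to illustrate the matching:

```python
import math

def nearest_match(query, database):
    """Return the label whose stored feature vector has the smallest
    Euclidean distance to the query feature vector."""
    best_label, best_dist = None, float("inf")
    for label, feats in database.items():
        d = math.dist(query, feats)  # Euclidean distance (Python 3.8+)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Hypothetical (energy, homogeneity) vectors for three stored letters:
db = {"A": (0.20, 0.70), "B": (0.55, 0.40), "C": (0.90, 0.95)}
print(nearest_match((0.50, 0.45), db))  # B
```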
Fusion of Heterogeneous Earth Observation Data for the Classification of Local Climate Zones
This paper proposes a novel framework for fusing multi-temporal,
multispectral satellite images and OpenStreetMap (OSM) data for the
classification of local climate zones (LCZs). Feature stacking is the most
commonly used method of data fusion, but its main drawback is that it does not
consider the heterogeneity of multimodal optical images and OSM data. The
proposed framework processes two data sources separately and then combines them
at the model level through two fusion models (the landuse fusion model and
building fusion model), which aim to fuse optical images with landuse and
buildings layers of OSM data, respectively. In addition, a new approach to
detecting building incompleteness of OSM data is proposed. The proposed
framework was trained and tested using data from the 2017 IEEE GRSS Data Fusion
Contest, and further validated on one additional test set containing test
samples which are manually labeled in Munich and New York. Experimental results
have indicated that compared to the feature stacking-based baseline framework
the proposed framework is effective in fusing optical images with OSM data for
the classification of LCZs with high generalization capability on a large
scale. The proposed framework outperforms the baseline framework in
classification accuracy by more than 6% on the test set of the 2017 IEEE GRSS
Data Fusion Contest and by more than 2% on the additional test set.
In addition, the proposed framework is less sensitive to the spectral
diversity of optical satellite images and thus achieves more stable
classification performance than state-of-the-art frameworks.
Comment: accepted by TGR
Effective Method of Image Retrieval Using BTC with Gabor Wavelet Matrix
The emergence of multimedia technology and rapidly expanding image collections in databases have attracted significant research efforts toward tools for effective retrieval and management of visual data, driven by the need to find a desired image in a large collection. Image retrieval is the field of study concerned with searching for and retrieving digital images from a database collection. In real images, regions are often homogeneous: neighboring pixels usually have similar properties (shape, color, texture). In this paper we propose a novel image retrieval method based on Block Truncation Coding (BTC) with a Gabor wavelet co-occurrence matrix. For image retrieval, features such as shape, color, texture, spatial relations, correlation, and eigenvalues are considered. BTC can be used for grayscale as well as color images. The average precision and recall of all queries are computed and considered for performance analysis.
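The BTC building block is simple enough to sketch: one grayscale block is compressed to a bitmap plus two reconstruction levels. The variant below uses plain group means for the two levels, a common simplification; it is not the paper's full BTC-with-Gabor pipeline:

```python
def btc_encode(block):
    """Encode one block as (bitmap, low level, high level): pixels are
    thresholded at the block mean, and each group keeps its own mean."""
    mean = sum(block) / len(block)
    lo = [v for v in block if v < mean]
    hi = [v for v in block if v >= mean]
    a = sum(lo) / len(lo) if lo else mean   # reconstruction level for 0-bits
    b = sum(hi) / len(hi) if hi else mean   # reconstruction level for 1-bits
    bitmap = [1 if v >= mean else 0 for v in block]
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    """Reconstruct the block from its bitmap and two levels."""
    return [b if bit else a for bit in bitmap]

block = [10, 10, 90, 90]
print(btc_encode(block))  # ([0, 0, 1, 1], 10.0, 90.0)
```

Each pixel is thereby reduced to one bit plus two shared levels per block, which is why BTC features are cheap to compute for both grayscale and color images (one pass per channel).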
Texture Feature Extraction of Human Skin Pores Based on Histogram
Skin is generally distinguished as healthy or unhealthy; based on the pores, unhealthy skin is classified as dry, moist, or oily. Skin problems are identified from captured images. The skin image is processed using a histogram method whose aim is to obtain a skin-type pattern. The study used 7 images classified by skin type; their histograms were determined, and features of average intensity, contrast, skewness, energy, entropy, and smoothness were extracted. A reference skin type was specified as a comparator for the skin tests. The histogram-based skin features aim to determine the pore-classification pattern of human skin. In the tests, images 1, 2, and 3 tended toward normal skin (43%), images 4 and 5 toward dry skin (29%), and images 6 and 7 toward oily skin (29%). The accuracy of histogram-based feature extraction in image processing reaches 90-95%.
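First-order features of the kind listed above come straight from the normalized gray-level histogram. A pure-Python sketch (the 256-level range is an assumption; contrast is computed as the variance about the mean):

```python
import math

def histogram_features(pixels, levels=256):
    """Mean intensity, energy, entropy, and contrast (variance) from a
    normalized gray-level histogram of a flat pixel list."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    n = len(pixels)
    p = [c / n for c in hist]                       # normalized histogram
    mean = sum(i * pi for i, pi in enumerate(p))    # average intensity
    energy = sum(pi * pi for pi in p)               # uniformity
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
    contrast = sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
    return mean, energy, entropy, contrast

# A two-tone patch: energy 0.5, maximal entropy for two levels.
print(histogram_features([0, 0, 255, 255]))  # (127.5, 0.5, 1.0, 16256.25)
```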
Object-Based Greenhouse Classification from GeoEye-1 and WorldView-2 Stereo Imagery
Remote sensing technologies have been commonly used to perform greenhouse detection and mapping. In this research, stereo pairs acquired by the very high-resolution optical satellites GeoEye-1 (GE1) and WorldView-2 (WV2) have been utilized to carry out the land cover classification of an agricultural area through an object-based image analysis approach, paying special attention to greenhouse extraction. The main novelty of this work lies in the joint use of single-source stereo-photogrammetrically derived heights and multispectral information from both panchromatic and pan-sharpened orthoimages. The main features tested in this research can be grouped into different categories, such as basic spectral information, elevation data (normalized digital surface model; nDSM), band indexes and ratios, texture, and shape geometry. Furthermore, spectral information was based on both single orthoimages and multiangle orthoimages. The overall accuracies attained by applying nearest neighbor and support vector machine classifiers to the four multispectral bands of GE1 were very similar to those computed from WV2, for either four or eight multispectral bands. Height data, in the form of the nDSM, were the most important feature for greenhouse classification. The best overall accuracy values were close to 90%, and they were not improved by using multiangle orthoimages.
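The height feature the study found most important, the nDSM, is by construction the per-cell difference between a digital surface model (DSM) and a digital terrain model (DTM), clamped at zero. A minimal sketch with grids as nested lists (the sample values are invented for illustration):

```python
def ndsm(dsm, dtm):
    """Normalized DSM: above-ground height = surface height - terrain height,
    clamped at zero so bare ground maps to 0."""
    return [[max(s - t, 0.0) for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

surface = [[105.0, 103.5], [101.0, 100.0]]   # e.g. greenhouse roofs + ground
terrain = [[100.0, 100.5], [101.0, 100.2]]
print(ndsm(surface, terrain))  # [[5.0, 3.0], [0.0, 0.0]]
```

In an object-based pipeline, per-object statistics of this grid (mean or maximum height) then become classification features alongside the spectral ones.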
JERS-1 SAR and LANDSAT-5 TM image data fusion: An application approach for lithological mapping
Satellite image data fusion is a set of image processing procedures utilised either for image optimisation for visual photointerpretation, or for automated thematic classification with a low error rate and high accuracy. Lithological mapping using remote sensing image data relies on the spectral and textural information of the rock units of the area to be mapped; these pieces of information can be derived from Landsat optical TM and JERS-1 SAR images, respectively. Prior to extracting such information (spectral and textural) and fusing it, geometric image co-registration between the TM and the SAR, atmospheric correction of the TM, and SAR despeckling are required. In this thesis, an appropriate atmospheric model is developed and implemented utilising the dark pixel subtraction method for atmospheric correction. For SAR despeckling, an efficient new method is also developed to test whether the SAR filter used removes the textural information or not. For image optimisation for visual photointerpretation, a new method of spectral coding of the six bands of the optical TM data is developed. The new spectral coding method is used to produce an efficient colour composite with high separability between the spectral classes, similar to that obtained when all six optical TM bands are used together. This spectrally coded colour composite is used as the spectral component, which is then fused with the textural component represented by the despeckled JERS-1 SAR data using fusion tools including the colour transform and the PCT. The Grey Level Co-occurrence Matrix (GLCM) technique is used to build the textural data set from the speckle-filtered JERS-1 SAR data, yielding seven textural GLCM measures. For automated thematic mapping, using both the six TM spectral bands and the seven textural GLCM measures, a new classification method has been developed based on the Maximum Likelihood Classifier (MLC).
The method is named sequential maximum likelihood classification; it works efficiently by comparing the classified textural pixels, the classified spectral pixels, and the classified textural-spectral pixels, and provides a means of utilising the textural and spectral information for automated lithological mapping.
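The dark pixel subtraction step mentioned for atmospheric correction has a very simple core: the darkest pixel in a band is assumed to be a zero-reflectance target, so its value approximates the additive atmospheric path radiance and is subtracted band-wide. This sketches the general dark-object method, not the thesis's specific refined model:

```python
def dark_pixel_subtraction(band):
    """Subtract the band minimum (dark-object value) from every pixel,
    treating it as an estimate of atmospheric path radiance."""
    dark = min(min(row) for row in band)
    return [[v - dark for v in row] for row in band]

tm_band = [[52, 60], [75, 90]]   # invented digital numbers for illustration
print(dark_pixel_subtraction(tm_band))  # [[0, 8], [23, 38]]
```

Applied per band, this shifts each histogram so that its darkest value sits at zero, removing the roughly constant haze offset before spectral features are extracted.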