
    A Study of Feature Extraction Using Divergence Analysis of Texture Features

    An empirical study of texture analysis for feature extraction and classification of high-spatial-resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix, which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment, correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee, using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
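    The SGTD matrix and the texture measures derived from it can be sketched as follows. This is a minimal NumPy illustration of the co-occurrence idea, not the paper's implementation; the single horizontal offset and the toy image are assumptions for demonstration.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Spatial gray tone dependence (co-occurrence) matrix for one offset,
    made symmetric and normalized into a probability matrix of gray tone pairs."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    m = m + m.T                      # count pairs in both directions
    return m / m.sum()

def haralick_features(p):
    """A few of the Haralick (1973) measures from a normalized SGTD matrix p."""
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                          # angular second moment (energy)
    contrast = np.sum(((i - j) ** 2) * p)         # inertia / contrast
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return {"asm": asm, "contrast": contrast,
            "homogeneity": homogeneity, "entropy": entropy}

# Toy 4-gray-tone image: four uniform 2x2 blocks.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = haralick_features(glcm(img, levels=4))
```

    In the paper these measures are computed inside a moving window, so each pixel receives a texture feature vector rather than the whole image receiving one.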

    Topology, homogeneity and scale factors for object detection: application of eCognition software for urban mapping using multispectral satellite image

    The research scope of this paper is to apply the spatial object-based image analysis (OBIA) method to a panchromatic multispectral image covering a study area of Brussels for urban mapping. The aim is to map different land cover types and, more specifically, built-up areas from the very high resolution (VHR) satellite image using the OBIA approach. The case study covers urban landscapes in the eastern areas of the city of Brussels, Belgium. Technically, this research was performed in the eCognition raster processing software, which demonstrated excellent results of image segmentation and classification. The tools embedded in eCognition made it possible to perform image segmentation and object classification in a semi-automated regime, which is useful for city planning, spatial analysis and urban growth analysis. The combination of the OBIA method with the technical tools of eCognition demonstrated the applicability of this method for urban mapping in densely populated areas, e.g. in megacities and capital cities. The methodology included multiresolution segmentation and classification of the created objects.
    Comment: 6 pages, 12 figures, INSO2015, ed. by A. Girgvliani et al., Akaki Tsereteli State University, Kutaisi (Imereti), Georgia
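    The OBIA pipeline (segment first, then classify the resulting objects by rules) can be sketched in miniature. eCognition is proprietary, so the code below is a stand-in: segmentation is reduced to grouping 4-connected pixels of equal value, and the brightness threshold for "built-up" is an illustrative assumption, not a rule from the paper.

```python
import numpy as np

def segment(img):
    """Minimal stand-in for multiresolution segmentation: label 4-connected
    regions of identical (quantized) pixel value as image objects."""
    labels = -np.ones(img.shape, dtype=int)
    nxt = 0
    h, w = img.shape
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue
            stack = [(sy, sx)]          # flood fill from an unlabeled seed
            labels[sy, sx] = nxt
            while stack:
                y, x = stack.pop()
                for y2, x2 in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if (0 <= y2 < h and 0 <= x2 < w
                            and labels[y2, x2] < 0
                            and img[y2, x2] == img[y, x]):
                        labels[y2, x2] = nxt
                        stack.append((y2, x2))
            nxt += 1
    return labels, nxt

def classify_objects(img, labels, n, built_up_threshold=128):
    """Rule-based object classification by mean brightness, echoing the kind
    of rule set built interactively in eCognition (threshold is hypothetical)."""
    return {k: ("built-up" if img[labels == k].mean() >= built_up_threshold
                else "other")
            for k in range(n)}

# A bright block (one object) next to a dark background (a second object).
img = np.array([[200, 200, 10],
                [200, 200, 10],
                [10, 10, 10]])
labels, n = segment(img)
classes = classify_objects(img, labels, n)
```

    Real multiresolution segmentation also weighs shape and compactness against spectral homogeneity across scales; the rule set here stands in for that much richer machinery.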

    Textural-Contextual Labeling and Metadata Generation for Remote Sensing Applications

    Despite extensive research and the advent of several new information technologies in the last three decades, machine labeling of ground categories using remotely sensed data has not become a routine process. A considerable amount of human intervention is needed to achieve an acceptable level of labeling accuracy. A number of fundamental reasons may explain why machine labeling has not become automatic. In addition, there may be shortcomings in the methodology for labeling ground categories. The spatial information of a pixel, whether textural or contextual, relates a pixel to its surroundings. This information should be utilized to improve the performance of machine labeling of ground categories. Landsat-4 Thematic Mapper (TM) data taken in July 1982 over an area in the vicinity of Washington, D.C. are used in this study. On-line texture extraction by neural networks may not be the most efficient way to incorporate textural information into the labeling process. Instead, texture features are pre-computed from co-occurrence matrices and then combined with a pixel's spectral and contextual information as the input to a neural network. The improvement in labeling accuracy with spatial information included is significant. The prospect of automatic generation of metadata consisting of ground categories, textural and contextual information is discussed.
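    The input-assembly step described above can be sketched as follows: the network input for a pixel concatenates its spectral bands, its pre-computed texture features, and some contextual summary of its neighborhood. The 3x3 neighborhood mean used here as "context" is an illustrative assumption; the paper does not specify this exact form.

```python
import numpy as np

def pixel_feature_vector(bands, texture, y, x, use_context=True):
    """Build a neural-network input vector for pixel (y, x):
    spectral bands + pre-computed texture features + contextual information
    (here: per-band mean of the 3x3 neighborhood, an assumed stand-in)."""
    spectral = bands[:, y, x]
    tex = texture[:, y, x]
    if use_context:
        ys = slice(max(y - 1, 0), y + 2)
        xs = slice(max(x - 1, 0), x + 2)
        contextual = bands[:, ys, xs].mean(axis=(1, 2))
    else:
        contextual = np.array([])
    return np.concatenate([spectral, tex, contextual])

# 7 TM bands, 4 pre-computed co-occurrence texture features, a 5x5 scene.
bands = np.arange(7 * 5 * 5, dtype=float).reshape(7, 5, 5)
texture = np.ones((4, 5, 5))
vec = pixel_feature_vector(bands, texture, 2, 2)
```

    Pre-computing the texture channel once per scene, as the abstract argues, avoids making the network re-derive texture from raw windows during training.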

    Applicability of Artificial Neural Network for Automatic Crop Type Classification on UAV-Based Images

    Recent advances in optical remote sensing, especially the development of machine learning models, have made it possible to automatically classify different crop types based on their unique spectral characteristics. In this article, a simple feed-forward artificial neural network (ANN) was implemented for the automatic classification of various crop types. A DJI Mavic Air drone was used to collect about 549 images of a mixed-crop farmland belonging to the Federal University of Technology Minna, Nigeria. The images were annotated and the ANN algorithm was implemented using custom-designed Python scripts with tools such as NumPy, Labelbox, and Segmentation Mask for the classification. The algorithm was designed to automatically classify maize, rice, soya beans, groundnut, yam and a non-crop feature into different land spectral classes. The model training performance, using 70% of the dataset, shows that the loss curve flattened with minimal over-fitting, indicating that the model was improving as it trained. Finally, the accuracy of the automatic crop-type classification was evaluated with the aid of the recorded loss function and confusion matrix. The results show that the implemented ANN gave an overall training classification accuracy of 87.7% from the model and an overall accuracy of 0.9393 as computed from the confusion matrix, which attests to the robustness of ANNs when implemented on high-resolution image data for automatic classification of crop types in a mixed farmland. The overall accuracy, including the user's accuracy, shows that only a few images were incorrectly classified, demonstrating that the errors of omission and commission were minimal.
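    The accuracy measures mentioned above (overall accuracy, user's accuracy, errors of omission and commission) all fall out of the confusion matrix. A minimal sketch, assuming the common convention that rows are predicted classes and columns are reference classes (the paper does not state its convention), with a hypothetical 3-class matrix:

```python
import numpy as np

def overall_accuracy(cm):
    """Correctly classified samples (the diagonal) over all samples."""
    return np.trace(cm) / cm.sum()

def users_accuracy(cm):
    """Per-class user's accuracy: diagonal over row sums (rows = predicted),
    i.e. 1 minus the error of commission."""
    return np.diag(cm) / cm.sum(axis=1)

def producers_accuracy(cm):
    """Per-class producer's accuracy: diagonal over column sums
    (columns = reference), i.e. 1 minus the error of omission."""
    return np.diag(cm) / cm.sum(axis=0)

# Hypothetical 3-class confusion matrix, not taken from the paper.
cm = np.array([[50, 2, 1],
               [3, 45, 2],
               [0, 1, 46]])
oa = overall_accuracy(cm)
```

    With more classes (the paper uses six, including the non-crop class) the formulas are unchanged; only the matrix grows.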

    Slum mapping : a comparison of single class learning and expert system object-oriented classification for mapping slum settlements in Addis Ababa city, Ethiopia

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Science in Geospatial Technologies.
    Updated spatial information on the dynamics of slums can help measure and evaluate the progress of urban upgrading projects and policies. Earlier studies have shown that remote sensing techniques, with the help of very-high-resolution imagery, can play a significant role in detecting slums and providing timely spatial information. The main objective of this thesis is to develop a reliable object-oriented slum identification technique that enables the provision of timely spatial information about slum settlements in Addis Ababa city. It compares the one-class support vector machine algorithm with an expert-defined classification rule set for the discrimination of slums, using GeoEye-1 imagery. Two different approaches, called manual and automatic fine-tuning, were deployed to determine the best parameter values for the one-class support vector machine algorithm. The manual fine-tuning of the parameters is done through extensive manual trial. The automatic tuning is done using a cross-validation grid search with overall accuracy as the performance metric. Two regions of study were defined with different landscape compositions, providing different classification scenarios in which to compare the classification approaches. After image segmentation, twenty predictive variables were computed to characterize the objects in both study areas. An image analyst collected one hundred sample objects of a slum to be used as training data for the single-class learner. In parallel, an image analyst defined a hierarchical rule set to discriminate the class of interest. Results in both study areas indicate that the one-class support vector machine with manual tuning yields higher overall accuracy (97.7% in subset 1, and 92% in subset 2) while requiring much less application effort and computing time than the expert system.
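    The automatic fine-tuning step (exhaustive grid search scored by cross-validated overall accuracy) can be sketched generically. The scoring function is the caller's responsibility; the one below is a toy stand-in, and the nu/gamma grid is only illustrative of one-class SVM parameters, not taken from the thesis.

```python
from itertools import product

def grid_search(param_grid, score):
    """Try every combination in param_grid and keep the best-scoring one.
    `score` is assumed to run cross-validation for one parameter setting
    and return overall accuracy."""
    best, best_score = None, float("-inf")
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        s = score(params)
        if s > best_score:
            best, best_score = params, s
    return best, best_score

# Illustrative grid over the nu/gamma parameters of a one-class SVM;
# the toy scorer peaks at nu=0.1, gamma=0.1.
grid = {"nu": [0.05, 0.1, 0.2], "gamma": [0.01, 0.1, 1.0]}
best, acc = grid_search(
    grid, lambda p: 1.0 - abs(p["nu"] - 0.1) - abs(p["gamma"] - 0.1))
```

    Exhaustive search is affordable here because the grid is small; manual tuning covers the same space by trial, which is why the thesis finds it more labor-intensive.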

    Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery

    Fine-grained object recognition, which aims to identify the type of an object among a large number of subcategories, is an emerging application as increasing resolution exposes new details in image data. Traditional fully supervised algorithms fail to handle this problem, where there is low between-class variance and high within-class variance for the classes of interest, with small sample sizes. We study an even more extreme scenario, named zero-shot learning (ZSL), in which no training example exists for some of the classes. ZSL aims to build a recognition model for new unseen categories by relating them to seen classes that were previously learned. We establish this relation by learning a compatibility function between image features extracted via a convolutional neural network and auxiliary information that describes the semantics of the classes of interest, using training samples from the seen classes. Then, we show how knowledge transfer can be performed for the unseen classes by maximizing this function during inference. We introduce a new data set that contains 40 different types of street trees in 1-ft spatial resolution aerial data, and evaluate the performance of this model with manually annotated attributes, a natural language model, and a scientific taxonomy as auxiliary information. The experiments show that the proposed model achieves 14.3% recognition accuracy for the classes with no training examples, which is significantly better than the random guess accuracy of 6.3% for 16 test classes and better than three other ZSL algorithms.
    Comment: G. Sumbul, R. G. Cinbis, S. Aksoy, "Fine-Grained Object Recognition and Zero-Shot Learning in Remote Sensing Imagery", IEEE Transactions on Geoscience and Remote Sensing (TGRS), in press
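    The inference step described above can be sketched with a bilinear compatibility function, a common choice in ZSL (the paper's exact parameterization may differ): score an image feature x against each unseen class embedding a_c as F(x, c) = x^T W a_c and take the argmax. The toy W, features and embeddings below are assumptions.

```python
import numpy as np

def compatibility(x, W, a):
    """Bilinear compatibility F(x, c) = x^T W a_c between a CNN image
    feature x and the auxiliary embedding a_c of class c."""
    return x @ W @ a

def predict_unseen(x, W, unseen_embeddings):
    """Zero-shot inference: pick the unseen class whose auxiliary
    embedding maximizes the learned compatibility function."""
    scores = [compatibility(x, W, a) for a in unseen_embeddings]
    return int(np.argmax(scores))

# Toy setup: 2 unseen classes with one-hot embeddings, identity W.
unseen = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
W = np.eye(2)                     # in practice, learned on seen classes
x = np.array([0.2, 0.9])          # CNN feature of a test image
pred = predict_unseen(x, W, unseen)
```

    During training, W is fit so that each seen-class image scores highest against its own class embedding; at test time only the embeddings of the unseen classes are needed, which is what makes the transfer possible.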

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors, or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor for fusing multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
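    The wavelet side of the third case study can be illustrated with a minimal sketch: decompose each image with a one-level 2-D Haar transform, average the approximations, keep the detail coefficient with the larger magnitude (so salient structure from either image survives), and invert. This max-abs rule is a common simple fusion heuristic standing in for the paper's multiscale Kalman filter, which it does not attempt to reproduce.

```python
import numpy as np

def haar2(img):
    """One level of a 2-D Haar transform: approximation + 3 detail bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4,      # approximation
            (a - b + c - d) / 4,      # horizontal detail
            (a + b - c - d) / 4,      # vertical detail
            (a - b - c + d) / 4)      # diagonal detail

def ihaar2(approx, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((2 * approx.shape[0], 2 * approx.shape[1]))
    out[0::2, 0::2] = approx + h + v + d
    out[0::2, 1::2] = approx - h + v - d
    out[1::2, 0::2] = approx + h - v - d
    out[1::2, 1::2] = approx - h - v + d
    return out

def fuse(img1, img2):
    """Multiscale fusion sketch: average approximations, take the
    larger-magnitude detail coefficient from either image."""
    a1, *d1 = haar2(img1)
    a2, *d2 = haar2(img2)
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return ihaar2((a1 + a2) / 2, *details)

rng = np.random.default_rng(0)
img1 = rng.random((8, 8))
img2 = rng.random((8, 8))
fused = fuse(img1, img2)
```

    A multiscale Kalman filter replaces the hard max-abs rule with a statistically weighted combination across resolutions, but the decompose-combine-reconstruct skeleton is the same.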

    Study of USGS/NASA land use classification system

    It is known from several previous investigations that many categories of land use can be mapped via computer processing of Earth Resources Technology Satellite data. The results of one such experiment using the USGS/NASA land-use classification system are presented. Douglas County, Georgia, was chosen as the test site for this project, primarily because of its recent rapid growth and future growth potential. Results of the investigation indicate an overall land-use mapping accuracy of 67%, with higher accuracies in rural areas and lower accuracies in urban areas. It is estimated, however, that 95% of the State of Georgia could be mapped by these techniques with an accuracy of 80% to 90%.

    Basic research planning in mathematical pattern recognition and image analysis

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization, computer architectures and parallel processing, and the applicability of "expert systems" to interactive analysis.