
    A relative evaluation of multi-class image classification by support vector machines

    Support vector machines (SVM) have considerable potential as classifiers of remotely sensed data. A constraint on their application in remote sensing has been their binary nature, requiring multi-class classifications to be based upon a large number of binary analyses. Here, an approach for multi-class classification of airborne sensor data by a single SVM analysis is evaluated against a series of classifiers that are widely used in remote sensing, with particular regard to the effect of training set size on classification accuracy. In addition to the SVM, the same data sets were classified using a discriminant analysis, a decision tree and a multilayer perceptron neural network. The accuracy statements of the classifications derived from the different classifiers were compared in a statistically rigorous fashion that accommodated the related nature of the samples used in the analyses. For each classification technique, accuracy was positively related to the size of the training set. In general, the most accurate classifications were derived from the SVM approach, and with the largest training set the SVM classification was significantly (p < 0.05) more accurate than those derived from the other classifiers. Although each classification was more than 90% correct, the classifiers differed in their ability to correctly label individual cases and so may be suitable candidates for an ensemble-based approach to classification.
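    As a rough illustration of the kind of comparison described above (not the authors' code), the sketch below trains a multi-class SVM alongside a decision tree, a discriminant analysis and a multilayer perceptron on progressively larger training sets and reports test accuracy; the data, class count and hyperparameters are placeholders.

```python
# Illustrative sketch: comparing a multi-class SVM against classifiers commonly
# used in remote sensing, across increasing training set sizes.
# X (pixels x bands) and y (class labels) are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                 # placeholder spectral features
y = rng.integers(0, 5, size=2000)              # placeholder class labels

X_train_full, X_test, y_train_full, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

classifiers = {
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),   # multi-class handled internally
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}

for n_train in (100, 250, 500, 1000):          # increasing training set sizes
    for name, clf in classifiers.items():
        clf.fit(X_train_full[:n_train], y_train_full[:n_train])
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"n={n_train:4d}  {name:22s}  accuracy={acc:.3f}")
```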

    An Analysis of multimodal sensor fusion for target detection in an urban environment

    This work makes a compelling case for simulation as an attractive tool in designing cutting-edge remote sensing systems to generate the sheer volume of data required for a reasonable trade study. The generalized approach presented here allows multimodal system designers to tailor target and sensor parameters for their particular scenarios of interest via synthetic image generation tools, ensuring that resources are best allocated while sensors are still in the design phase. Additionally, sensor operators can use the customizable process showcased here to optimize image collection parameters for existing sensors. In the remote sensing community, polarimetric capabilities are often seen as a tool without a widely accepted mission. This study proposes incorporating a polarimetric and spectral sensor in a multimodal architecture to improve target detection performance in an urban environment. Two novel multimodal fusion algorithms are proposed: one at the pixel level and another at the decision level. A synthetic urban scene is rendered for 355 unique combinations of illumination condition and sensor viewing geometry with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, and then validated to ensure the presence of enough background clutter. The utility of polarimetric information is shown to vary with the sun-target-sensor geometry, and the decision fusion algorithm is shown to generally outperform the pixel fusion algorithm. The results suggest that polarimetric information may be leveraged to restore the capabilities of a spectral sensor forced to image under less-than-ideal circumstances.
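    The two fusion levels mentioned above can be sketched as follows; this is a hypothetical illustration rather than the proposed algorithms, and the array shapes, weights and threshold are assumptions.

```python
# Illustrative sketch: pixel-level fusion stacks the two modalities into one
# feature vector, while decision-level fusion combines each detector's
# per-pixel score into a single detection map.
import numpy as np

def pixel_level_fusion(spectral_cube, polarimetric_cube):
    """Stack spectral and polarimetric bands so a single detector sees both."""
    return np.concatenate([spectral_cube, polarimetric_cube], axis=-1)

def decision_level_fusion(spectral_score, polarimetric_score, w=0.5, threshold=0.5):
    """Fuse two per-pixel detection scores (each in [0, 1]) with a weighted sum."""
    fused = w * spectral_score + (1.0 - w) * polarimetric_score
    return fused > threshold                   # boolean detection map

# toy example: 4x4 scene, 8 spectral bands, 3 polarimetric (Stokes-like) bands
spec = np.random.rand(4, 4, 8)
pol = np.random.rand(4, 4, 3)
stacked = pixel_level_fusion(spec, pol)        # shape (4, 4, 11)
detections = decision_level_fusion(np.random.rand(4, 4), np.random.rand(4, 4))
```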

    High resolution urban monitoring using neural network and transform algorithms

    The advent of new high spatial resolution optical satellite imagery has greatly increased our ability to monitor land cover from space. Satellite observations are carried out regularly and continuously and provide a great deal of information on land cover over large areas. High spatial resolution imagery makes it possible to overcome the “mixed-pixel” problem inherent in more moderate resolution satellite sensors. At the same time, high-resolution images present a new challenge over other satellite systems, since a relatively large amount of data must be analyzed, processed, and classified in order to characterize land cover features and to produce classification maps. However, in spite of the great potential of remote sensing as a source of information on land cover and the long history of research devoted to the extraction of land cover information from remotely sensed imagery, many problems have been encountered, and the accuracy of land cover maps derived from remotely sensed imagery has often been viewed as too low for operational users. This study focuses on high resolution urban monitoring using Neural Network (NN) analyses for land cover classification and change detection, and Fast Fourier Transform (FFT) evaluations of wavenumber spectra to characterize the spatial scales of land cover features. The contributions of the present work include: classification and change detection for urban areas using NN algorithms and multi-temporal very high resolution multi-spectral images (QuickBird, Digital Globe Co.); development and implementation of neural networks able to classify a variety of multi-spectral images of cities arbitrarily located in the world; use of different wavenumber spectra produced by two-dimensional FFTs to understand the origin of significant features in the images of different urban environments subject to the subsequent classification; and optimization of the neural net topology to classify urban environments, to produce thematic maps, and to analyze urbanization processes. This work can be considered a first step in demonstrating how NN and FFT algorithms can contribute to the development of Image Information Mining (IIM) in Earth Observation.
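    As an illustration of the FFT part of this workflow (not the study's implementation), the sketch below computes a radially averaged two-dimensional wavenumber spectrum of a single-band image patch, a common way to characterize the dominant spatial scales of land-cover features; the patch itself and the number of radial bins are placeholders.

```python
# Illustrative sketch: radially averaged 2-D FFT wavenumber spectrum of an
# image patch, used to summarize the spatial scales present in the scene.
import numpy as np

def wavenumber_spectrum(image, n_bins=50):
    """Return radial wavenumbers and the azimuthally averaged power spectrum."""
    ny, nx = image.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(ny))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    kyy, kxx = np.meshgrid(ky, kx, indexing="ij")
    k = np.hypot(kxx, kyy)                     # radial wavenumber per pixel
    bins = np.linspace(0.0, k.max(), n_bins)
    idx = np.digitize(k.ravel(), bins)
    spectrum = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                         for i in range(1, len(bins))])
    return bins[1:], spectrum

patch = np.random.rand(256, 256)               # placeholder single-band patch
k, spec = wavenumber_spectrum(patch)
```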