
    Linear vs Nonlinear Extreme Learning Machine for Spectral-Spatial Classification of Hyperspectral Image

    As a new machine learning approach, the extreme learning machine (ELM) has received wide attention due to its good performance. However, when directly applied to hyperspectral image (HSI) classification, its recognition rate is too low, because ELM does not use the spatial information that is very important for HSI classification. In view of this, this paper proposes a new framework for spectral-spatial classification of HSI by combining ELM with loopy belief propagation (LBP). The original ELM is linear, and nonlinear (or kernel) ELMs are improvements of the linear ELM (LELM). However, based on extensive experiments and analysis, we found that LELM is a better choice than nonlinear ELM for spectral-spatial classification of HSI. Furthermore, we exploit the marginal probability distribution, which uses the whole of the information in the HSI, and learn this distribution using LBP. The proposed method not only maintains the fast speed of ELM but also greatly improves classification accuracy. Experimental results on the well-known HSI data sets Indian Pines and Pavia University demonstrate the good performance of the proposed method. Comment: 13 pages, 8 figures, 3 tables, article
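
    The distinction drawn above between linear and nonlinear ELMs comes down to whether an activation is applied to a random, fixed hidden layer before the output weights are solved in closed form. A minimal sketch (function names and the tanh activation are illustrative choices, not taken from the paper):

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, linear=True, seed=0):
    """Train a minimal extreme learning machine.

    The hidden weights W and biases b are random and stay fixed;
    only the output weights beta are solved, in closed form, by
    least squares. The linear and nonlinear variants differ only
    in whether an activation is applied to the hidden layer.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = X @ W + b
    if not linear:
        H = np.tanh(H)                 # nonlinear ELM applies an activation
    beta = np.linalg.pinv(H) @ Y       # closed-form output weights
    return W, b, beta, linear

def elm_predict(X, model):
    W, b, beta, linear = model
    H = X @ W + b
    if not linear:
        H = np.tanh(H)
    return H @ beta
```

    Because only beta is learned, training reduces to a single least-squares solve, which is the fast training speed the abstract refers to.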

    Feature Selection Based on Hybridization of Genetic Algorithm and Particle Swarm Optimization

    A new feature selection approach based on the integration of a genetic algorithm and particle swarm optimization is proposed. The overall accuracy of a support vector machine classifier on validation samples is used as the fitness value. The new approach is carried out on the well-known Indian Pines hyperspectral data set. Results confirm that the new approach is able to automatically select the most informative features in terms of classification accuracy within an acceptable CPU processing time, without requiring the number of desired features to be set a priori by users. Furthermore, the usefulness of the proposed method is also tested for road detection. Results confirm that the proposed method is capable of discriminating between road and background pixels and performs better than the other approaches used for comparison in terms of performance metrics. Rannís; Rannsóknarnámssjóður / The Icelandic Research Fund for Graduate Students. PostPrint
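
    The hybrid scheme described above can be sketched as a binary particle-swarm search over feature masks with GA crossover and mutation injected each generation. In this toy version a nearest-centroid classifier stands in for the SVM fitness used in the paper, and all names, rates, and population sizes are illustrative assumptions:

```python
import numpy as np

def fitness(mask, Xtr, ytr, Xte, yte):
    """Validation accuracy using only the selected features.

    A nearest-centroid classifier is a lightweight stand-in for the
    SVM of the paper; any classifier's validation accuracy can serve.
    """
    sel = mask.astype(bool)
    if not sel.any():
        return 0.0
    classes = np.unique(ytr)
    centroids = np.stack([Xtr[ytr == c][:, sel].mean(axis=0) for c in classes])
    d = ((Xte[:, sel][:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return (classes[d.argmin(axis=1)] == yte).mean()

def ga_pso_select(Xtr, ytr, Xte, yte, pop=16, gens=25, seed=0):
    """Hybrid GA/PSO over binary feature masks (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = Xtr.shape[1]
    masks = (rng.random((pop, n)) < 0.5).astype(int)
    vel = np.zeros((pop, n))
    pfit = np.array([fitness(m, Xtr, ytr, Xte, yte) for m in masks])
    pbest = masks.copy()
    gbest = pbest[pfit.argmax()].copy()
    for _ in range(gens):
        # PSO step: binary-PSO velocity update toward personal/global bests
        r1, r2 = rng.random((2, pop, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - masks) + 1.5 * r2 * (gbest - masks)
        masks = (rng.random((pop, n)) < 1 / (1 + np.exp(-vel))).astype(int)
        # GA step: one-point crossover between population halves, then mutation
        half = pop // 2
        cut = int(rng.integers(1, n))
        tmp = masks[:half, cut:].copy()
        masks[:half, cut:] = masks[half:2 * half, cut:]
        masks[half:2 * half, cut:] = tmp
        masks ^= (rng.random((pop, n)) < 0.02).astype(int)   # bit-flip mutation
        fits = np.array([fitness(m, Xtr, ytr, Xte, yte) for m in masks])
        improved = fits > pfit
        pbest[improved], pfit[improved] = masks[improved], fits[improved]
        gbest = pbest[pfit.argmax()].copy()
    return gbest.astype(bool), pfit.max()
```

    Note that, as in the paper, the number of selected features is not fixed in advance: the mask length varies freely and only the fitness decides.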

    Towards an automated monitoring of human settlements in South Africa using high resolution SPOT satellite imagery

    Urban areas in sub-Saharan Africa are growing at an unprecedented pace, and much of this growth is taking place in informal settlements. In South Africa, more than 10% of the population live in urban informal settlements. South Africa has established a National Informal Settlement Development Programme (NUSP) to respond to these challenges. This programme is designed to support the National Department of Human Settlement (NDHS) in its implementation of the Upgrading Informal Settlements Programme (UISP), with the objective of eventually upgrading all informal settlements in the country. Currently, the NDHS does not have access to an up-to-date national dataset, captured at a consistent scale from common source data, that can be used to understand the status of informal settlements in the country. This pilot study is developing a fully automated workflow for the wall-to-wall processing of SPOT-5 satellite imagery of South Africa. The workflow includes automatic image information extraction based on multiscale textural and morphological image feature extraction. Advanced image feature compression and optimization, together with innovative learning and classification techniques, allow the SPOT-5 images to be processed using the Landsat-based National Land Cover (NLC) of South Africa from the year 2000 as a low-resolution thematic reference layer. The workflow was tested on 42 SPOT scenes selected by stratified sampling. The derived building information was validated against a visually interpreted building point data set and produced an accuracy of 97 per cent. Given this positive result, it is planned to process the most recent wall-to-wall coverage, as well as the archived imagery available since 2007, in the near future. JRC.G.2 - Global security and crisis management
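
    The multiscale morphological features mentioned in the workflow are commonly built as residuals of openings and closings at increasing window sizes. A minimal numpy sketch under that assumption (window sizes and function names are illustrative, not taken from the study):

```python
import numpy as np

def dilate(img, size):
    """Sliding-window maximum: grayscale dilation with a square window."""
    H, W = img.shape
    r = size // 2
    p = np.pad(img, r, mode="edge")
    out = np.full(img.shape, -np.inf)
    for i in range(size):
        for j in range(size):
            out = np.maximum(out, p[i:i + H, j:j + W])
    return out

def erode(img, size):
    """Grayscale erosion, obtained from dilation by duality."""
    return -dilate(-img, size)

def morphological_profile(img, sizes=(3, 5, 7)):
    """Stack the image with its opening and closing residuals at several
    window sizes -- a simple multiscale morphological feature set.
    Bright structures smaller than the window light up in the opening
    residual; dark ones light up in the closing residual.
    """
    feats = [img]
    for s in sizes:
        opening = dilate(erode(img, s), s)
        closing = erode(dilate(img, s), s)
        feats += [img - opening, closing - img]
    return np.stack(feats)
```

    Per-pixel feature vectors like these, concatenated with textural measures, are what the downstream classifier in such a workflow would consume.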

    Learning Aerial Image Segmentation from Online Maps

    This study deals with semantic segmentation of high-resolution (aerial) images, where a semantic class label is assigned to each pixel via supervised classification as a basis for automatic map generation. Recently, deep convolutional neural networks (CNNs) have shown impressive performance and have quickly become the de facto standard for semantic segmentation, with the added benefit that task-specific feature design is no longer necessary. However, a major downside of deep learning methods is that they are extremely data-hungry, thus aggravating the perennial bottleneck of supervised classification: obtaining enough annotated training data. On the other hand, it has been observed that they are rather robust against noise in the training labels. This opens up the intriguing possibility of avoiding the annotation of huge amounts of training data and instead training the classifier from existing legacy data or crowd-sourced maps, which can exhibit high levels of noise. The question addressed in this paper is: can training with large-scale, publicly available labels replace a substantial part of the manual labeling effort and still achieve sufficient performance? Such data will inevitably contain a significant portion of errors, but in return virtually unlimited quantities are available for large parts of the world. We adapt a state-of-the-art CNN architecture for semantic segmentation of buildings and roads in aerial images and compare its performance when using different training data sets, ranging from manually labeled, pixel-accurate ground truth of the same city to automatic training data derived from OpenStreetMap data for distant locations. We report results indicating that satisfying performance can be obtained with significantly less manual annotation effort, by exploiting noisy large-scale training data. Comment: Published in IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
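
    The robustness to label noise that this study relies on can be illustrated in miniature: a per-pixel logistic "segmenter" with a single learned 3x3 filter, trained with pixel-wise cross-entropy on partially flipped labels, still recovers a clean segmentation. This is a deliberately tiny stand-in for the deep FCN of the paper; every name and hyperparameter below is an illustrative assumption:

```python
import numpy as np

def conv2d(img, k):
    """Same-size 3x3 cross-correlation with zero padding (single channel)."""
    H, W = img.shape
    p = np.pad(img, 1)
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + H, j:j + W]
    return out

def train_segmenter(img, labels, steps=500, lr=0.5, seed=0):
    """Per-pixel logistic segmenter with one learned 3x3 filter.

    Trained by gradient descent on pixel-wise cross-entropy; the point
    it illustrates is that this objective tolerates a fraction of
    flipped (noisy) labels, as the abstract observes for deep CNNs.
    """
    rng = np.random.default_rng(seed)
    k = 0.1 * rng.standard_normal((3, 3))
    b = 0.0
    H, W = img.shape
    p = np.pad(img, 1)
    for _ in range(steps):
        z = conv2d(img, k) + b
        prob = 1 / (1 + np.exp(-z))
        g = prob - labels                  # dCE/dz for the logistic loss
        for i in range(3):                 # gradient w.r.t. the filter taps
            for j in range(3):
                k[i, j] -= lr * (p[i:i + H, j:j + W] * g).sum() / g.size
        b -= lr * g.mean()
    return k, b

def predict(img, k, b):
    """Binary segmentation mask: positive logit means 'object'."""
    return conv2d(img, k) + b > 0
```

    Even with a tenth of the training labels flipped, the learned filter classifies most pixels according to the clean underlying structure, mirroring in miniature the noisy-OSM-label setting of the paper.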