
    Using pixel-based and object-based methods to classify urban hyperspectral features

    Object-based image analysis methods have been developed recently and have since become a very active research topic in the remote sensing community, mainly because researchers have begun to exploit the spatial structures within the data, whereas pixel-based methods use only its spectral content. To evaluate the applicability of object-based image analysis for land-cover information extraction from hyperspectral data, a comprehensive comparative analysis was performed. In this study, six supervised classification methods were selected from the pixel-based category: maximum likelihood (ML), Fisher linear likelihood (FLL), support vector machine (SVM), binary encoding (BE), spectral angle mapper (SAM), and spectral information divergence (SID). The classifiers were applied to features extracted from the original spectral bands in order to avoid the Hughes phenomenon and to obtain a sufficient number of training samples; three supervised and four unsupervised feature extraction methods were used. Pixel-based classification was conducted in the first step of the proposed algorithm, and the effective feature number (EFN) was then obtained. Image objects were thereafter created using the fractal net evolution approach (FNEA), the segmentation method implemented in the eCognition software, and several experiments were carried out to find the best segmentation parameters. The classification accuracy of these objects was compared with that of the pixel-based methods. The experiments used the Pavia University Campus hyperspectral dataset, collected by the ROSIS sensor over an urban area in Italy. The results reveal that, for every combination of feature extraction and classification methods, the object-based methods outperformed the pixel-based ones. Furthermore, statistical analysis of the results shows that the object-based methods improve classification accuracy by almost 8 percent on average.
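Of the six pixel-based classifiers compared above, the spectral angle mapper is the simplest to illustrate: it assigns each pixel to the class whose reference spectrum makes the smallest angle with the pixel's spectrum, so differences in overall magnitude (e.g. illumination) are ignored. A minimal NumPy sketch, where the array shapes and toy spectra are illustrative assumptions, not values from the paper:

```python
import numpy as np

def spectral_angle_mapper(pixels, references):
    """Classify each pixel spectrum by its smallest spectral angle
    to a set of reference (e.g. class-mean) spectra.

    pixels:     (n_pixels, n_bands) array of spectra
    references: (n_classes, n_bands) array of reference spectra
    returns:    (n_pixels,) array of class indices
    """
    # Normalize so the dot product equals the cosine of the angle.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(p @ r.T, -1.0, 1.0)  # (n_pixels, n_classes)
    angles = np.arccos(cos)            # smaller angle = more similar
    return angles.argmin(axis=1)

# Toy example: two reference spectra, three pixels.
refs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 1.0]])
pix = np.array([[0.9, 0.1, 0.0],    # close to class 0
                [0.1, 0.8, 0.7],    # close to class 1
                [2.0, 0.1, 0.1]])   # scaled copy of class 0
print(spectral_angle_mapper(pix, refs))  # → [0 1 0]
```

Because SAM compares only spectral direction, the third pixel, which is roughly a scaled copy of the first reference spectrum, still maps to class 0.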

    A new deep learning approach for classification of hyperspectral images: feature and decision level fusion of spectral and spatial features in multiscale CNN

    Classification is the main field of hyperspectral data processing, and to date many methods have been introduced to increase its accuracy. In recent years, various convolutional neural network models have been proposed for hyperspectral image classification. This study puts forward a multiscale convolutional neural network that uses several patches of different sizes to extract complex spatial features. Because spatial features are effective in improving the classification accuracy of hyperspectral images, the proposed framework integrates the spatial features of three methods, morphological profiles, Gabor filters, and local binary patterns, with spectral features at both the feature level and the decision level. Experiments on three hyperspectral images, Indian Pines, Pavia University, and NCALM, demonstrate the proposed method's efficiency. The final results show that its overall classification accuracy is 6% higher than that of several other recent techniques.
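The multiscale idea above amounts to extracting, for each labelled pixel, several neighbourhood patches of increasing size, which separate branches of the network would then process. A minimal NumPy sketch of that patch extraction step; the function name, patch sizes, and toy cube are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

def extract_multiscale_patches(cube, row, col, sizes=(3, 5, 7)):
    """Extract square patches of several sizes centred on one pixel of a
    hyperspectral cube shaped (height, width, bands), reflect-padding
    the borders so edge pixels get full-sized patches."""
    patches = []
    for s in sizes:
        half = s // 2
        padded = np.pad(cube, ((half, half), (half, half), (0, 0)),
                        mode="reflect")
        # After padding by `half`, the original (row, col) sits at
        # (row + half, col + half), i.e. the centre of this slice.
        patches.append(padded[row:row + s, col:col + s, :])
    return patches

cube = np.random.rand(10, 10, 4)       # toy 10x10 scene with 4 bands
patches = extract_multiscale_patches(cube, row=0, col=0)
print([p.shape for p in patches])      # → [(3, 3, 4), (5, 5, 4), (7, 7, 4)]
```

Each patch keeps the target pixel at its centre, so the branches see the same location at different spatial contexts.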