5 research outputs found

    Overlap-based feature weighting: The feature extraction of Hyperspectral remote sensing imagery

    Hyperspectral sensors provide a large number of spectral bands. This massive and complex data structure presents a challenge to traditional data processing techniques. Therefore, reducing the dimensionality of hyperspectral images without losing important information is an important issue for the remote sensing community. We propose overlap-based feature weighting (OFW) for supervised feature extraction of hyperspectral data. In the OFW method, the feature vector of each pixel of a hyperspectral image is divided into segments, and the weighted mean of adjacent spectral bands in each segment is calculated as an extracted feature. The smaller the overlap between classes in a band, the greater that band's class discrimination ability; therefore, the inverse of the between-class overlap in each band (feature) is used as the weight for that band. The superiority of OFW over other supervised feature extraction methods, in terms of classification accuracy and computation time, is established on three real hyperspectral images in the small sample size situation.
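    The two steps of the abstract above can be sketched in a few lines. This is an illustrative approximation only: the abstract does not specify how overlap is measured, so the sketch uses a simple proxy (intersection over union of per-class value ranges in each band); the function names and the segment layout are assumptions, not the paper's implementation.

```python
def band_weights(class_samples):
    """Inverse-overlap weight per band (illustrative proxy).

    class_samples: one list per class, each a list of pixels,
    each pixel a list of band values.
    Overlap in a band is approximated by the intersection of the
    class value ranges divided by their union (hull).
    """
    n_bands = len(class_samples[0][0])
    weights = []
    for b in range(n_bands):
        ranges = [(min(p[b] for p in cls), max(p[b] for p in cls))
                  for cls in class_samples]
        lo_i = max(r[0] for r in ranges)   # intersection of ranges
        hi_i = min(r[1] for r in ranges)
        lo_u = min(r[0] for r in ranges)   # union (hull) of ranges
        hi_u = max(r[1] for r in ranges)
        overlap = max(0.0, hi_i - lo_i) / (hi_u - lo_u + 1e-12)
        weights.append(1.0 / (overlap + 1e-6))  # inverse of overlap
    return weights

def extract_features(pixel, weights, n_segments):
    """Weighted mean of adjacent bands within each segment."""
    seg_len = len(pixel) // n_segments
    feats = []
    for s in range(n_segments):
        seg = range(s * seg_len, (s + 1) * seg_len)
        w_sum = sum(weights[b] for b in seg)
        feats.append(sum(weights[b] * pixel[b] for b in seg) / w_sum)
    return feats
```

    A band whose class ranges do not intersect gets a very large weight, so it dominates the weighted mean of its segment, which matches the stated intuition that low-overlap bands carry more discriminative power.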

    Feature reduction of hyperspectral images: Discriminant analysis and the first principal component

    When the number of training samples is limited, feature reduction plays an important role in the classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and does not share the limitation of linear discriminant analysis (LDA) on the number of extracted features. In DA-PC1, the dominant structure of the distribution is preserved by PC1 and the class separability is increased by DA. The experimental results show the good performance of DA-PC1 compared to several state-of-the-art feature extraction methods.
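    The PC1 ingredient of DA-PC1 is standard and easy to sketch. The fragment below computes the first principal component by power iteration on the sample covariance matrix; how DA-PC1 then uses this direction to weight the scatter matrices is not detailed in the abstract, so only the PC1 step is shown, and the function name is an assumption.

```python
def pc1(samples, iters=200):
    """First principal component via power iteration (pure Python).

    samples: list of d-dimensional points (lists of floats).
    Returns a unit-length direction of maximum variance.
    """
    d = len(samples[0])
    n = len(samples)
    mean = [sum(x[j] for x in samples) / n for j in range(d)]
    # sample covariance matrix
    cov = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in samples) / (n - 1)
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        # multiply by cov and renormalize; converges to the top eigenvector
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

    In practice one would use an eigendecomposition routine from a linear algebra library; power iteration is shown here only to keep the sketch self-contained.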

    An evaluation of high-resolution land cover and land use classification accuracy by thematic, spatial, and algorithm parameters

    High resolution land cover and land use classifications have applications in many fields of study such as land use and cover change, carbon storage measurements, and environmental impact assessments. The wide range of available imagery at different spatial resolutions, potential thematic classes, and classification methods introduces the problem of understanding how each aspect affects accuracy. This study investigates how these three aspects affect the results of land cover classification. Results show that the maximum likelihood classifier was able to produce the most consistent results with the highest average accuracy (82.9%). Classifiers were able to identify a spatial resolution for each thematic resolution that achieved a distinctly higher overall accuracy. In addition, the effects of different land cover classifications as input to an object-based classification of land use at the parcel scale were evaluated. Results showed that land use classification requires higher-resolution imagery than land cover classification to obtain satisfactory results. Also, the highest accuracy land cover classification did not produce the highest accuracy for land use, where a higher number of thematic classes performs better than fewer thematic classes. The highest accuracy LC classification by MLC with 8 classes occurred at 640 cm and achieved an overall accuracy of 83.3%. The highest accuracy LU classification was produced by the 80 cm LC with 8 classes and achieved an overall accuracy of 88.0%. Aside from the produced land cover and land use classifications, this study produces a lookup table in the form of multiple graphs for future research to reference when selecting imagery and determining thematic classes and classification methods.
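    The overall-accuracy figures quoted above (82.9%, 83.3%, 88.0%) are the standard metric computed from a confusion matrix: correctly classified samples divided by all samples. A minimal sketch, with an illustrative matrix rather than the study's data:

```python
def overall_accuracy(confusion):
    """Overall accuracy from a square confusion matrix.

    confusion[i][j] counts samples of true class i assigned to class j;
    the diagonal holds the correctly classified samples.
    """
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total
```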

    Automated Remote Sensing Image Interpretation with Limited Labeled Training Data

    Automated remote sensing image interpretation has been investigated for more than a decade. In early years, most work was based on the assumption that there are sufficient labeled samples to be used for training. However, ground-truth collection is a very tedious and time-consuming task, and sometimes very expensive, especially in the field of remote sensing, which usually relies on field surveys to collect ground truth. In recent years, with the development of advanced machine learning techniques, remote sensing image interpretation with limited ground truth has caught the attention of researchers in both remote sensing and computer science. Three approaches, each focusing on a different aspect of the interpretation process, i.e., feature extraction, classification, and segmentation, are proposed to deal with the limited ground truth problem. First, feature extraction techniques, which usually serve as a pre-processing step for remote sensing image classification, are explored. Instead of only focusing on feature extraction, a joint feature extraction and classification framework is proposed based on ensemble local manifold learning. Second, classifiers in the case of limited labeled training data are investigated, and an enhanced ensemble learning method that outperforms state-of-the-art classification methods is proposed. Third, image segmentation techniques are investigated, with the aid of unlabeled samples and spatial information. A semi-supervised self-training method is proposed, which is capable of expanding the number of training samples on its own and hence improving classification performance iteratively. Experiments show that the proposed approaches outperform state-of-the-art techniques in terms of classification accuracy on benchmark remote sensing datasets.
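    The self-training idea in the last abstract can be illustrated generically: train on the labeled set, pseudo-label the unlabeled samples the current model is confident about, fold them into the training set, and repeat. The sketch below uses a nearest-centroid classifier and a simple distance-ratio confidence as stand-ins; the thesis's actual classifier, confidence measure, and use of spatial information are not specified in the abstract, so all names and thresholds here are assumptions.

```python
def centroids(X, y):
    """Per-class mean vectors for a nearest-centroid classifier."""
    return {c: [sum(x[j] for x, t in zip(X, y) if t == c) /
                sum(1 for t in y if t == c)
                for j in range(len(X[0]))]
            for c in sorted(set(y))}

def predict(cents, x):
    """Nearest-centroid label plus a distance-ratio confidence in (0.5, 1]."""
    dists = {c: sum((a - b) ** 2 for a, b in zip(m, x)) ** 0.5
             for c, m in cents.items()}
    best = min(dists, key=dists.get)
    runner_up = min(d for c, d in dists.items() if c != best)
    conf = runner_up / (dists[best] + runner_up + 1e-12)
    return best, conf

def self_train(X_lab, y_lab, X_unlab, threshold=0.7, rounds=3):
    """Iteratively absorb confident pseudo-labels into the training set."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        cents = centroids(X, y)
        keep = []
        for x in pool:
            label, conf = predict(cents, x)
            if conf >= threshold:          # confident: pseudo-label it
                X.append(x)
                y.append(label)
            else:                          # uncertain: retry next round
                keep.append(x)
        pool = keep
    return centroids(X, y)
```

    The threshold trades off training-set growth against pseudo-label noise: a low threshold expands the training set quickly but risks reinforcing early mistakes, which is the classic failure mode of self-training.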