5 research outputs found

    A new feature extraction approach based on non linear source separation

    A new feature extraction approach is proposed in this paper to improve the classification performance on remotely sensed data. The proposed method is based on a primary sources subset (PSS) obtained by a nonlinear transform that provides a lower-dimensional space for land pattern recognition. First, the underlying sources are approximated using multilayer neural networks. Bayesian inference then updates the knowledge of the unknown sources and the model parameters from the observed data. A source dimension minimizing technique is then adopted to provide a more efficient land cover description. A support vector machine (SVM) classification scheme is developed using the extracted features. The experimental results on real multispectral imagery demonstrate that the proposed approach ensures efficient feature extraction when using several descriptors for texture identification and multiscale analysis. In a pixel-based approach, the reduced PSS space improves the overall classification accuracy by 13%, reaching 82%. Using texture and multiresolution descriptors, the overall accuracy is 75.87% for the original observations, while in the reduced source space it reaches 81.67% when jointly using the wavelet and Gabor transforms and 86.67% when using the Gabor transform alone. Thus, the source space enhances the feature extraction process and allows more land use discrimination than the multispectral observations.
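As an illustration of the final stage described above, the sketch below trains an SVM on a reduced feature space. It is a minimal stand-in, not the authors' implementation: the PSS obtained by nonlinear source separation is replaced here by a plain PCA projection of synthetic pixel descriptors, and all names and dimensions are illustrative.

    # Minimal sketch: an SVM trained on a reduced feature space rather than on
    # the raw multispectral bands. PCA is only a placeholder for the paper's
    # nonlinear source separation step; data and dimensions are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))        # 1000 pixels, 8 spectral/texture descriptors
    y = rng.integers(0, 4, size=1000)     # 4 hypothetical land-cover classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=3),              # placeholder for the reduced PSS space
        SVC(kernel="rbf", C=10.0, gamma="scale"),
    )
    clf.fit(X_tr, y_tr)
    print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))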

    Dimension Reduction of Optical Remote Sensing Images via Minimum Change Rate Deviation Method


    Bayesian Fusion of Multi-Band Images - Complementary results and supporting materials

    In this paper, a Bayesian fusion technique for remotely sensed multi-band images is presented. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical considerations is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimensional distribution, a Hamiltonian Monte Carlo step is introduced into the Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques. In particular, low spatial resolution hyperspectral and multispectral images are fused to produce a high spatial resolution hyperspectral image. Part of this work has been supported by the Hypanema ANR Project.
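For readers unfamiliar with the sampling strategy, the sketch below shows a generic Hamiltonian Monte Carlo step of the kind that can be embedded in a Gibbs sampler to draw a high-dimensional block in one move. The quadratic (Gaussian) log-posterior is only a stand-in for the fusion posterior of the paper; step size, trajectory length and dimensions are illustrative.

    # Generic Hamiltonian Monte Carlo step that could serve as one block update
    # inside a Gibbs sampler. The Gaussian log-posterior below is only a toy
    # stand-in for the fusion posterior discussed above.
    import numpy as np

    rng = np.random.default_rng(1)
    A = np.diag(np.linspace(1.0, 10.0, 50))      # toy precision matrix (50-dim)

    def log_post(x):                             # log p(x) up to a constant
        return -0.5 * x @ A @ x

    def grad_log_post(x):
        return -A @ x

    def hmc_step(x, step=0.1, n_leapfrog=20):
        p = rng.normal(size=x.shape)             # auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_log_post(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * grad_log_post(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * grad_log_post(x_new)
        # Metropolis acceptance based on the change in total energy
        log_accept = (log_post(x_new) - 0.5 * p_new @ p_new) - (log_post(x) - 0.5 * p @ p)
        return x_new if np.log(rng.uniform()) < log_accept else x

    x = np.zeros(50)
    samples = []
    for _ in range(2000):
        x = hmc_step(x)                          # in a full sampler, the other
        samples.append(x)                        # blocks would be Gibbs-updated here
    print("sample variance of first coordinate:", np.var([s[0] for s in samples]))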

    Feature extraction and classification for hyperspectral remote sensing images

    Recent advances in sensor technology have led to an increased availability of hyperspectral remote sensing data at very high spectral and spatial resolutions. Many techniques have been developed to exploit the spectral and spatial information of these data. In particular, feature extraction (FE), which aims to reduce the dimensionality of hyperspectral data while keeping as much spectral information as possible, is one of the main methods for preserving the spectral information, while morphological profile analysis is the most popular method for exploring the spatial information. Hyperspectral sensors collect information as a set of images represented by hundreds of spectral bands. While offering much richer spectral information than regular RGB and multispectral images, the high-dimensional hyperspectral data also create a challenge for traditional spectral data processing techniques. Conventional classification methods perform poorly on hyperspectral data due to the curse of dimensionality (i.e. the Hughes phenomenon: for a limited number of training samples, the classification accuracy decreases as the dimension increases). Classification techniques in pattern recognition typically assume that enough training samples are available to obtain reasonably accurate class descriptions in quantitative form. However, this assumption is frequently not satisfied for hyperspectral remote sensing data classification, because collecting ground truth for the observed data can be difficult and expensive. In contrast, techniques that make accurate estimates from only a small number of training samples can save considerable time and cost. The small sample size problem is therefore a very important issue for hyperspectral image classification. Very high-resolution remotely sensed images of urban areas have recently become available. The classification of such images is challenging because urban areas often comprise a large number of different surface materials, and consequently the heterogeneity of urban images is relatively high. Moreover, different information classes can be made up of spectrally similar surface materials. Therefore, it is important to combine spectral and spatial information to improve the classification accuracy. In particular, morphological profile analysis is one of the most popular methods for exploring the spatial information of high-resolution remote sensing data. When using morphological profiles (MPs) to explore the spatial information for the classification of hyperspectral data, three important issues should be considered. Firstly, classical morphological openings and closings degrade object boundaries and deform object shapes, while the morphological profile by reconstruction leads to some unexpected and undesirable results (e.g. over-reconstruction). Secondly, the generated MPs produce high-dimensional data, which may contain redundant information and create a new challenge for conventional classification methods, especially for classifiers that are not robust to the Hughes phenomenon. Last but not least, the linear features used to construct MPs lose too much spectral information when extracted from the original hyperspectral data.
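For concreteness, the sketch below builds a small morphological profile on a single grayscale band using openings and closings by reconstruction at increasing structuring-element sizes (scikit-image). It illustrates the classical profile with full reconstruction discussed above, not the partial reconstruction proposed later in the thesis; the test image and radii are arbitrary.

    # Sketch of a morphological profile on one grayscale band: openings and
    # closings by reconstruction with structuring elements of increasing radius.
    # This is the classical (fully reconstructed) profile, not the partial
    # reconstruction variant proposed in the thesis.
    import numpy as np
    from skimage import data
    from skimage.morphology import disk, erosion, dilation, reconstruction

    band = data.camera().astype(float)    # stand-in for one band / principal component

    def opening_by_reconstruction(img, radius):
        seed = erosion(img, disk(radius))
        return reconstruction(seed, img, method="dilation")

    def closing_by_reconstruction(img, radius):
        seed = dilation(img, disk(radius))
        return reconstruction(seed, img, method="erosion")

    radii = [2, 4, 8]
    profile = [opening_by_reconstruction(band, r) for r in radii]
    profile += [band]                     # the original band sits in the middle
    profile += [closing_by_reconstruction(band, r) for r in radii]

    mp = np.stack(profile, axis=-1)       # H x W x (2*len(radii)+1) feature cube
    print(mp.shape)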
In order to overcome these problems and improve the classification results, we develop effective feature extraction algorithms and combine morphological features for the classification of hyperspectral remote sensing data. The contributions of this thesis are as follows. As the first contribution of this thesis, a novel semi-supervised local discriminant analysis (SELD) method is proposed for feature extraction in hyperspectral remote sensing imagery, with improved performance in both ill-posed and poorly posed conditions. The proposed method combines unsupervised methods (Local Linear Feature Extraction methods, LLFE) and a supervised method (Linear Discriminant Analysis, LDA) in a novel framework without any free parameters. The underlying idea is to design an optimal projection matrix which preserves the local neighborhood information inferred from unlabeled samples, while simultaneously maximizing the class discrimination of the data inferred from the labeled samples. Our second contribution is the application of morphological profiles with partial reconstruction to explore the spatial information in hyperspectral remote sensing data from urban areas. Classical morphological openings and closings degrade object boundaries and deform object shapes. Morphological openings and closings by reconstruction can avoid this problem, but this process leads to some undesirable effects: objects expected to disappear at a certain scale remain present, which means that object size is often incorrectly represented. Morphological profiles with partial reconstruction improve upon both classical MPs and MPs with reconstruction: the shapes of objects are better preserved than with classical MPs, and the size information is preserved better than with MPs by reconstruction. A novel semi-supervised feature extraction framework for dimension reduction of the generated morphological profiles is the third contribution of this thesis. MPs built with different structuring elements and a range of increasing operator sizes produce high-dimensional data, which may contain redundant information and create a new challenge for conventional classification methods, especially for classifiers that are not robust to the Hughes phenomenon. To the best of our knowledge, the use of semi-supervised feature extraction methods on the generated morphological profiles has not been investigated before. The proposed generalized semi-supervised local discriminant analysis (GSELD) is an extension of SELD with a data-driven parameter. In our fourth contribution, we propose a fast iterative kernel principal component analysis (FIKPCA) to extract features from hyperspectral images. In many applications, linear FE methods, which depend on linear projection, lose the nonlinear properties of the original data after dimensionality reduction, while traditional nonlinear methods impose a heavy burden on storage and computation. The proposed method is a kernel version of the Candid Covariance-Free Incremental Principal Component Analysis, which estimates the eigenvectors through iteration. By avoiding eigendecomposition of the Gram matrix, our approach greatly reduces both the space and time complexity. Our last contribution constructs MPs with partial reconstruction on nonlinear features.
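As context for the FIKPCA contribution, the sketch below runs standard kernel PCA on synthetic pixel spectra with scikit-learn. Standard kernel PCA eigendecomposes the n-by-n Gram matrix, which is precisely the cost the proposed iterative method avoids, so this serves only as the baseline behaviour; the data, kernel and parameter values are assumptions.

    # Baseline kernel PCA on pixel spectra. KernelPCA eigendecomposes the n x n
    # Gram matrix, the cost that the proposed FIKPCA avoids by estimating the
    # eigenvectors iteratively; this shows only the reference behaviour.
    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    pixels = rng.normal(size=(500, 103))   # 500 pixels, 103 hypothetical spectral bands

    kpca = KernelPCA(n_components=10, kernel="rbf", gamma=1e-3)
    features = kpca.fit_transform(pixels)  # nonlinear features, shape (500, 10)
    print(features.shape)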
Traditional linear features, on which morphological profiles are usually built, lose too much spectral information. Nonlinear features are more suitable for describing complex, higher-order nonlinear distributions. In particular, kernel principal components are among the nonlinear features we used to build MPs with partial reconstruction, which led to significant improvements in classification accuracy. The experimental analysis performed with the novel techniques developed in this thesis demonstrates an improvement in accuracy across different fields of application when compared to other state-of-the-art methods.
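A hedged end-to-end sketch of this last idea, combining the two snippets above: kernel principal components are extracted from a synthetic hyperspectral cube, morphological profiles (here with full reconstruction, not the thesis's partial reconstruction) are built on them, and the stacked spectral-spatial features are classified pixel-wise. All data, sizes and classifier choices are illustrative.

    # Build morphological profiles on kernel principal components instead of on
    # linear features, then stack spectral and spatial features for pixel-wise
    # classification. Synthetic data; full (not partial) reconstruction is used.
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.ensemble import RandomForestClassifier
    from skimage.morphology import disk, erosion, dilation, reconstruction

    rng = np.random.default_rng(0)
    h, w, bands = 64, 64, 30
    cube = rng.normal(size=(h, w, bands))            # synthetic hyperspectral cube
    labels = rng.integers(0, 3, size=(h, w))         # synthetic ground truth

    # 1. nonlinear spectral features: kernel PCA on the pixel spectra
    kpcs = KernelPCA(n_components=3, kernel="rbf").fit_transform(cube.reshape(-1, bands))
    kpc_images = kpcs.reshape(h, w, 3)

    # 2. morphological profile (openings/closings by reconstruction) per component
    def profile(img, radii=(2, 4)):
        opens = [reconstruction(erosion(img, disk(r)), img, method="dilation") for r in radii]
        closes = [reconstruction(dilation(img, disk(r)), img, method="erosion") for r in radii]
        return np.stack(opens + [img] + closes, axis=-1)

    mp = np.concatenate([profile(kpc_images[..., i]) for i in range(3)], axis=-1)

    # 3. stack spectral + spatial features and classify pixel-wise
    X = np.concatenate([cube, mp], axis=-1).reshape(h * w, -1)
    y = labels.ravel()
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    print("training accuracy:", clf.score(X, y))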

    Faithful visualization and dimensionality reduction on graphics processing unit

    Information visualization is a process of transforming data, information and knowledge into a geometric representation in order to reveal otherwise unseen information. Dimensionality reduction (DR) is one of the strategies used to visualize high-dimensional data sets by projecting them onto a low-dimensional space where they can be visualized directly. The problem with DR is that the straightforward relationship between the original high-dimensional data sets and the low-dimensional space is lost, so the colours of the visualization carry no meaning. A new nonlinear DR method, called faithful stochastic proximity embedding (FSPE), is proposed in this thesis to visualize more complex data sets. The proposed method depends on the low-dimensional space rather than the high-dimensional data sets to overcome the main shortcomings of DR: it avoids false neighbour points and preserves the neighbourhood relations to the true neighbours. The visualization produced by our method displays faithful, useful and meaningful colours, in which the objects of the image can be easily distinguished. The experiments conducted indicate that FSPE is more accurate than many dimension reduction methods because it prevents, as far as possible, false neighbourhood errors from occurring in the results. In addition, we have demonstrated that FSPE plays an important role in enhancing the low-dimensional spaces produced by other DR methods. Choosing the least efficient points to update the rest of the points helps to improve the visualization. The results show that the proposed method increases the trustworthiness of the visualization by retrieving most of the local neighbourhood points that were missed during the projection process. The sequential dimensionality reduction (SDR) method is the second method proposed in this thesis. It redefines the problem of DR as a sequence of multiple DR problems, each of which reduces the dimensionality by a small amount, and it maintains and preserves the relations among neighbour points in the low-dimensional space. The results show the accuracy of the proposed SDR, which leads to a better visualization with fewer false colours compared to the direct projection of a DR method; these results are confirmed by comparing our method with 21 other methods. Although many measurement metrics exist, our proposed point-wise correlation metric is the more informative: it evaluates the efficiency of each point in the visualization to generate a grey-scale efficiency image, which gives more detail than a single summary value, and the user can recognize the locations of both the false and the true points. We compared the results of our proposed methods (FSPE and SDR) and many other dimension reduction methods when applied to four scenarios: (1) unfolding curved cylinder data sets; (2) projecting human face data sets into two dimensions; (3) classifying connected networks; and (4) visualizing remote sensing imagery data sets. The results show that our methods are able to produce good visualizations by preserving the corresponding colour distances between the visualization and the original data sets. The proposed methods are implemented on the graphics processing unit (GPU) to visualize different data sets; the benefit of a parallel implementation is to obtain the results in as short a time as possible.
The results show that the compute unified device architecture (CUDA) implementations of FSPE and SDR are faster than their sequential counterparts on the central processing unit (CPU) at carrying out the floating-point computations, especially for large data sets. The GPU is also better suited to the implementation of the metric measurement methods because they involve heavy computation. We illustrate that obtaining this massive speed-up requires a parallel structure suitable for running on a GPU.
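To make the algorithmic core concrete, the sketch below implements the basic stochastic proximity embedding update on which FSPE builds: random pairs of points are repeatedly moved so that their low-dimensional distance approaches the original high-dimensional one. This is the generic algorithm, not the faithful variant or the CUDA implementation described in the thesis; the inner pairwise updates are the floating-point work that maps naturally onto a GPU.

    # Basic stochastic proximity embedding (SPE) update: random pairs are pulled
    # toward (or pushed away from) each other until low-dimensional distances
    # match the original ones. Generic sketch, not the thesis's FSPE variant.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 10))             # high-dimensional data (300 points, 10-D)
    Y = rng.normal(size=(300, 2))              # random initial 2-D embedding

    lam, eps, n_cycles, n_updates = 1.0, 1e-9, 50, 3000
    for _ in range(n_cycles):
        for _ in range(n_updates):
            i, j = rng.integers(0, len(X), size=2)
            if i == j:
                continue
            r = np.linalg.norm(X[i] - X[j])    # original distance
            d = np.linalg.norm(Y[i] - Y[j])    # current embedded distance
            delta = lam * 0.5 * (r - d) / (d + eps) * (Y[i] - Y[j])
            Y[i] += delta                      # move the pair toward the right distance
            Y[j] -= delta
        lam *= 0.95                            # anneal the learning rate
    print("embedding shape:", Y.shape)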