
    FSPE: Visualization of Hyperspectral Imagery Using Faithful Stochastic Proximity Embedding

    Hyperspectral image visualization reduces the many spectral bands to three display channels, but prevailing linear methods fail to capture the data's characteristics, and nonlinear embeddings are computationally demanding. Quality evaluation of embeddings is also lacking. We propose faithful stochastic proximity embedding (FSPE), a scalable, nonlinear dimensionality reduction method. FSPE accounts for the nonlinear characteristics of spectral signatures, yet avoids the costly computation of geodesic distances that other nonlinear methods often require. Furthermore, we employ a pixelwise metric that measures the quality of hyperspectral image visualization at each pixel. FSPE outperforms state-of-the-art methods by at least 12% on average and by up to 25% on this quality measure. An implementation on graphics processing units is two orders of magnitude faster than the baseline. Our method opens the path to high-fidelity, real-time analysis of hyperspectral images.
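The abstract builds on stochastic proximity embedding, which fits a low-dimensional layout by repeatedly sampling pairs of points and nudging their embedded distance toward the original distance. The sketch below is the basic SPE scheme only, not the authors' FSPE variant; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def spe_embed(X, n_dims=3, n_steps=20000, lr=1.0, rng=None):
    """Basic stochastic proximity embedding (SPE) sketch.

    Samples a random pair of points each step and moves their
    low-dimensional coordinates so the embedded distance approaches
    the high-dimensional distance.  NOT the paper's FSPE algorithm,
    just the classic scheme it builds on.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    Y = rng.normal(scale=1e-2, size=(n, n_dims))  # random initial layout
    eps = 1e-9
    for _ in range(n_steps):
        i, j = rng.choice(n, size=2, replace=False)
        d_high = np.linalg.norm(X[i] - X[j])      # target distance
        diff = Y[i] - Y[j]
        d_low = np.linalg.norm(diff) + eps        # current embedded distance
        # Move both points along their difference vector, splitting the step.
        step = lr * 0.5 * (d_high - d_low) / d_low * diff
        Y[i] += step
        Y[j] -= step
        lr *= 0.9999                              # simple learning-rate decay
    return Y
```

For visualization, the three embedding coordinates of each pixel would then be rescaled to [0, 1] and shown as RGB.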

    Visualization of hyperspectral images on parallel and distributed platform: Apache Spark

    The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is a challenge because the number of bands exceeds three, so direct display through the trivial red-green-blue (RGB) or hue-saturation-lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three and then assign each dimension to a color. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images, based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, large hyperspectral images are visualized in less time and with the same quality as the classical method.
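The PCA-to-color pipeline the abstract describes can be sketched on a single machine as follows; the paper distributes this computation with Apache Spark (e.g. via MLlib's PCA), which the sketch does not reproduce. The function name and the percentile stretch are illustrative assumptions.

```python
import numpy as np

def pca_to_rgb(cube):
    """Project a hyperspectral cube (H, W, B) onto its first three
    principal components and stretch each to [0, 1] for RGB display.

    Single-machine sketch of the PCA step; a Spark implementation
    would replace the SVD below with a distributed equivalent.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)                      # center each band
    # Principal axes from the SVD of the centered pixel matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:3].T                       # scores on the top 3 PCs
    # Percentile stretch to suppress outliers, then clip to [0, 1].
    lo = np.percentile(pcs, 2, axis=0)
    hi = np.percentile(pcs, 98, axis=0)
    rgb = np.clip((pcs - lo) / (hi - lo + 1e-12), 0.0, 1.0)
    return rgb.reshape(h, w, 3)
```

Each of the three component images then drives one display channel.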

    Perceptual Display Strategies of Hyperspectral Imagery Based on PCA and ICA

    This study investigated appropriate methodologies for displaying hyperspectral imagery based on knowledge of human color vision as applied to Hyperion and AVIRIS data. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were used to reduce the data dimensionality in order to make the data more amenable to visualization in three-dimensional color space. In addition, these two methods were chosen because of their underlying relationships to the opponent color model of human color perception. PCA- and ICA-based visualization strategies were then explored by mapping the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCrCb, and YUV. The gray-world assumption, which states that in an image with a sufficient amount of color variation the average color should be gray, was used to set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or an initial classification for a wide variety of spectral scenes.
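One of the mappings the study explores, components into a YCbCr-style opponent space with a gray-world origin, can be sketched as below. This is an illustrative assumption of how such a mapping might look (the study also uses CIELAB, HSV, and YUV); the function name is hypothetical, and the conversion uses the standard BT.601 YCbCr-to-RGB coefficients.

```python
import numpy as np

def pcs_to_rgb_ycbcr(pc1, pc2, pc3):
    """Render three PC/IC images through a YCbCr-style opponent space.

    Gray-world origin: the chroma channels are mean-centered, so a
    pixel with average component values renders as neutral gray.
    Illustrative sketch only, not the study's exact mapping.
    """
    # Luma from the first component, stretched to [0, 1].
    y = (pc1 - pc1.min()) / (np.ptp(pc1) + 1e-12)
    # Chroma from the next two components, mean-centered (gray world)
    # and scaled to roughly [-0.5, 0.5].
    cb = (pc2 - pc2.mean()) / (6 * pc2.std() + 1e-12)
    cr = (pc3 - pc3.mean()) / (6 * pc3.std() + 1e-12)
    # Standard YCbCr -> RGB conversion (BT.601 coefficients).
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

Mean-centering the chroma channels is what enforces the gray-world assumption: the scene-average color lands at the achromatic origin.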

    Visualization of High-dimensional Remote-Sensing Data Products

    This study investigated appropriate methodologies for displaying hyperspectral imagery based on knowledge of human color vision as applied to Hyperion and AVIRIS data. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) were used to reduce the data dimensionality, and these two methods were chosen also because of their underlying relationships to the opponent color model of human color perception. PCA- and ICA-based strategies were then explored by mapping the first three PCs or ICs to several opponent color spaces, including CIELAB, HSV, YCbCr, and YIQ. The gray-world assumption, which states that in an image with a sufficient amount of color variation the average color should be gray, was used to set the mapping origins. The rendered images are well color balanced and can offer a first-look capability or an initial classification for a wide variety of spectral scenes.

    wEscore: quality assessment method of multichannel image visualization with regard to angular resolution

    This work considers the problem of quality assessment for multichannel image visualization methods. One approach to such assessment, the Escore quality measure, is studied. This measure, initially proposed for evaluating decolorization methods, can be generalized to assess hyperspectral image visualization methods. We show that Escore does not account for the loss of local contrast at the supra-pixel scale. Human sensitivity to such loss depends on the observation conditions, so we propose a modified measure, wEscore, whose parameters allow the local-contrast scale to be adjusted to the angular resolution of the images. We also describe tuning the wEscore parameters to evaluate known decolorization algorithms on images from the COLOR250 and Cadik datasets under given observation conditions. When ranking the results of these algorithms against rankings based on human perception, wEscore proved more accurate than Escore. This work was supported by the Russian Science Foundation (Project No. 20-61-47089).
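To make the idea of scale-aware local-contrast loss concrete, here is a toy per-pixel score. It is NOT the published Escore or wEscore formulas, only an illustration that a rendering should preserve, at each pixel, the contrast the multichannel source shows against its neighborhood, with the neighborhood radius playing the role the paper ties to angular resolution. The function name and formula are assumptions.

```python
import numpy as np

def local_contrast_loss(src, gray, radius=1):
    """Toy per-pixel local-contrast preservation score (lower is better).

    src    : (H, W, B) multichannel image
    gray   : (H, W) rendered single-channel image
    radius : neighborhood half-width, the knob that would be tuned
             to the viewing angular resolution

    Illustrative stand-in, not the actual Escore/wEscore definitions.
    """
    h, w, _ = src.shape
    loss = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            # Contrast of this pixel against its window, in both images.
            c_src = np.linalg.norm(src[i0:i1, j0:j1] - src[i, j], axis=-1).mean()
            c_out = np.abs(gray[i0:i1, j0:j1] - gray[i, j]).mean()
            loss[i, j] = abs(c_src - c_out)
    return loss  # average over pixels for a single global score
```

A rendering that reproduces every local contrast of the source scores zero everywhere.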

    DETECTION OF SHIP TARGETS IN POLARIMETRIC SAR DATA USING 2D-PCA DATA FUSION


    High dimensional land cover inference using remotely sensed MODIS data

    Image segmentation persists as a major statistical problem, with the volume and complexity of data expanding alongside new technologies. Land cover classification, one of the most studied problems in remote sensing, provides an important example of image segmentation whose needs transcend the choice of a particular classification method. That is, the challenges associated with land cover classification pervade the analysis process, from data pre-processing to estimation of a final land cover map. Many of the same challenges also plague the task of land cover change detection. Multispectral, multitemporal data with inherent spatial relationships have hardly received adequate treatment, due to the large size of the data and the presence of missing values. In this work we propose a novel, concerted application of methods which provide a unified way to estimate model parameters, impute missing data, reduce dimensionality, classify land cover, and detect land cover changes. This comprehensive analysis adopts a Bayesian approach which incorporates prior knowledge to improve the interpretability, efficiency, and versatility of land cover classification and change detection. We explore a parsimonious, parametric model that allows for a natural application of principal components analysis to isolate important spectral characteristics while preserving temporal information. Moreover, it allows us to impute missing data and estimate parameters via expectation-maximization (EM). A significant byproduct of our framework is a suite of training data assessment tools. To classify land cover, we employ a spanning-tree approximation to a lattice Potts prior, which incorporates spatial relationships judiciously and gives more efficient access to the posterior distribution of pixel labels. We then achieve exact inference of the labels via the centroid estimator. To detect land cover changes, we develop a new EM algorithm based on the same parametric model.
We perform simulation studies to validate our models and methods, and conduct an extensive continental-scale case study using MODIS data. The results show that we successfully classify land cover and recover the spatial patterns present in large-scale data. Applying our change-point method to an area in the Amazon successfully identifies the progression of deforestation through portions of the region.
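The combination of PCA, missing-data imputation, and EM mentioned in the abstract can be illustrated with an EM-flavored low-rank imputation loop: alternately reconstruct the data from its top principal axes (E-step-like) and refill only the missing entries (M-step-like). This is a generic sketch under assumed names, not the authors' parametric Bayesian model.

```python
import numpy as np

def em_pca_impute(X, rank=2, n_iter=100):
    """Iterative low-rank (PCA) imputation of missing values.

    X : (n, p) array with np.nan marking missing entries.
    EM-flavored sketch: reconstruct from the top `rank` principal
    axes, then update only the missing entries, and repeat.
    """
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[miss] = col_mean[np.nonzero(miss)[1]]        # initial mean fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        Xc = X - mu
        # E-step-like: rank-`rank` reconstruction via SVD.
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        recon = (U[:, :rank] * s[:rank]) @ Vt[:rank] + mu
        # M-step-like: overwrite only the missing entries.
        X[miss] = recon[miss]
    return X
```

On data that truly is low-rank, the refilled entries converge to the values consistent with that structure while observed entries are left untouched.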