170 research outputs found

    Hyperspectral Image Unmixing Incorporating Adjacency Information

    While the spectral information contained in hyperspectral images is rich, the spatial resolution of such images is in many cases very low. Many pixel spectra are therefore mixtures of pure materials’ spectra and need to be decomposed into their constituents. This work investigates new decomposition methods that take into account spectral, spatial, and global 3D adjacency information, which allows for faster and more accurate decomposition results.
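
    Decompositions of this kind typically start from the linear mixing model, in which each pixel spectrum is a nonnegative combination of pure endmember spectra. The sketch below is a minimal illustration of that baseline, not the adjacency-aware method of this work; the endmember matrix and pixel data are synthetic placeholders, and the abundances are recovered per pixel with nonnegative least squares.

    # Minimal linear-unmixing sketch (illustrative only, not the adjacency-based
    # method above): each pixel spectrum is modeled as a nonnegative mixture of
    # known endmember spectra and decomposed with nonnegative least squares.
    import numpy as np
    from scipy.optimize import nnls

    bands, n_endmembers, n_pixels = 50, 3, 100
    endmembers = np.abs(np.random.rand(bands, n_endmembers))      # columns = pure spectra
    true_abund = np.random.dirichlet(np.ones(n_endmembers), size=n_pixels)
    pixels = true_abund @ endmembers.T + 0.01 * np.random.randn(n_pixels, bands)

    # Decompose every mixed pixel into its constituent fractions.
    abundances = np.array([nnls(endmembers, y)[0] for y in pixels])
    abundances /= abundances.sum(axis=1, keepdims=True)           # optional sum-to-one step
    print(abundances.shape)                                       # (n_pixels, n_endmembers)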

    Nonlinear unmixing of hyperspectral images using a semiparametric model and spatial regularization

    Incorporating spatial information into hyperspectral unmixing procedures has been shown to have positive effects, due to the inherent spatial-spectral duality in hyperspectral scenes. Current research works that consider spatial information are mainly focused on the linear mixing model. In this paper, we investigate a variational approach to incorporating spatial correlation into a nonlinear unmixing procedure. A nonlinear algorithm operating in reproducing kernel Hilbert spaces, associated with an ℓ1 local variation norm as the spatial regularizer, is derived. Experimental results, with both synthetic and real data, illustrate the effectiveness of the proposed scheme. Comment: 5 pages, 1 figure, submitted to ICASSP 201
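
    For concreteness, the ℓ1 local variation norm mentioned above can be read as an anisotropic, total-variation-like penalty on each abundance map: the sum of absolute differences between neighboring pixels. The sketch below evaluates such a penalty on an assumed 4-neighbor grid; it is an illustration of the regularizer only, not the paper's kernel-based nonlinear unmixing algorithm.

    # Sketch of an l1 local-variation spatial penalty on one abundance map.
    # Assumes a 4-neighbour pixel grid; illustrative only, not the paper's
    # kernel-based (RKHS) nonlinear unmixing algorithm.
    import numpy as np

    def l1_local_variation(abundance_map):
        """Sum of absolute differences between each pixel and its right/down neighbours."""
        dh = np.abs(np.diff(abundance_map, axis=1)).sum()   # horizontal differences
        dv = np.abs(np.diff(abundance_map, axis=0)).sum()   # vertical differences
        return dh + dv

    A = np.random.rand(20, 20)        # one endmember's abundance over a 20 x 20 scene
    print(l1_local_variation(A))      # small values mean spatially smooth abundances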

    Graph Construction for Hyperspectral Data Unmixing

    This chapter presents graph construction for hyperspectral data and associated unmixing methods based on graph regularization. Graphs are a ubiquitous mathematical tool for modeling relations between objects under study. In the context of hyperspectral image analysis, constructing graphs is useful for relating pixels so that they can be analyzed cooperatively rather than individually. In this chapter, we review fundamental elements of graphs and present different ways to construct graphs, in both the spatial and the spectral sense, for hyperspectral images. By incorporating a graph regularization, we then formulate a general hyperspectral unmixing problem that is important for applications such as remote sensing and environmental monitoring. The alternating direction method of multipliers (ADMM) is also presented as a generic tool for solving the formulated unmixing problems. Experiments validate the proposed scheme with both synthetic data and real remote sensing data.
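
    One common construction of such a graph, sketched below under the assumption of a spectral k-nearest-neighbor rule, links each pixel to its most spectrally similar pixels and uses the resulting graph Laplacian in the regularizer; the chapter's ADMM solver itself is not reproduced.

    # Sketch of spectral k-NN graph construction and its Laplacian for a
    # graph-regularized unmixing objective (illustrative; the ADMM solver
    # discussed above is not reproduced).
    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from scipy.sparse.csgraph import laplacian

    n_pixels, bands = 500, 50
    X = np.random.rand(n_pixels, bands)             # pixel spectra as rows

    # Connect each pixel to its k most spectrally similar pixels.
    W = kneighbors_graph(X, n_neighbors=10, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)                             # symmetrize the adjacency
    L = laplacian(W, normed=True)                   # graph Laplacian used in the regularizer

    # The regularizer tr(A^T L A) penalizes abundance differences between linked pixels.
    A = np.random.rand(n_pixels, 4)                 # toy abundances for 4 endmembers
    print(np.trace(A.T @ (L @ A)))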

    Supervised Nonlinear Unmixing of Hyperspectral Images Using a Pre-image Method

    This book is a collection of 19 articles which reflect the courses given at the Collège de France/Summer school “Reconstruction d'images − Applications astrophysiques” held in Nice and Fréjus, France, from June 18 to 22, 2012. The articles presented in this volume address emerging concepts and methods that are useful in the complex process of improving our knowledge of celestial objects, including Earth.

    Graph Convolutional Network Using Adaptive Neighborhood Laplacian Matrix for Hyperspectral Images with Application to Rice Seed Image Classification

    Graph convolutional neural network architectures combine feature extraction and convolutional layers for hyperspectral image classification. An adaptive neighborhood aggregation method based on statistical variance, integrating the spatial information along with the spectral signature of the pixels, is proposed for improving graph convolutional network classification of hyperspectral images. The spatial-spectral information is integrated into the adjacency matrix and processed by a single-layer graph convolutional network. The algorithm employs an adaptive neighborhood selection criterion conditioned on the class a pixel belongs to. Compared to fixed window-based feature extraction, this method proves effective in capturing the spectral and spatial features with variable pixel neighborhood sizes. The experimental results from the Indian Pines, Houston University, and Botswana Hyperion hyperspectral image datasets show that the proposed AN-GCN can significantly improve classification accuracy. For example, the overall accuracy for Houston University data increases from 81.71% (MiniGCN) to 97.88% (AN-GCN). Furthermore, the AN-GCN can classify hyperspectral images of rice seeds exposed to high day and night temperatures, proving its efficacy in discriminating the seeds under increased ambient temperature treatments.
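
    The single-layer propagation such networks perform can be written as softmax(D^-1/2 (A + I) D^-1/2 X W). The sketch below applies that step to a toy adjacency; it assumes a generic precomputed adjacency and untrained random weights, not the adaptive, variance-based neighborhood selection of AN-GCN.

    # Sketch of one graph-convolution layer over pixel features. Assumes a
    # generic precomputed adjacency and untrained random weights; the adaptive,
    # variance-based neighbourhood selection of AN-GCN is not reproduced here.
    import numpy as np

    def gcn_layer(A, X, W):
        """One propagation step: softmax(D^-1/2 (A + I) D^-1/2 X W)."""
        A_hat = A + np.eye(A.shape[0])                       # add self-loops
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
        logits = A_norm @ X @ W
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)              # per-pixel class probabilities

    n_pixels, bands, n_classes = 6, 50, 3
    A = (np.random.rand(n_pixels, n_pixels) > 0.5).astype(float)
    A = np.maximum(A, A.T)                                   # symmetric adjacency
    X = np.random.rand(n_pixels, bands)                      # spectral signatures
    W = np.random.randn(bands, n_classes)                    # weights (learned in practice)
    print(gcn_layer(A, X, W).shape)                          # (n_pixels, n_classes)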

    Tensor-based Hyperspectral Image Processing Methodology and its Applications in Impervious Surface and Land Cover Mapping

    The emergence of hyperspectral imaging provides a new perspective for Earth observation, in addition to previously available orthophoto and multispectral imagery. This thesis focused on both the new data and new methodology in the field of hyperspectral imaging. First, the application of the future hyperspectral satellite EnMAP in impervious surface area (ISA) mapping was studied. During the search for the appropriate ISA mapping procedure for the new data, the subpixel classification based on nonnegative matrix factorization (NMF) achieved the best success. The simulated EnMAP image shows great potential in urban ISA mapping with over 85% accuracy. Unfortunately, the NMF, which is based on linear algebra, only considers the spectral information and neglects the spatial information in the original image. The recent wide interest in applying multilinear algebra in computer vision sheds light on this problem and raised the idea of nonnegative tensor factorization (NTF). This thesis found that the NTF has more advantages over the NMF when working with medium- rather than high-spatial-resolution hyperspectral images. Furthermore, this thesis proposed to equip the NTF-based subpixel classification methods with the variations adopted from the NMF. By adopting the variations from the NMF, the urban ISA mapping results from the NTF were improved by ~2%. Lastly, the problem known as the curse of dimensionality is an obstacle in hyperspectral image applications. The majority of current dimension reduction (DR) methods are restricted to using only the spectral information, while the spatial information is neglected. To overcome this defect, two spectral-spatial methods, patch-based and tensor-patch-based, were thoroughly studied and compared in this thesis. To date, the two solutions have mainly been popular in computer vision studies, and their applications in hyperspectral DR are limited. The patch-based and tensor-patch-based variations greatly improved the quality of dimension-reduced hyperspectral images, which then improved the land cover mapping results derived from them. In addition, this thesis proposed to use an improved method to produce an important intermediate result in the patch-based and tensor-patch-based DR process, which further improved the land cover mapping results.
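
    As a point of reference for the spectral-only limitation noted above, a minimal NMF-based subpixel decomposition can be run with an off-the-shelf factorization; the sketch below uses random placeholder data and does not include the thesis's tensor (NTF) or patch-based extensions.

    # Minimal NMF subpixel-decomposition sketch using only spectral information
    # (the limitation discussed above); the thesis's NTF and patch-based
    # extensions are not shown. Data here are random placeholders.
    import numpy as np
    from sklearn.decomposition import NMF

    n_pixels, bands, n_endmembers = 1000, 100, 4
    V = np.random.rand(n_pixels, bands)                # nonnegative pixels x bands matrix

    model = NMF(n_components=n_endmembers, init='nndsvda', max_iter=500)
    abundances = model.fit_transform(V)                # pixels x endmembers
    endmembers = model.components_                     # endmembers x bands

    # Subpixel classification reads the normalized rows as material fractions.
    fractions = abundances / abundances.sum(axis=1, keepdims=True)
    print(fractions[0])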

    Automated Synthetic Scene Generation

    First-principles, physics-based models help organizations developing new remote sensing instruments anticipate sensor performance by making it possible to create synthetic imagery for a proposed sensor before it is built. One of the largest challenges in modeling realistic synthetic imagery, however, is generating the spectrally attributed, three-dimensional scenes on which the models are based in a timely and affordable fashion. Additionally, manual and semi-automated approaches to synthetic scene construction, which rely on spectral libraries, may not adequately capture the spectral variability of real-world sites, especially when the libraries consist of measurements made in other locations or in a lab. This dissertation presents a method to fully automate the generation of synthetic scenes when coincident lidar, Hyperspectral Imagery (HSI), and high-resolution imagery of a real-world site are available. The method, called the Lidar/HSI Direct (LHD) method, greatly reduces the time and manpower needed to generate a synthetic scene while also matching the modeled scene as closely as possible to a real-world site both spatially and spectrally. Furthermore, the LHD method enables the generation of synthetic scenes over sites in which ground access is not available, providing the potential for improved military mission planning and an increased ability to fuse information from multiple modalities and look angles. The LHD method quickly and accurately generates three-dimensional scenes, providing the community with a tool to expand the library of synthetic scenes and therefore expand the potential applications of physics-based synthetic imagery modeling.

    A manifold learning approach to target detection in high-resolution hyperspectral imagery

    Imagery collected from airborne platforms and satellites provides an important medium for remotely analyzing the content in a scene. In particular, the ability to detect a specific material within a scene is of high importance to both civilian and defense applications. This may include identifying targets such as vehicles, buildings, or boats. Sensors that process hyperspectral images provide the high-dimensional spectral information necessary to perform such analyses. However, for a d-dimensional hyperspectral image, it is typical for the data to inherently occupy an m-dimensional space, with m << d. In the remote sensing community, this has led to a recent increase in the use of manifold learning, which aims to characterize the embedded lower-dimensional, non-linear manifold upon which the hyperspectral data inherently lie. Classic hyperspectral data models include statistical, linear subspace, and linear mixture models, but these can place restrictive assumptions on the distribution of the data; this is particularly true when implementing traditional target detection approaches, and the limitations of these models are well documented. With manifold learning based approaches, the only assumption is that the data reside on an underlying manifold that can be discretely modeled by a graph. The research presented here focuses on the use of graph theory and manifold learning in hyperspectral imagery. Early work explored various graph-building techniques with application to the background model of the Topological Anomaly Detection (TAD) algorithm, which is a graph theory based approach to anomaly detection. This led towards a focus on target detection, and to the development of a specific graph-based model of the data and subsequent dimensionality reduction using manifold learning. An adaptive graph is built on the data, and then used to implement an adaptive version of locally linear embedding (LLE). We artificially induce a target manifold and incorporate it into the adaptive LLE transformation; the artificial target manifold helps to guide the separation of the target data from the background data in the new, lower-dimensional manifold coordinates. Then, target detection is performed in the manifold space.
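
    The standard LLE step that the adaptive variant above builds on can be run off the shelf, as sketched below; the adaptive graph construction and the artificially induced target manifold are not reproduced, and the data are random placeholders.

    # Sketch of standard locally linear embedding (LLE) on pixel spectra, the
    # baseline that the adaptive-graph variant above builds on. The adaptive
    # graph and induced target manifold are not reproduced; data are placeholders.
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    n_pixels, d, m = 2000, 200, 10            # d-dimensional spectra, target dimension m << d
    X = np.random.rand(n_pixels, d)

    lle = LocallyLinearEmbedding(n_neighbors=15, n_components=m)
    Y = lle.fit_transform(X)                  # pixels embedded in manifold coordinates
    print(Y.shape)                            # (n_pixels, m)

    # Target detection would then operate on Y (e.g. distance to a target's
    # manifold coordinates) instead of on the raw d-dimensional spectra.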