Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to the low
spatial resolution of HSCs, microscopic material mixing, and multiple
scattering, spectra measured by HSCs are mixtures of the spectra of materials
in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to
be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in the IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing.
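The linear mixing model underlying most of the surveyed algorithms writes each pixel spectrum as y = Ma + n, with endmember signatures in the columns of M and abundances in a. A minimal numpy sketch on synthetic data (real unmixing enforces non-negativity and sum-to-one constraints, e.g. FCLS; plain least squares is shown here only to illustrate the inversion):

```python
import numpy as np

# Linear mixing model: y = M @ a + noise, where the columns of M are
# endmember signatures and a holds the per-pixel abundances.
rng = np.random.default_rng(0)
bands, endmembers = 50, 3
M = rng.random((bands, endmembers))          # synthetic endmember signatures

a_true = np.array([0.6, 0.3, 0.1])           # abundances sum to one
y = M @ a_true + 1e-4 * rng.standard_normal(bands)

# Unconstrained least-squares inversion; constrained solvers (FCLS)
# additionally enforce a >= 0 and sum(a) == 1.
a_est, *_ = np.linalg.lstsq(M, y, rcond=None)
print(np.round(a_est, 2))
```

With low noise and well-conditioned endmembers the unconstrained estimate already lands close to the true abundances; the constraints matter most under noise and endmember variability.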
Hyperspectral Unmixing on Multicore DSPs: Trading Off Performance for Energy
Wider coverage in observation missions will tighten onboard power
restrictions while, at the same time, posing higher demands in terms of
processing time, thus calling for the exploration of novel high-performance,
low-power processing architectures. In this paper, we analyze the acceleration
of spectral unmixing, a key technique to process hyperspectral
images, on multicore architectures. To meet onboard processing
restrictions, we employ a low-power Digital Signal Processor
(DSP), comparing processing time and energy consumption with
those of a representative set of commodity architectures. We
demonstrate that DSPs offer a fair balance between ease of
programming, performance, and energy consumption, resulting
in a highly appealing platform to meet the restrictions of current
missions if onboard processing is required.
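"Trading off performance for energy" reduces to energy-to-solution, E = P × t: a slower, low-power device can still win on energy. A toy comparison (every time and power figure below is an illustrative assumption, not a measurement from the paper):

```python
# Energy-to-solution for hypothetical platforms running the same
# unmixing workload. All figures are illustrative assumptions.
platforms = {
    "multicore CPU": {"time_s": 2.0, "power_w": 95.0},
    "GPU":           {"time_s": 0.4, "power_w": 250.0},
    "multicore DSP": {"time_s": 3.5, "power_w": 10.0},
}

for name, p in platforms.items():
    energy_j = p["time_s"] * p["power_w"]   # E = P * t
    print(f"{name}: {energy_j:.0f} J")
```

In this toy setting the DSP is the slowest platform yet consumes the least energy per processed image, which is the balance the paper argues makes DSPs appealing onboard.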
Multi-source imagery fusion using deep learning in a cloud computing platform
Given the high availability of data collected by different remote sensing
instruments, the data fusion of multi-spectral and hyperspectral images (HSI)
is an important topic in remote sensing. In particular, super-resolution, as a
data fusion application exploiting the spatial and spectral domains, is widely
investigated because the fused images improve classification and
object-tracking accuracy. On the other hand, the huge amount of data obtained
by remote sensing instruments represents a key concern in terms of data
storage, management, and pre-processing. This paper proposes a Big Data cloud
platform using Hadoop and Spark to store, manage, and process remote sensing data.
Also, a study of the chunk size parameter is presented to suggest an
appropriate value for downloading imagery data from Hadoop into a Spark
application, based on the format of our data. We also developed an alternative
approach, based on a Long Short-Term Memory network trained with different
patch sizes, for super-resolution imaging. This approach fuses hyperspectral
and multispectral images. As a result, we obtain images with high spatial and
high spectral resolution. The experimental results show that, for a chunk size
of 64k, an average of 3.5 s was required to download data from Hadoop into a
Spark application. The proposed model for super-resolution yields structural
similarity indices of 0.98 and 0.907 for the datasets used.
Efficient multitemporal change detection techniques for hyperspectral images on GPU
Hyperspectral images contain hundreds of reflectance values for each pixel.
Detecting regions of change in multiple hyperspectral images of the same
scene taken at different times is of widespread interest for a large number of
applications. For remote sensing, in particular, a very common application is
land-cover analysis. The high dimensionality of the hyperspectral images
makes the development of computationally efficient processing schemes
critical. This thesis focuses on the development of change detection
approaches at object level, based on supervised direct multidate
classification, for hyperspectral datasets. The proposed approaches improve
the accuracy of current state-of-the-art algorithms, and their projection onto
Graphics Processing Units (GPUs) allows their execution in real-time
scenarios.
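The thesis operates at object level with supervised multidate classification; as a minimal pixel-level illustration of the underlying idea, the sketch below flags change via the spectral angle between two synthetic acquisitions (the data, angle criterion, and threshold are illustrative assumptions, not the thesis' method):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, bands = 8, 8, 30

# Two synthetic acquisitions of the same scene; one block of pixels changes.
t1 = rng.random((h, w, bands))
t2 = t1 + 0.01 * rng.standard_normal((h, w, bands))   # unchanged + sensor noise
t2[2:5, 2:5] = rng.random((3, 3, bands))              # simulated land-cover change

# Per-pixel spectral angle between the two dates; large angle = likely change.
dot = (t1 * t2).sum(axis=2)
norm = np.linalg.norm(t1, axis=2) * np.linalg.norm(t2, axis=2)
angle = np.arccos(np.clip(dot / norm, -1.0, 1.0))

change_map = angle > 0.2                              # threshold in radians
print(change_map.sum())
```

Each pixel's angle is independent of the others, which is why this family of per-pixel computations maps so naturally onto GPU threads.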
A Novel Methodology for Calculating Large Numbers of Symmetrical Matrices on a Graphics Processing Unit: Towards Efficient, Real-Time Hyperspectral Image Processing
Hyperspectral imagery (HSI) is often processed to identify targets of interest. Many of the quantitative analysis techniques developed for this purpose mathematically manipulate the data to derive information about the target of interest based on local spectral covariance matrices. The calculation of a local spectral covariance matrix for every pixel in a given hyperspectral data scene is so computationally intensive that real-time processing with these algorithms is not feasible with today’s general purpose processing solutions. Specialized solutions are cost prohibitive, inflexible, inaccessible, or not feasible for on-board applications.
Advances in graphics processing unit (GPU) capabilities and programmability offer an opportunity for general purpose computing with access to hundreds of processing cores in a system that is affordable and accessible. The GPU also offers flexibility, accessibility and feasibility that other specialized solutions do not offer. The architecture for the NVIDIA GPU used in this research is significantly different from the architecture of other parallel computing solutions. With such a substantial change in architecture it follows that the paradigm for programming graphics hardware is significantly different from traditional serial and parallel software development paradigms.
In this research a methodology for mapping an HSI target detection algorithm to the NVIDIA GPU hardware and Compute Unified Device Architecture (CUDA) Application Programming Interface (API) is developed. The RX algorithm is chosen as a representative stochastic HSI algorithm that requires the calculation of a spectral covariance matrix. The developed methodology is designed to calculate a local covariance matrix for every pixel in the input HSI data scene.
A characterization of the limitations imposed by the chosen GPU is given, and a path forward toward optimization of a GPU-based method for real-time HSI data processing is defined.
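The RX detector scores each pixel by its Mahalanobis distance to the background. The dissertation computes a local covariance per pixel; the numpy sketch below shows the global variant on synthetic data, i.e. the core covariance/inversion computation that the GPU kernels accelerate:

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, bands = 16, 16, 20

# Synthetic background plus one spectrally anomalous pixel.
cube = rng.standard_normal((h, w, bands))
cube[8, 8] += 5.0

# Global RX: Mahalanobis distance of each pixel spectrum to the scene
# mean under the scene covariance.
X = cube.reshape(-1, bands)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d = X - mu
rx = np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

print(np.unravel_index(rx.argmax(), rx.shape))   # locates the anomaly at (8, 8)
```

The local variant repeats the covariance estimate and inversion inside a window around every pixel, which is exactly the workload that makes real-time RX infeasible on general-purpose processors.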
Commodity Computing Clusters at Goddard Space Flight Center
The purpose of commodity cluster computing is to utilize large numbers of readily available computing components for parallel computing, obtaining the greatest amount of useful computation for the least cost. The cost of a computational resource is as key to computational science and data processing at GSFC as it is at most other places, the difference being that the need at GSFC far exceeds any expectation of meeting it. Goddard scientists therefore need as much computing capacity as the available funds can provide. This is exemplified in the following brief history of low-cost high-performance computing at GSFC.
Inference in supervised spectral classifiers for on-board hyperspectral imaging: An overview
Machine learning techniques are widely used for pixel-wise classification of hyperspectral images. These methods can achieve high accuracy, but most of them are computationally intensive. This poses a problem for their implementation in low-power and embedded systems intended for on-board processing, in which energy consumption and model size are as important as accuracy. With a focus on embedded and on-board systems (in which only the inference step is performed, after an off-line training process), in this paper we provide a comprehensive overview of the inference properties of the most relevant techniques for hyperspectral image classification. For this purpose, we compare the size of the trained models and the operations required during the inference step (which are directly related to the hardware and energy requirements). Our goal is to search for appropriate trade-offs between on-board implementation constraints (such as model size and energy consumption) and classification accuracy.
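The trade-off studied in the overview, model size versus inference operations, can be made concrete by counting parameters and multiply-accumulates. The layer sizes below describe a hypothetical MLP spectral classifier, not any model from the paper:

```python
# Hypothetical MLP spectral classifier: 200 input bands -> 64 hidden -> 16 classes.
layers = [(200, 64), (64, 16)]

params = sum(n_in * n_out + n_out for n_in, n_out in layers)  # weights + biases
macs = sum(n_in * n_out for n_in, n_out in layers)            # per-pixel multiply-accumulates

print(params)        # → 13904 parameters
print(macs)          # → 13824 MACs per classified pixel
print(params * 4)    # → 55616 bytes at 32-bit precision
```

Both counts scale with the product of adjacent layer widths, so shrinking hidden layers (or quantizing below 32 bits) cuts memory footprint and energy per pixel together, which is the kind of trade-off the overview quantifies per classifier family.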