Image Restoration for Remote Sensing: Overview and Toolbox
Remote sensing provides valuable information about objects or areas from a
distance in either active (e.g., RADAR and LiDAR) or passive (e.g.,
multispectral and hyperspectral) modes. The quality of data acquired by
remotely sensed imaging sensors (both active and passive) is often degraded by
a variety of noise types and artifacts. Image restoration, which is a vibrant
field of research in the remote sensing community, is the task of recovering
the true unknown image from the degraded observed image. Each imaging sensor
induces unique noise types and artifacts into the observed image. This fact has
led restoration techniques to develop along different paths according to
each sensor type. This review paper brings together advances in image
restoration techniques, with a particular focus on synthetic aperture radar and
hyperspectral images, the most active sub-fields of image restoration in the
remote sensing community. We therefore provide a comprehensive,
discipline-specific starting point for researchers at different levels (i.e.,
students, researchers, and senior researchers) who wish to investigate the
vibrant topic of data restoration, supplying sufficient detail and
references. Additionally, this review paper is accompanied by a toolbox that provides a
platform to encourage interested students and researchers in the field to
further explore restoration techniques and fast-forward the community. The
toolboxes are provided at https://github.com/ImageRestorationToolbox.
Comment: This paper is under review in GRS.
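As a toy illustration of the restoration task described above, the following NumPy sketch degrades a synthetic image with additive noise and recovers it with a simple averaging filter. The additive-noise model and the 3x3 filter are our own simplifications for illustration, not methods from the toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" image and a degraded observation: y = x + n (additive noise).
x = np.zeros((32, 32))
x[8:24, 8:24] = 1.0                       # a bright square on a dark background
y = x + 0.3 * rng.standard_normal(x.shape)

# Naive restoration: a 3x3 moving-average filter (a crude denoiser).
pad = np.pad(y, 1, mode="edge")
x_hat = sum(
    pad[i:i + 32, j:j + 32] for i in range(3) for j in range(3)
) / 9.0

mse_noisy = np.mean((y - x) ** 2)
mse_restored = np.mean((x_hat - x) ** 2)
print(mse_noisy, mse_restored)  # averaging trades noise for blur but lowers MSE
```

Real restoration methods replace the averaging filter with sensor-specific models of the noise and artifacts, which is exactly the expansion the review surveys.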
Robust Constrained Hyperspectral Unmixing Using Reconstructed-Image Regularization
Hyperspectral (HS) unmixing is the process of decomposing an HS image into
material-specific spectra (endmembers) and their spatial distributions
(abundance maps). Existing unmixing methods have two limitations with respect
to noise robustness. First, if the input HS image is highly noisy, even if the
balance between sparse and piecewise-smooth regularizations for abundance maps
is carefully adjusted, noise may remain in the estimated abundance maps or
undesirable artifacts may appear. Second, existing methods do not explicitly
account for the effects of stripe noise, which is common in HS measurements, in
their formulations, resulting in significant degradation of unmixing
performance when such noise is present in the input HS image. To overcome these
limitations, we propose a new robust hyperspectral unmixing method based on
constrained convex optimization. Our method employs, in addition to the two
regularizations for the abundance maps, regularizations for the HS image
reconstructed by mixing the estimated abundance maps and endmembers. This
strategy makes the unmixing process much more robust in highly noisy scenarios,
under the assumption that abundance maps that reconstruct an HS image
with a desirable spatio-spectral structure are themselves expected to have desirable
properties. Furthermore, our method is designed to accommodate a wider variety
of noise including stripe noise. To solve the formulated optimization problem,
we develop an efficient algorithm based on a preconditioned primal-dual
splitting method, which can automatically determine appropriate stepsizes based
on the problem structure. Experiments on synthetic and real HS images
demonstrate the advantages of our method over existing methods.
Comment: Submitted to IEEE Transactions on Geoscience and Remote Sensing.
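The linear mixing model underlying HS unmixing can be sketched as follows. This is a minimal illustration with a plain least-squares estimate and a crude projection onto the abundance constraints; it is not the paper's constrained convex optimization method, and all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear mixing model: each pixel spectrum is a nonnegative, sum-to-one
# combination of endmember spectra.  Y (bands x pixels) = E (bands x p) @ A.
bands, p, pixels = 50, 3, 200
E = np.abs(rng.standard_normal((bands, p)))          # endmember spectra
A_true = rng.dirichlet(np.ones(p), size=pixels).T    # abundances on the simplex
Y = E @ A_true + 0.01 * rng.standard_normal((bands, pixels))

# Unconstrained least-squares estimate, then a crude projection back onto
# the constraints (clip to nonnegative, renormalize columns to sum to one).
A_hat = np.linalg.lstsq(E, Y, rcond=None)[0]
A_hat = np.clip(A_hat, 0.0, None)
A_hat /= A_hat.sum(axis=0, keepdims=True)

print(np.mean(np.abs(A_hat - A_true)))  # small abundance estimation error
```

The paper's contribution is what this sketch omits: regularizers on the abundance maps and on the reconstructed image E @ A, solved jointly under convex constraints.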
A Constrained Convex Optimization Approach to Hyperspectral Image Restoration with Hybrid Spatio-Spectral Regularization
We propose a new constrained optimization approach to hyperspectral (HS)
image restoration. Most existing methods restore a desirable HS image by
solving an optimization problem that consists of regularization and
data-fidelity terms. These methods have to handle both types of terms
simultaneously in one objective function, so we need to carefully control
the hyperparameters that balance them. However, setting such hyperparameters
is often troublesome because their suitable values depend strongly on the
regularization terms adopted and on the noise intensity of a given
observation. Our proposed method
is formulated as a convex optimization problem, where we utilize a novel hybrid
regularization technique named Hybrid Spatio-Spectral Total Variation (HSSTV)
and incorporate data-fidelity as hard constraints. HSSTV has a strong ability
of noise and artifact removal while avoiding oversmoothing and spectral
distortion, without combining other regularizations such as low-rank
modeling-based ones. In addition, the constraint-type data-fidelity enables us
to translate the hyperparameters that balance regularization and
data-fidelity into upper bounds on the degree of data-fidelity, which can be
set in a much easier manner. We also develop an efficient algorithm based on
the alternating direction method of multipliers (ADMM) to solve the
optimization problem. Through comprehensive experiments, we illustrate the
advantages of the proposed method over various HS image restoration methods
including state-of-the-art ones.
Comment: 20 pages, 4 tables, 10 figures, submitted to MDPI Remote Sensing.
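A toy stand-in for a spatio-spectral regularizer can be sketched as follows. This simplified function sums spatial and spectral differences of an HS cube; it is only loosely inspired by HSSTV and is not the paper's formulation (the weight `w` and the unweighted sums are illustrative assumptions):

```python
import numpy as np

def spatio_spectral_tv(cube, w=0.5):
    """Simplified hybrid spatio-spectral TV of an HS cube (rows x cols x bands).

    Sums absolute spatial differences plus w times absolute spectral
    differences -- a toy stand-in for the HSSTV regularizer.
    """
    dv = np.abs(np.diff(cube, axis=0)).sum()   # vertical spatial differences
    dh = np.abs(np.diff(cube, axis=1)).sum()   # horizontal spatial differences
    ds = np.abs(np.diff(cube, axis=2)).sum()   # spectral differences
    return dv + dh + w * ds

rng = np.random.default_rng(2)
smooth = np.ones((8, 8, 10))                   # constant cube: zero TV
noisy = smooth + 0.1 * rng.standard_normal(smooth.shape)
print(spatio_spectral_tv(smooth))              # 0.0 for a constant cube
print(spatio_spectral_tv(noisy))               # positive for a noisy cube
```

Minimizing such a term pushes the estimate toward spatio-spectrally smooth cubes; the paper's method does this subject to hard data-fidelity constraints rather than a weighted sum.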
Machine Learning Approach to Retrieving Physical Variables from Remotely Sensed Data
Scientists from all over the world make use of remotely sensed data from hundreds of satellites to better understand the Earth. However, physical measurements from an instrument are sometimes missing, either because the instrument hasn't been launched yet or because the design of the instrument omitted a particular spectral band. Measurements received from the instrument may also be corrupt due to malfunctions in the detectors on the instrument. Fortunately, there are machine learning techniques to estimate the missing or corrupt data. Using these techniques, we can make use of the available data to its full potential.
We present work on four different problems where the use of machine learning techniques helps to extract more information from available data. We demonstrate how missing or corrupt spectral measurements from a sensor can be accurately interpolated from existing spectral observations. Sometimes this requires data fusion from multiple sensors at different spatial and spectral resolutions. The reconstructed measurements can then be used to develop products useful to scientists, such as cloud-top pressure, or to produce true-color imagery for visualization. Additionally, segmentation and image processing techniques can help solve classification problems important for ocean studies, such as the detection of clear sky over the ocean for a sea surface temperature product. In each case, we provide a detailed analysis of the problem and empirical evidence that these problems can be solved effectively using machine learning techniques.
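The band-interpolation idea, estimating a missing spectral band from observed ones, can be sketched with ordinary least squares on simulated pixels. All data below are synthetic, and the linear model is an assumption for illustration, not the dissertation's actual estimator:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated pixels with 5 bands; one band is treated as "missing" and is
# predicted from the remaining four (a stand-in for cross-band estimation).
n = 500
X = rng.standard_normal((n, 4))                  # observed bands
true_w = np.array([0.5, -0.2, 0.8, 0.1])
y = X @ true_w + 0.05 * rng.standard_normal(n)   # the "missing" band

# Fit on 400 pixels, evaluate on the held-out 100.
train, test = slice(0, 400), slice(400, None)
w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
pred = X[test] @ w
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print(rmse)  # prediction error is dominated by the simulated sensor noise
```

In practice the regression would be replaced by a learned model trained on co-located observations from the sensors being fused.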
Function-valued Mappings and SSIM-based Optimization in Imaging
In a few words, this thesis is concerned with two alternative approaches to imaging, namely, Function-valued Mappings (FVMs) and Structural Similarity Index Measure (SSIM)-based optimization. Briefly, an FVM is a mathematical object that assigns to each element in its domain a function that belongs to a given function space. The advantage of this representation is that the infinite dimensionality of the range of FVMs allows us to give a more accurate description of complex datasets such as hyperspectral images and diffusion magnetic resonance images, something that cannot be done with the classical representation of such datasets as vector-valued functions. For instance, a hyperspectral image can be described as an FVM that assigns to each point in a spatial domain a spectral function that belongs to the function space L2(R); that is, the space of functions whose energy is finite. Moreover, we present a Fourier transform and a new class of fractal transforms for FVMs to analyze and process hyperspectral images.
Regarding SSIM-based optimization, we introduce a general framework for solving optimization problems that involve the SSIM as a fidelity measure. This framework offers the option of carrying out SSIM-based imaging tasks that are usually addressed using classical Euclidean-based methods. In the literature, SSIM-based approaches have been proposed to address the limitations of Euclidean-based metrics as measures of visual quality. These methods show better performance than their Euclidean counterparts since the SSIM is a better model of the human visual system; however, these approaches tend to be developed for particular applications. With the general framework presented in this thesis, rather than focusing on particular imaging tasks, we introduce a set of novel algorithms capable of carrying out a wide range of SSIM-based imaging applications. Moreover, such a framework allows us to include the SSIM as a fidelity term in optimization problems in which it had not been included before.
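The SSIM fidelity measure itself can be computed in a few lines. The sketch below implements the standard single-window (global) form of the index, not the thesis's optimization framework; the constants k1 and k2 are the commonly used defaults:

```python
import numpy as np

def global_ssim(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM between two images with dynamic range L."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(4)
img = rng.random((16, 16))
noisy = np.clip(img + 0.2 * rng.standard_normal(img.shape), 0.0, 1.0)
print(global_ssim(img, img))    # identical images give SSIM = 1
print(global_ssim(img, noisy))  # distortion lowers the score below 1
```

The full SSIM averages this quantity over local windows; it is the non-convexity of this expression in x that makes SSIM-based optimization harder than Euclidean fitting.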
Scale-Wavelength Decomposition of Hyperspectral Signals - Use for Mineral Classification & Quantification
An approach for material identification and soil constituent quantification based on a generalized multi-scale derivative analysis of hyperspectral signals is presented. It employs the continuous wavelet transform to project input spectra onto a scale-wavelength space. This allows investigating the spectra at a selectable level of detail while normalizing and separating disturbances. Benefits and challenges of this decomposition for mineral classification and quantification will be shown for a mining site.
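The scale-wavelength projection can be sketched with a Ricker (Mexican-hat) wavelet and plain convolution. This is a minimal stand-in for a continuous wavelet transform; the wavelet choice, window width, and scale values are illustrative assumptions rather than the presented method's settings:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet sampled at `points` points with scale `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt(signal, scales, width=101):
    """Each output row is the signal convolved with a wavelet of one scale."""
    return np.array(
        [np.convolve(signal, ricker(width, a), mode="same") for a in scales]
    )

# Toy "spectrum": a broad sloping continuum with a narrow absorption feature.
x = np.linspace(0, 1, 300)
spectrum = 1.0 - 0.3 * x - 0.5 * np.exp(-((x - 0.5) ** 2) / 0.0005)
coeffs = cwt(spectrum, scales=[2, 4, 8, 16])
print(coeffs.shape)  # (4, 300): one row per scale across the wavelength axis
```

Narrow absorption features respond strongly at small scales while the smooth continuum dominates large scales, which is what allows the decomposition to separate disturbances from diagnostic features.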
The LANDSAT Tutorial Workbook: Basics of Satellite Remote Sensing
Most of the subject matter of a full training course in applying remote sensing is presented in a self-teaching mode in this how-to manual, which combines a review of basics, a survey of systems, and a treatment of the principles and mechanics of image analysis by computers with a laboratory approach for learning to utilize the data through practical experiences. All relevant image products are included.
Scalable Low-rank Matrix and Tensor Decomposition on Graphs
In many signal processing, machine learning, and computer vision applications, one often has to deal with high-dimensional and big datasets such as images, videos, and web content. The data can come in various forms, such as univariate or multivariate time series, matrices, or high-dimensional tensors. The goal of the data mining community is to reveal the hidden linear or non-linear structures in these datasets. Over the past couple of decades, matrix factorization, owing to its intrinsic association with dimensionality reduction, has been adopted as one of the key methods in this context. One can either use a single linear subspace to approximate the data (the standard Principal Component Analysis (PCA) approach) or a union of low-dimensional subspaces where each data class belongs to a different subspace. In many cases, however, the low-dimensional data follows some additional structure. Knowledge of such structure is beneficial, as we can use it to enhance the representativity of our models by adding structured priors. A now-standard way to represent pairwise affinity between objects is by using graphs. The introduction of graph-based priors to enhance matrix factorization models has recently brought them back to the highest attention of the data mining community. Representation of a signal on a graph is well motivated by the emerging field of signal processing on graphs, based on notions of spectral graph theory. The underlying assumption is that high-dimensional data samples lie on or close to a smooth low-dimensional manifold. Interestingly, the underlying manifold can be represented by its discrete proxy, i.e., a graph. A primary limitation of state-of-the-art low-rank approximation methods is that they do not generalize to the case of non-linear low-rank structures. Furthermore, the standard low-rank extraction methods for many applications, such as low-rank and sparse decomposition, are computationally cumbersome.
We argue that, for many machine learning and signal processing applications involving big data, an approximate low-rank recovery suffices. Thus, in this thesis, we address the above two limitations by presenting a new framework for scalable but approximate low-rank extraction which exploits the hidden structure in the data using the notion of graphs. First, we present a novel signal model, called 'Multilinear Low-Rank Tensors on Graphs (MLRTG)', which states that a tensor can be encoded as a multilinear combination of the low-frequency graph eigenvectors, where the graphs are constructed along the various modes of the tensor. Since the graph eigenvectors have the interpretation of a non-linear embedding of a dataset on a low-dimensional manifold, we propose a method called 'Graph Multilinear SVD (GMLSVD)' to recover PCA-based linear subspaces from these eigenvectors. Finally, we propose a plethora of highly scalable matrix- and tensor-based problem formulations for low-rank extraction which implicitly or explicitly make use of the GMLSVD framework. The core idea is to replace the expensive iterative SVD operations by updating the linear subspaces from the fixed non-linear ones via low-cost operations. We present applications in low-rank and sparse decomposition and in clustering of the low-rank features to evaluate all the proposed methods. Our theoretical analysis shows that the approximation error of the proposed framework depends on the spectral properties of the graph Laplacian.
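The low-frequency graph-eigenvector idea can be sketched on a small synthetic dataset: build a k-nearest-neighbour graph, take the lowest-frequency eigenvectors of its combinatorial Laplacian, and project the data onto them. This is only a toy illustration of the principle; it is not the MLRTG or GMLSVD algorithms, and all sizes and the choice of 10 eigenvectors are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Data matrix whose rows live near a low-dimensional (rank-3) structure.
n, d, r = 60, 20, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))

# k-nearest-neighbour graph on the rows and its combinatorial Laplacian.
k = 5
dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
W = np.zeros((n, n))
for i in range(n):
    for j in np.argsort(dist[i])[1:k + 1]:   # skip self (distance 0)
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

# Low-frequency eigenvectors (smallest Laplacian eigenvalues) form a smooth
# basis; projecting onto them gives a cheap, graph-structured approximation.
evals, evecs = np.linalg.eigh(L)
U = evecs[:, :10]                            # 10 lowest-frequency eigenvectors
X_approx = U @ (U.T @ X)
err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
print(err)  # the projection keeps most of the smooth structure
```

The scalability argument is visible even here: once U is fixed, every subsequent approximation is a cheap matrix product rather than a fresh SVD of the data.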