Technology assessment of advanced automation for space missions
Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency, including autonomous world model based information systems, learning and hypothesis formation, natural language and other man-machine communication, space manufacturing, teleoperators and robot systems, and computer science and technology
A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR
L1 regularization is used for finding sparse solutions to an
underdetermined linear system. As sparse signals are widely expected in remote
sensing, this type of regularization scheme and its extensions have been widely
employed in many remote sensing problems, such as image fusion, target
detection, image super-resolution, and others and have led to promising
results. However, solving such sparse reconstruction problems is
computationally expensive and has limitations in its practical use. In this
paper, we propose a novel, efficient algorithm for solving the complex-valued
L1-regularized least squares problem. Taking the high-dimensional
tomographic synthetic aperture radar (TomoSAR) as a practical example, we
carried out extensive experiments, both with simulation data and real data, to
demonstrate that the proposed approach can retain the accuracy of second order
methods while dramatically speeding up the processing by one to two orders of
magnitude.
Although we have chosen TomoSAR as the example, the proposed method can be
generally applied to any spectral estimation problem.
Comment: 11 pages, IEEE Transactions on Geoscience and Remote Sensing
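The L1-regularized least squares problem described above can be illustrated with a minimal sketch. The snippet below uses plain ISTA (iterative shrinkage-thresholding), a standard first-order method — not the authors' algorithm — on a toy complex-valued underdetermined system; all dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Complex soft-thresholding: shrink the magnitude, keep the phase.
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def ista(A, y, lam, n_iter=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y)     # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy underdetermined system (20 measurements, 50 unknowns, 3-sparse truth).
rng = np.random.default_rng(0)
A = (rng.standard_normal((20, 50)) + 1j * rng.standard_normal((20, 50))) / np.sqrt(20)
x_true = np.zeros(50, dtype=complex)
x_true[[5, 17, 33]] = [2 + 1j, -1.5, 1j]
y = A @ x_true + 0.01 * rng.standard_normal(20)
x_hat = ista(A, y, lam=0.05)
```

ISTA is a first-order method, so it illustrates exactly the speed/accuracy trade-off the paper targets: cheap iterations, but many of them compared to second-order solvers.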
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field of view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing
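The linear mixing model underlying most of the surveyed algorithms, and the abundance-estimation step, can be sketched in a few lines. This is a generic illustration with made-up endmember signatures, using nonnegativity-constrained least squares followed by renormalization (a common simplification of fully constrained least squares, not a method from the overview itself).

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember matrix E: L spectral bands x p endmembers.
L, p = 50, 3
rng = np.random.default_rng(1)
E = np.abs(rng.standard_normal((L, p))) + 0.1

# A mixed pixel under the linear mixing model: y = E a + noise,
# with abundances a >= 0 that sum to one.
a_true = np.array([0.6, 0.3, 0.1])
y = E @ a_true + 0.001 * rng.standard_normal(L)

# Nonnegativity-constrained least squares, then renormalize so the
# abundances sum to one (an approximation of full FCLS).
a_hat, _ = nnls(E, y)
a_hat /= a_hat.sum()
```

Estimating `E` itself (and the number of endmembers) is the hard part the overview surveys; given `E`, abundance inversion reduces to a constrained least-squares problem per pixel.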
Spectral Unmixing with Multiple Dictionaries
Spectral unmixing aims at recovering the spectral signatures of materials,
called endmembers, mixed in a hyperspectral or multispectral image, along with
their abundances. A typical assumption is that the image contains one pure
pixel per endmember, in which case spectral unmixing reduces to identifying
these pixels. Many fully automated methods have been proposed in recent years,
but little work has been done to allow users to select areas where pure pixels
are present manually or using a segmentation algorithm. Additionally, in a
non-blind approach, several spectral libraries may be available rather than a
single one, with a fixed number (or an upper or lower bound) of endmembers to
choose from each. In this paper, we propose a multiple-dictionary constrained
low-rank matrix approximation model that addresses these two problems. We propose
an algorithm to compute this model, dubbed M2PALS, and its performance is
discussed on both synthetic and real hyperspectral images.
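The pure-pixel assumption mentioned above can be made concrete with a classic pixel-purity-index style scoring, where pixels that are extreme along many random spectral projections are candidate pure pixels. This is a generic illustration of the assumption, not the paper's M2PALS algorithm; all sizes and the Dirichlet abundance model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
L, n, p = 30, 200, 3
E = np.abs(rng.standard_normal((L, p))) + 0.1      # hidden endmember signatures
A = rng.dirichlet(np.ones(p) * 0.3, size=n)        # abundances, rows sum to one
X = A @ E.T + 0.001 * rng.standard_normal((n, L))  # pixels as rows

# PPI-style scoring: count how often each pixel is the extreme point
# of a random 1-D projection of the data cloud.
scores = np.zeros(n, dtype=int)
for _ in range(500):
    proj = X @ rng.standard_normal(L)
    scores[np.argmax(proj)] += 1
    scores[np.argmin(proj)] += 1

candidates = np.argsort(scores)[-p:]  # most frequently extreme pixels
```

The highest-scoring pixels sit near the corners of the data simplex, i.e. they are the (near-)pure pixels that simplex-based unmixing then takes as endmember estimates.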
Super-resolving multiresolution images with band-independent geometry of multispectral pixels
A new resolution enhancement method is presented for multispectral and
multi-resolution images, such as those provided by the Sentinel-2 satellites.
Starting from the highest resolution bands, band-dependent information
(reflectance) is separated from information that is common to all bands
(geometry of scene elements). This model is then applied to unmix
low-resolution bands, preserving their reflectance, while propagating
band-independent information to preserve the sub-pixel details. A reference
implementation is provided, with an application example for super-resolving
Sentinel-2 data.
Comment: Source code with a ready-to-use script for super-resolving Sentinel-2
data is available at http://nicolas.brodu.net/recherche/superres
Non-Local Compressive Sensing Based SAR Tomography
Tomographic SAR (TomoSAR) inversion of urban areas is an inherently sparse
reconstruction problem and, hence, can be solved using compressive sensing (CS)
algorithms. This paper proposes solutions for two notorious problems in this
field: 1) TomoSAR requires a high number of data sets, which makes the
technique expensive. However, it can be shown that the number of acquisitions
and the signal-to-noise ratio (SNR) can be traded off against each other,
because it is asymptotically only the product of the number of acquisitions and
SNR that determines the reconstruction quality. We propose to increase SNR by
integrating non-local estimation into the inversion and show that a reasonable
reconstruction of buildings from only seven interferograms is feasible. 2)
CS-based inversion is computationally expensive and therefore barely suitable
for large-scale applications. We introduce a new fast and accurate algorithm
for solving the non-local L1-L2-minimization problem, central to CS-based
reconstruction algorithms. The applicability of the algorithm is demonstrated
using simulated data and TerraSAR-X high-resolution spotlight images over an
area in Munich, Germany.
Comment: 10 pages
Nonlinear unmixing of hyperspectral images: Models and algorithms
When considering the problem of unmixing hyperspectral images, most of the literature in the geoscience and image processing areas relies on the widely used linear mixing model (LMM). However, the LMM may not be valid, and other nonlinear models need to be considered, for instance, when there are multiscattering effects or intimate interactions. Consequently, over the last few years, several significant contributions have been proposed to overcome the limitations inherent in the LMM. In this article, we present an overview of recent advances in nonlinear unmixing modeling.
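One widely studied nonlinear alternative to the LMM is the bilinear family, where second-order scattering between pairs of endmembers adds product terms to the linear mixture. The sketch below simulates a generalized-bilinear-model-style forward mixture; the endmember signatures, abundances, and interaction coefficients are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
L, p = 50, 3
E = np.abs(rng.standard_normal((L, p))) + 0.1     # endmember signatures
a = np.array([0.5, 0.3, 0.2])                     # abundances, sum to one
gamma = {(0, 1): 0.4, (0, 2): 0.2, (1, 2): 0.1}   # interaction strengths in [0, 1]

# Linear part (the LMM), then pairwise bilinear scattering terms:
# y = E a + sum_{i<j} gamma_ij * a_i * a_j * (e_i ⊙ e_j)
y = E @ a
for (i, j), g in gamma.items():
    y = y + g * a[i] * a[j] * (E[:, i] * E[:, j])
```

With all `gamma` coefficients set to zero the model reduces exactly to the LMM, which is why bilinear models are a natural first step beyond it.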