
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
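    Most of the surveyed methods build on the linear mixing model, in which each observed pixel spectrum y equals M a + n for an endmember signature matrix M, nonnegative abundances a summing to one, and noise n. The following is a minimal, illustrative Python sketch (not code from the paper) of fully constrained least-squares unmixing, using the classic trick of appending a weighted row of ones so a nonnegative least-squares solver also enforces the sum-to-one constraint; the function name, the delta weight, and all data are illustrative assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def fcls_unmix(Y, M, delta=10.0):
            # Y: (bands, pixels) observed spectra; M: (bands, endmembers) signatures.
            # Append delta * row-of-ones so nnls also penalizes sum(a) != 1.
            bands, p = M.shape
            M_aug = np.vstack([M, delta * np.ones((1, p))])
            A = np.zeros((p, Y.shape[1]))
            for j in range(Y.shape[1]):
                y_aug = np.append(Y[:, j], delta)   # scaled sum-to-one target
                A[:, j], _ = nnls(M_aug, y_aug)
            return A

        rng = np.random.default_rng(0)
        M = np.abs(rng.normal(size=(50, 3)))            # 3 synthetic endmember spectra
        A_true = rng.dirichlet(np.ones(3), size=100).T  # abundances on the simplex
        Y = M @ A_true + 0.01 * rng.normal(size=(50, 100))
        print(np.abs(fcls_unmix(Y, M) - A_true).mean()) # small mean abundance error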

    Non-negative mixtures

    This is the author's accepted pre-print of the article, first published as: M. D. Plumbley, A. Cichocki and R. Bro, "Non-negative mixtures," in P. Comon and C. Jutten (Eds.), Handbook of Blind Source Separation: Independent Component Analysis and Applications, Chapter 13, pp. 515-547, Academic Press, Feb 2010. ISBN 978-0-12-374726-6. DOI: 10.1016/B978-0-12-374726-6.00018-7
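    The chapter treats source separation under nonnegativity constraints, of which non-negative matrix factorization (NMF) is the canonical example: factor a nonnegative data matrix V as W H with W, H >= 0. As a point of reference only, here is a minimal sketch of the standard Lee-Seung multiplicative updates for the Euclidean cost (a well-known algorithm in this area, not code taken from the chapter):

        import numpy as np

        def nmf(V, rank, iters=200, eps=1e-9):
            # Lee-Seung multiplicative updates for V ~ W @ H, with V >= 0.
            rng = np.random.default_rng(0)
            n, m = V.shape
            W = np.abs(rng.normal(size=(n, rank)))
            H = np.abs(rng.normal(size=(rank, m)))
            for _ in range(iters):
                H *= (W.T @ V) / (W.T @ W @ H + eps)  # ratio updates keep H >= 0
                W *= (V @ H.T) / (W @ H @ H.T + eps)  # likewise for W
            return W, H

        V = np.abs(np.random.default_rng(1).normal(size=(40, 60)))
        W, H = nmf(V, rank=4)
        print(np.linalg.norm(V - W @ H))  # reconstruction error shrinks with iters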

    Sparsity and adaptivity for the blind separation of partially correlated sources

    Blind source separation (BSS) is a very popular technique for analyzing multichannel data. In this context, the data are modeled as a linear combination of sources to be retrieved. For that purpose, standard BSS methods all rely on some discrimination principle, whether it is statistical independence or morphological diversity, to distinguish between the sources. However, real-world data reveal that such assumptions are rarely valid in practice: the signals of interest are more likely partially correlated, which generally hampers the performance of standard BSS methods. In this article, we introduce a novel sparsity-enforcing BSS method, coined Adaptive Morphological Component Analysis (AMCA), which is designed to retrieve sparse and partially correlated sources. More precisely, it exploits an adaptive re-weighting scheme to favor or penalize samples based on their level of correlation. Extensive numerical experiments show that the proposed method is robust to partial correlation of the sources where standard BSS techniques fail. The AMCA algorithm is evaluated in the field of astrophysics for the separation of physical components from microwave data. Comment: submitted to IEEE Transactions on Signal Processing
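    To make the re-weighting idea concrete, here is a schematic Python sketch (a simplified stand-in, not the authors' exact AMCA algorithm): sources are estimated by soft-thresholded least squares, and samples in which several sources are jointly active -- the partially correlated ones -- are down-weighted when re-estimating the mixing matrix. The weighting rule and all parameter values are illustrative assumptions.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def amca_like(X, n_src, iters=100, lam=0.05, p=0.5):
            # X: (channels, samples). Returns mixing matrix A and sparse sources S.
            rng = np.random.default_rng(0)
            m, t = X.shape
            A = rng.normal(size=(m, n_src))
            A /= np.linalg.norm(A, axis=0)
            for _ in range(iters):
                # Sparse source update: pseudo-inverse then soft-thresholding.
                S = soft(np.linalg.pinv(A) @ X, lam)
                # Down-weight samples active in several sources at once, since
                # they carry the partial correlations that bias the estimate of A.
                w = 1.0 / (np.sum(np.abs(S), axis=0) + 1e-6) ** p
                Xw, Sw = X * w, S * w
                # Weighted least-squares update of the mixing matrix.
                A = Xw @ Sw.T @ np.linalg.pinv(Sw @ Sw.T)
                A /= np.linalg.norm(A, axis=0) + 1e-12
            return A, S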

    Localization and the interface between quantum mechanics, quantum field theory and quantum gravity I (The two antagonistic localizations and their asymptotic compatibility)

    It is shown that there are significant conceptual differences between QM and QFT which make it difficult to view the latter as just a relativistic extension of the principles of QM. At the root of this is a fundamental distinction between Born localization in QM (which in the relativistic context changes its name to Newton-Wigner localization) and modular localization, the localization underlying QFT once one separates it from its standard presentation in terms of field coordinates. The first comes with a probability notion and projection operators, whereas the latter describes causal propagation in QFT and leads to thermal aspects of locally reduced finite-energy states. Born-Newton-Wigner localization in QFT is only applicable asymptotically, and the covariant correlation between asymptotic in and out localization projectors is the basis of the existence of an invariant scattering matrix. In this first part of a two-part essay, modular localization (the intrinsic content of field localization) and its philosophical consequences take center stage. Important physical consequences of vacuum polarization will be the main topic of part II. Both parts together form a rather comprehensive presentation of known consequences of the two antagonistic localization concepts, including those of their misunderstandings in string theory. Comment: 63 pages; corrections, reformulations, references added
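    For orientation only (standard Tomita-Takesaki notation, not an excerpt from the essay): modular localization is built from the modular objects of the local algebra $\mathcal{A}(O)$ of a spacetime region $O$ acting on the vacuum $\Omega$, namely the antilinear involution and its polar decomposition

        S\, a\,\Omega = a^{*}\Omega \quad (a \in \mathcal{A}(O)), \qquad S = J\,\Delta^{1/2} .

    For a wedge region $W$, the Bisognano-Wichmann theorem identifies $\Delta_W^{it}$ with the unitaries implementing the $W$-preserving Lorentz boosts (up to sign conventions), so the vacuum restricted to $\mathcal{A}(W)$ satisfies the KMS condition -- the thermal aspect of locally reduced states that the abstract mentions.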

    Factorized Geometrical Autofocus for Synthetic Aperture Radar Processing

    Synthetic Aperture Radar (SAR) imagery is a very useful resource for the civilian remote sensing community and for the military. This, however, presumes that images are focused. There are several possible sources of defocusing effects. For airborne SAR, motion measurement errors are the main cause. A defocused image may be compensated by way of autofocus, estimating and correcting erroneous phase components. Standard autofocus strategies are implemented as a separate stage after image formation (stand-alone autofocus), neglecting the geometrical aspect. In addition, phase errors are usually assumed to be space invariant and confined to one dimension. The call for relaxed requirements on inertial measurement systems contradicts these criteria, as it may introduce space variant phase errors in two dimensions, i.e. residual space variant Range Cell Migration (RCM). This has motivated the development of a new autofocus approach. The technique, termed the Factorized Geometrical Autofocus (FGA) algorithm, is in principle a Fast Factorized Back-Projection (FFBP) realization with a number of adjustable (geometry) parameters for each factorization step. By altering the aperture in the time domain, it is possible to correct an arbitrary, inaccurate geometry; this in turn indicates that the FGA algorithm has the capacity to compensate for residual space variant RCM. In the appended papers the performance of the algorithm is demonstrated for geometrically constrained autofocus problems, with results presented for simulated and real (Coherent All RAdio BAnd System II (CARABAS II)) Ultra WideBand (UWB) data sets. Resolution and Peak-to-SideLobe Ratio (PSLR) values for point or point-like targets in FGA and reference images agree within a few percent and a few tenths of a dB. As an example: the resolution of a trihedral reflector in a reference image and in an FGA image was measured to approximately 3.36 m/3.44 m in azimuth and 2.38 m/2.40 m in slant range; the PSLR was measured to about 6.8 dB/6.6 dB. The advantage of a geometrical autofocus approach is clarified further by comparing the FGA algorithm to a standard strategy, in this case the Phase Gradient Algorithm (PGA).
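    To convey the geometrical idea, here is a deliberately simplified Python sketch: plain global back-projection in which the assumed platform track is an adjustable parameter, and candidate geometries are scored with a focus metric. The actual FGA instead adjusts geometry parameters per factorization step inside FFBP; all names here are hypothetical, and carrier-phase compensation is omitted for brevity.

        import numpy as np

        C = 299792458.0  # speed of light [m/s]

        def backproject(pulses, fast_time, track, grid_x, grid_y):
            # pulses: (n_pulses, n_samples) range-compressed echoes (real-valued here).
            # track: (n_pulses, 3) assumed antenna positions -- the adjustable geometry.
            gx, gy = np.meshgrid(grid_x, grid_y)
            img = np.zeros(gx.shape)
            for p in range(track.shape[0]):
                # Two-way delay from the hypothesized track position to every pixel.
                r = 2.0 * np.sqrt((gx - track[p, 0])**2 +
                                  (gy - track[p, 1])**2 + track[p, 2]**2)
                img += np.interp(r / C, fast_time, pulses[p])
            return img

        def sharpness(img):
            # Focus metric: normalized squared intensity (higher means sharper).
            i = img**2
            return np.sum(i**2) / np.sum(i)**2

        # Geometrical autofocus in spirit: perturb the track parameters and keep
        # whichever hypothesized geometry maximizes the focus metric.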