8 research outputs found

    Nonnegative tensor CP decomposition of hyperspectral data

    No full text
    New hyperspectral missions will collect huge amounts of hyperspectral data. Moreover, it is now possible to acquire time series and multiangular hyperspectral images. Processing and analyzing these big data collections will require common hyperspectral techniques to be adapted or reformulated. Tensor decomposition, a.k.a. multiway analysis, is a technique to decompose multiway arrays, that is, hypermatrices with more than two dimensions (ways). Hyperspectral time series and multiangular acquisitions can be represented as a 3-way tensor. Here, we apply Canonical Polyadic (CP) tensor decomposition techniques to the blind analysis of hyperspectral big data. To do so, we use a novel compression-based nonnegative CP decomposition. We show that the proposed methodology can be interpreted as multilinear blind spectral unmixing, a higher-order extension of the widely known spectral unmixing. In the proposed approach, the big hyperspectral tensor is decomposed into three sets of factors, which can be interpreted as spectral signatures, their spatial distribution, and temporal/angular changes. We provide experimental validation using a case study of the snow coverage of the French Alps during the snow season.
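As a hedged sketch of the technique this abstract names, the following pure-NumPy code fits a nonnegative CP decomposition of a 3-way tensor with multiplicative updates. The compression step described in the paper is omitted, and all function names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product of (I, R) and (J, R) -> (I*J, R)."""
    I, R = U.shape
    J = V.shape[0]
    return np.einsum('ir,jr->ijr', U, V).reshape(I * J, R)

def nonneg_cp(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Rank-R nonnegative CP decomposition via multiplicative updates."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((dim, rank)) for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            U, V = [factors[m] for m in range(3) if m != mode]
            M = khatri_rao(U, V)          # row ordering matches unfold()
            Xn = unfold(X, mode)
            F = factors[mode]
            factors[mode] = F * (Xn @ M) / (F @ (M.T @ M) + eps)
    return factors

# Synthetic 3-way "hyperspectral cube" with exact rank-2 structure.
rng = np.random.default_rng(1)
A_t, B_t, C_t = (rng.random((d, 2)) for d in (5, 6, 7))
X = np.einsum('ir,jr,kr->ijk', A_t, B_t, C_t)
A, B, C = nonneg_cp(X, rank=2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The three recovered factor matrices correspond to the three sets of factors the abstract mentions (spectral, spatial, temporal/angular), up to the usual CP scaling and permutation ambiguities.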

    Variability of the endmembers in spectral unmixing: recent advances

    No full text
    Endmember variability has been identified as one of the main limitations of the usual Linear Mixing Model, conventionally used to perform spectral unmixing of hyperspectral data. The topic is currently receiving considerable attention from the community, and many new algorithms have recently been developed to model this variability and take it into account. In this paper, we review state-of-the-art methods dealing with this problem and classify them into three categories: algorithms based on endmember bundles, those based on computational models, and those based on parametric physics-based models. We discuss the advantages and drawbacks of each category and list some open problems and current challenges.
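The Linear Mixing Model this review builds on, and the bias that endmember variability induces, can be illustrated in a few lines. The matrix `E`, the abundances, and the 10% perturbation level below are invented for illustration, and plain least squares stands in for the constrained solvers used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear Mixing Model: each pixel y = E @ a, with E the (bands x endmembers)
# signature matrix and a the abundances (nonnegative, summing to one).
n_bands, n_end = 50, 3
E = rng.random((n_bands, n_end))     # hypothetical endmember spectra
a_true = np.array([0.6, 0.3, 0.1])   # abundances sum to one
y = E @ a_true                       # noiseless mixed pixel

# Unconstrained least squares recovers a_true exactly in this noiseless case;
# practical unmixing adds nonnegativity and sum-to-one constraints.
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)

# Endmember variability: if the pixel was generated with perturbed signatures
# but inverted with the nominal E, the abundance estimate is biased.
E_pix = E * (1.0 + 0.1 * rng.standard_normal(E.shape))
a_bias, *_ = np.linalg.lstsq(E, E_pix @ a_true, rcond=None)
```

The bias on `a_bias` is exactly the kind of estimation error the reviewed variability-aware algorithms are designed to reduce.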

    Tensor-based Hyperspectral Image Processing Methodology and its Applications in Impervious Surface and Land Cover Mapping

    Get PDF
    The emergence of hyperspectral imaging provides a new perspective for Earth observation, in addition to the previously available orthophoto and multispectral imagery. This thesis focused on both new data and new methodology in the field of hyperspectral imaging. First, the application of the future hyperspectral satellite EnMAP to impervious surface area (ISA) mapping was studied. In the search for an appropriate ISA mapping procedure for the new data, subpixel classification based on nonnegative matrix factorization (NMF) was the most successful. The simulated EnMAP image shows great potential for urban ISA mapping, with over 85% accuracy. Unfortunately, NMF, being based on linear algebra, considers only the spectral information and neglects the spatial information in the original image. Recent wide interest in applying multilinear algebra in computer vision sheds light on this problem and gave rise to the idea of nonnegative tensor factorization (NTF). This thesis found that NTF has advantages over NMF when working with medium- rather than high-spatial-resolution hyperspectral images. Furthermore, this thesis proposed to equip NTF-based subpixel classification methods with variations adopted from NMF, which improved the urban ISA mapping results from NTF by about 2%. Lastly, the problem known as the curse of dimensionality is an obstacle in hyperspectral image applications. The majority of current dimension reduction (DR) methods are restricted to using only the spectral information, while the spatial information is neglected. To overcome this defect, two spectral-spatial methods, patch-based and tensor-patch-based, were thoroughly studied and compared in this thesis. To date, the popularity of these two solutions remains confined to computer vision studies, and their applications in hyperspectral DR are limited. The patch-based and tensor-patch-based variations greatly improved the quality of dimension-reduced hyperspectral images, which in turn improved the land cover mapping results derived from them. In addition, this thesis proposed an improved method to produce an important intermediate result in the patch-based and tensor-patch-based DR process, which further improved the land cover mapping results.

    Hyperspectral Remote Sensing Data Analysis and Future Challenges

    Full text link

    Modeling spatial and temporal variabilities in hyperspectral image unmixing

    Get PDF
    Acquired in hundreds of contiguous spectral bands, hyperspectral (HS) images have received increasing interest due to the significant spectral information they convey about the materials present in a given scene. However, the limited spatial resolution of hyperspectral sensors implies that the observations are mixtures of multiple signatures corresponding to distinct materials. Hyperspectral unmixing is aimed at identifying the reference spectral signatures composing the data -- referred to as endmembers -- and their relative proportion in each pixel according to a predefined mixture model. In this context, a given material is commonly assumed to be represented by a single spectral signature. This assumption reveals a first limitation, since endmembers may vary locally within a single image, or from one image to another, due to varying acquisition conditions, such as declivity and possibly complex interactions between the incident light and the observed materials. Unless properly accounted for, spectral variability can have a significant impact on the shape and the amplitude of the acquired signatures, thus inducing possibly significant estimation errors during the unmixing process. A second limitation results from the significant size of HS data, which may preclude the use of batch estimation procedures commonly used in the literature, i.e., techniques exploiting all the available data at once. Such computational considerations notably become prominent when characterizing endmember variability in multi-temporal HS (MTHS) images, i.e., sequences of HS images acquired over the same area at different time instants. The main objective of this thesis consists in introducing new models and unmixing procedures to account for spatial and temporal endmember variability. Endmember variability is addressed by considering an explicit variability model reminiscent of the total least squares problem, later extended to account for time-varying signatures.
The variability is first estimated using an unsupervised deterministic optimization procedure based on the Alternating Direction Method of Multipliers (ADMM). Given the sensitivity of this approach to abrupt spectral variations, a robust model formulated within a Bayesian framework is introduced. This formulation enables smooth spectral variations to be described in terms of spectral variability, and abrupt changes in terms of outliers. Finally, the computational restrictions induced by the size of the data are tackled by an online estimation algorithm. This work further investigates an asynchronous distributed estimation procedure to estimate the parameters of the proposed models.
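For illustration, a simpler projected-gradient scheme (standing in for the ADMM procedure of the thesis) can estimate abundances under the nonnegativity and sum-to-one constraints of the mixture model; all names and shapes below are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def unmix_pgd(E, y, n_iter=2000):
    """min_a ||y - E a||^2 subject to a in the probability simplex."""
    a = np.full(E.shape[1], 1.0 / E.shape[1])
    step = 1.0 / np.linalg.norm(E.T @ E, 2)   # 1/L step for the quadratic
    for _ in range(n_iter):
        a = project_simplex(a - step * (E.T @ (E @ a - y)))
    return a

rng = np.random.default_rng(3)
E = rng.random((50, 3))                  # hypothetical endmember spectra
a_true = np.array([0.5, 0.3, 0.2])
a_hat = unmix_pgd(E, E @ a_true)         # noiseless pixel, exact recovery
```

The thesis' ADMM and online variants solve the same kind of constrained problem at scale, additionally estimating the variability terms.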

    Decomposability of Tensors

    Get PDF
    Tensor decomposition is a relevant topic, both for theoretical and applied mathematics, due to its interdisciplinary nature, which ranges from multilinear algebra and algebraic geometry to numerical analysis, algebraic statistics, quantum physics, signal processing, artificial intelligence, etc. The study of decompositions starts from the idea that knowledge of the elementary components of a tensor is fundamental to implementing procedures able to understand and efficiently handle the information that a tensor encodes. Recent advances were obtained through a systematic application of geometric methods: secant varieties, symmetries of special decompositions, and an analysis of the geometry of finite sets. Thanks to new applications of theoretical results, criteria for understanding when a given decomposition is minimal or unique have been introduced or significantly improved. New types of decompositions, whose elementary blocks can be chosen in a range of different possible models (e.g., Chow decompositions or mixed decompositions), are now systematically studied and produce deeper insights into this topic. The aim of this Special Issue is to collect papers that illustrate some directions in which recent research is moving, as well as to provide a broad overview of several new approaches to the problem of tensor decomposition.
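The notion of decomposability can be made concrete numerically: a sum of R outer products (rank-one tensors) has CP rank at most R, which bounds the matrix rank of every matricization. A small NumPy check with generic random vectors, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-1 tensor is an outer product a (x) b (x) c; a sum of R such terms
# has CP rank at most R, so every matricization has matrix rank at most R.
a1, b1, c1 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
a2, b2, c2 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = np.einsum('i,j,k->ijk', a1, b1, c1) + np.einsum('i,j,k->ijk', a2, b2, c2)

mode0_rank = np.linalg.matrix_rank(T.reshape(4, -1))  # generically exactly 2
```

Matricization ranks only bound the CP rank from below; deciding the exact rank of a general tensor is the hard problem the geometric methods above address.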

    Determining ground-level composition and concentration of particulate matter across regional areas using the Himawari-8 satellite

    Get PDF
    Speciated ground-level aerosol concentrations are required to understand and mitigate health impacts from dust storms, wildfires and other aerosol emissions. Globally, surface monitoring is limited due to cost and infrastructure demands. While remote sensing can help estimate respirable (i.e. ground level) concentrations, current observations are restricted by inadequate spatiotemporal resolution, uncertainty in aerosol type, particle size, and vertical profile. One key issue with current remote sensing datasets is that they are derived from reflectances observed by polar orbiting imagers, which means that aerosol is only derived during the daytime, and only once or twice per day. Sub-hourly, infrared (IR), geostationary data, such as the ten-minute data from Himawari-8, are required to monitor these events to ensure that sporadic dust events can be continually observed and quantified. Newer quantification methods using geostationary data have focussed on detecting the presence, or absence, of a dust event. However, limited attention has been paid to the determination of composition, and particle size, using IR wavelengths exclusively. More appropriate IR methods are required to quantify and classify aerosol composition in order to improve the understanding of source impacts. The primary research objectives were investigated through a series of scientific papers centred on aspects deemed critical to successfully determining ground-level concentrations. A literature review of surface particulate monitoring of dust events using geostationary satellite remote sensing was undertaken to understand the theory and limitations in the current methodology. The review identified (amongst other findings) the reliance on visible wavelengths and the lack of temporal resolution in polar-orbiting satellite data. 
As a result, a dust storm was investigated to determine how rapidly the storm passed and what temporal data resolution is required to monitor these and similar events. Various IR dust indices were investigated to determine which are optimal for detecting spectral change. These indices were then used to qualify and quantify dust events, and the methodology was validated against three severe air quality events: a dust storm; smoke from prescribed burns; and an ozone smog incident. The study identified that continuous geostationary temporal resolution is critical in the determination of concentration. The Himawari-8 spatial resolution of 2 km is slightly coarse, and further spatial aggregation or cloud masking would be detrimental to determining concentrations. Five dual-band BTD combinations, using all IR wavelengths, maximise the identification of compositional differences, atmospheric stability, and cloud cover, and this improves the estimated accuracy. Preliminary validation suggests that atmospheric stability, cloud height, relative humidity, PM2.5, PM10, NO, NO2, and O3 appear to produce plausible plumes, but aerosol speciation (soil, sea spray, fires, vehicles, and secondary sulfates) and SO2 require further investigation. The research described in this thesis details the processes adopted for the development and implementation of an integrated approach to using geostationary remote sensing data to quantify population exposure (who), qualify the concentration and composition (what), assess the temporal (when) and spatial (where) concentration distributions, and determine the source (why) of the aerosols contributing to the resulting ground-level concentration.
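A dual-band brightness temperature difference (BTD) of the kind discussed above is computed directly from two IR bands. The band pairing (roughly the 11 and 12 micron window channels) and the -0.5 K threshold below are illustrative values, not the calibrated combinations from the thesis:

```python
import numpy as np

# Hypothetical brightness temperatures (K) for two IR window bands on a
# tiny 2x2 scene; mineral dust tends to drive BTD(11-12 um) negative,
# while clear sky or water cloud stays positive.
bt_11 = np.array([[285.0, 284.2], [280.1, 290.3]])
bt_12 = np.array([[286.1, 283.8], [281.5, 289.9]])

btd = bt_11 - bt_12          # dual-band brightness temperature difference
dust_mask = btd < -0.5       # illustrative dust-detection threshold
```

Combining five such dual-band differences, as the study proposes, amounts to stacking several `btd` arrays into a feature vector per pixel before classification.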

    Multilinear spectral unmixing of hyperspectral multiangle images

    No full text
    Spectral unmixing is one of the most important and studied topics in hyperspectral image analysis. By means of spectral unmixing, it is possible to decompose a hyperspectral image into its spectral components, the so-called endmembers, and their respective fractional spatial distributions, the so-called abundance maps. New hyperspectral missions will allow hyperspectral images to be acquired in new ways, for instance, as temporal series or multi-angular acquisitions. Working with these huge incoming databases of multi-way hyperspectral images will raise new challenges for the hyperspectral community. Here, we propose the use of compression-based nonnegative tensor canonical polyadic (CP) decompositions to analyze this kind of dataset. Furthermore, we show that the nonnegative CP decomposition can be understood as a multilinear spectral unmixing technique. We evaluate the proposed approach on synthetic Mars datasets built upon multi-angular in-lab hyperspectral acquisitions.
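The multilinear unmixing interpretation can be written out explicitly: with illustrative factor names (not the paper's notation), the CP model generates each tensor entry as a sum of rank-one spatial/spectral/angular contributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# CP model as multilinear mixing: with spatial abundances A (pixels x R),
# spectra S (bands x R) and angular weights D (angles x R),
#   X[p, b, t] = sum_r A[p, r] * S[b, r] * D[t, r]
# Shapes and names are illustrative.
A = rng.random((8, 2))     # spatial factor (abundance-like)
S = rng.random((30, 2))    # spectral factor (endmember-like)
D = rng.random((4, 2))     # angular factor
X = np.einsum('pr,br,tr->pbt', A, S, D)
```

Setting the angular dimension to one recovers the classical linear mixing model, which is why the CP decomposition reads as its higher-order extension.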