
    Spectral-Spatial Classification of Hyperspectral Data based on a Stochastic Minimum Spanning Forest Approach

    In this paper, a new method for supervised hyperspectral data classification is proposed. In particular, the notion of a stochastic Minimum Spanning Forest (MSF) is introduced. For a given hyperspectral image, a pixelwise classification is first performed. From this classification map, M marker maps are generated by randomly selecting pixels and labeling them as markers for the construction of Minimum Spanning Forests. The next step consists of building an MSF from each of the M marker maps. Finally, all M realizations are aggregated with a maximum-vote decision rule in order to build the final classification map. The proposed method is tested on three data sets of airborne hyperspectral images with different resolutions and contexts. The influence of the number of markers and of the number of realizations M on the results is investigated experimentally. The performance of the proposed method is compared to several classification techniques (both pixelwise and spectral-spatial) using standard quantitative criteria and visual qualitative evaluation.
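    A minimal sketch of this framework, assuming a pixelwise classification map is already available: markers are sampled at random from that map and the M realizations are aggregated by a per-pixel maximum vote. The per-realization minimum spanning forest growth is stood in for by a user-supplied grow_msf_from_markers callable, which is a hypothetical placeholder rather than code from the paper.

import numpy as np

def sample_marker_map(class_map, n_markers, rng):
    """Randomly pick pixels of the pixelwise classification and keep their labels as markers."""
    h, w = class_map.shape
    markers = np.full((h, w), -1, dtype=int)             # -1 means "no marker here"
    idx = rng.choice(h * w, size=n_markers, replace=False)
    markers.flat[idx] = class_map.flat[idx]               # marker keeps its class label
    return markers

def majority_vote(realizations):
    """Aggregate M label maps with a per-pixel maximum-vote decision rule."""
    stacked = np.stack(realizations, axis=0)               # (M, H, W)
    n_classes = int(stacked.max()) + 1
    votes = np.stack([(stacked == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

def stochastic_msf_classification(class_map, grow_msf_from_markers,
                                  n_realizations=10, n_markers=500, seed=0):
    """class_map: (H, W) pixelwise labels; grow_msf_from_markers: marker map -> (H, W) labels."""
    rng = np.random.default_rng(seed)
    realizations = [grow_msf_from_markers(sample_marker_map(class_map, n_markers, rng))
                    for _ in range(n_realizations)]
    return majority_vote(realizations)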

    Use of the ARM Measurements of Spectral Zenith Radiance for Better Understanding of 3D Cloud-Radiation Processes & Aerosol-Cloud Interaction

    We proposed a variety of tasks centered on the following question: what can we learn about 3D cloud-radiation processes and aerosol-cloud interaction from rapid-sampling ARM measurements of spectral zenith radiance? These ARM measurements offer spectacular new and largely unexploited capabilities in both the temporal and spectral domains. Unlike most other ARM instruments, which average over many seconds or take samples many seconds apart, the new spectral zenith radiance measurements are fast enough to resolve the natural time scales of cloud change and cloud boundaries, as well as the transition zone between cloudy and clear areas. In the case of the shortwave spectrometer, the measurements offer both high time resolution and high spectral resolution, allowing new discovery-oriented science which we intend to pursue vigorously. Research objectives are, for convenience, grouped under three themes:
    • Understand the radiative signature of the transition zone between cloud-free and cloudy areas using data from ARM shortwave radiometers, which has major climatic consequences in both aerosol direct and indirect effect studies.
    • Provide cloud property retrievals from the ARM sites and the ARM Mobile Facility for studies of aerosol-cloud interactions.
    • Assess the impact of 3D cloud structures on aerosol properties using passive and active remote sensing techniques from both ARM and satellite measurements.

    Stochastic Feature Selection with Distributed Feature Spacing for Hyperspectral Data

    Feature subset selection is a well-studied problem in machine learning. One shortcoming of many methods is the selection of highly correlated features, a characteristic of hyperspectral data. A novel stochastic feature selection method with three major components is presented. First, we present an optimized feature selection method that maximizes a heuristic using a simulated annealing search, which increases the chance of avoiding locally optimal solutions. Second, we exploit local pairwise cross-correlation among the classes of interest to select features suitable for class discrimination. Third, we adopt the concept of distributed spacing from the multi-objective optimization community to distribute features across the spectrum in order to select less correlated features. The classification performance of our semi-embedded feature selection and classification method is demonstrated on a 12-class textile hyperspectral classification problem under several noise realizations. These results are compared with a variety of feature selection methods that cover a broad range of approaches.
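    As a rough illustration of the first component only, the sketch below runs a simulated annealing search over band subsets, swapping one band at a time and accepting worse subsets with a temperature-dependent probability. The scoring heuristic score_subset and all parameter values are illustrative assumptions, not the authors' exact criterion.

import numpy as np

def anneal_band_selection(n_bands, k, score_subset, n_iter=2000,
                          t0=1.0, cooling=0.995, seed=0):
    """Search for a k-band subset that maximizes score_subset(subset) by simulated annealing."""
    rng = np.random.default_rng(seed)
    current = rng.choice(n_bands, size=k, replace=False)
    best, best_score = current.copy(), score_subset(current)
    cur_score, temp = best_score, t0
    for _ in range(n_iter):
        # propose a neighbour: swap one selected band for an unselected one
        candidate = current.copy()
        out_pos = rng.integers(k)
        unused = np.setdiff1d(np.arange(n_bands), candidate)
        candidate[out_pos] = rng.choice(unused)
        cand_score = score_subset(candidate)
        # always accept improvements; accept worse moves with Boltzmann probability
        if cand_score > cur_score or rng.random() < np.exp((cand_score - cur_score) / temp):
            current, cur_score = candidate, cand_score
        if cur_score > best_score:
            best, best_score = current.copy(), cur_score
        temp *= cooling
    return best, best_score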

    A New Extended Linear Mixing Model to Address Spectral Variability

    Spectral variability is a phenomenon due, to a great extent, to variations in the illumination and atmospheric conditions within a hyperspectral image, causing the spectral signature of a material to vary within the image. Spectral fluctuations of the data due to spectral variability compromise the sum-to-one constraint of the linear mixing model (LMM) and are an important source of error in hyperspectral image analysis. Recently, spectral variability has received more attention and some techniques, such as spectral bundles, have been proposed to address it. Here, we propose the definition of an extended LMM (ELMM) to model spectral variability, and we show that the use of spectral bundles models the ELMM implicitly. We also show that constrained least squares (CLS) is an explicit modelling of the ELMM when the spectral variability is due to scaling effects. We give experimental validation that spectral bundles (and sparsity) and CLS are complementary techniques for addressing spectral variability. We finally discuss future research avenues to fully exploit the proposed ELMM.
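    For reference, the LMM and one common way of writing a scaling-based extension are shown below; the notation is illustrative and may differ from the paper's.

% Linear mixing model and a scaled extension: p endmembers m_k,
% abundances a_k, per-endmember scaling factors psi_k, additive noise n.
\begin{align}
  \mathbf{x} &= \sum_{k=1}^{p} a_k \,\mathbf{m}_k + \mathbf{n},
     \qquad a_k \ge 0,\ \sum_{k=1}^{p} a_k = 1
     && \text{(LMM)} \\
  \mathbf{x} &= \sum_{k=1}^{p} a_k \,\psi_k \,\mathbf{m}_k + \mathbf{n},
     \qquad \psi_k > 0
     && \text{(extended LMM, scaling-induced variability)}
\end{align}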

    Hyperspectral Unmixing Using a Neural Network Autoencoder

    In this paper, we present a deep learning-based method for blind hyperspectral unmixing in the form of a neural network autoencoder. We show that the linear mixture model implicitly puts certain architectural constraints on the network, and that it effectively performs blind hyperspectral unmixing. Several different architectural configurations of both shallow and deep encoders are evaluated, and deep encoders are tested using different activation functions. Furthermore, we investigate the performance of the method using three different objective functions. The proposed method is compared to other benchmark methods using real data and previously established ground truths of several common data sets. Experiments show that the proposed method compares favorably to other commonly used hyperspectral unmixing methods and exhibits robustness to noise. This is especially true when using the spectral angle distance as the network's objective function. Finally, the results indicate that a deeper and more sophisticated encoder does not necessarily give better results. This work was supported in part by the Icelandic Research Fund under Grant 174075-05 and in part by the Postdoctoral Research Fund at the University of Iceland.
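    The sketch below illustrates the general idea of such an unmixing autoencoder: a softmax encoder produces abundance-like codes and a bias-free linear decoder whose weight matrix plays the role of the endmember matrix, trained here with a spectral angle distance loss. The layer sizes, the normalization choice, and the training loop are illustrative assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn

class UnmixingAE(nn.Module):
    def __init__(self, n_bands, n_endmembers):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_bands, n_endmembers),
            nn.Softmax(dim=1),                  # abundances: non-negative, sum to one
        )
        # single linear decoder without bias: its (n_bands x n_endmembers)
        # weight matrix acts as the estimated endmember matrix
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x):
        a = self.encoder(x)
        return self.decoder(a), a

def sad_loss(x_hat, x, eps=1e-8):
    """Mean spectral angle distance between reconstruction and input spectra."""
    cos = (x_hat * x).sum(dim=1) / (x_hat.norm(dim=1) * x.norm(dim=1) + eps)
    return torch.arccos(cos.clamp(-1 + eps, 1 - eps)).mean()

# usage sketch: pixels is an (N, n_bands) float tensor of spectra
# model = UnmixingAE(n_bands=200, n_endmembers=4)
# opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# for _ in range(500):
#     x_hat, abund = model(pixels)
#     loss = sad_loss(x_hat, pixels)
#     opt.zero_grad(); loss.backward(); opt.step()
# endmembers = model.decoder.weight.detach()    # columns ~ estimated endmembers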

    Hyperspectral super-resolution of locally low rank images from complementary multisource data

    Remote sensing hyperspectral images (HSI) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images (MSI) in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases, mainly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSI are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold whose dimension is lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution via local dictionary learning using endmember induction algorithms (HSR-LDL-EIA). We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
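    The patch-wise processing loop might look roughly like the sketch below, using fixed non-overlapping windows. The actual per-patch fusion (local dictionary learning with endmember induction in the paper) is abstracted into a user-supplied fuse_patch callable; the window size and scale ratio are placeholders.

import numpy as np

def patchwise_fusion(hsi, msi, scale, patch, fuse_patch):
    """hsi: (h, w, B) low-resolution hyperspectral cube,
       msi: (h*scale, w*scale, b) high-resolution multispectral cube,
       fuse_patch: (hsi_patch, msi_patch) -> super-resolved patch of shape (ph*scale, pw*scale, B).
       Returns an (h*scale, w*scale, B) super-resolved cube."""
    h, w, B = hsi.shape
    out = np.zeros((h * scale, w * scale, B), dtype=hsi.dtype)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            hsi_p = hsi[i:i + patch, j:j + patch, :]
            msi_p = msi[i * scale:(i + patch) * scale,
                        j * scale:(j + patch) * scale, :]
            # within one patch the spectra span a very low-dimensional
            # subspace, so the local fusion problem stays well posed
            out[i * scale:(i + patch) * scale,
                j * scale:(j + patch) * scale, :] = fuse_patch(hsi_p, msi_p)
    return out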

    Could the Icelandic banking collapse of 2008 have been prevented? The role of economists prior to the crisis

    In 2008, the three main banks in Iceland collapsed. There is strong evidence that the banks would have become insolvent even without the subprime crisis. Yet, there was a marked difference of opinion at the time about the viability of the Icelandic banks. A clean bill of health was given by the commissioned reports of Mishkin in 2007 and Portes in 2008, just prior to the collapse, whereas severe reservations about the Icelandic financial system were expressed by Wade, inter alios. These contrasting views were widely debated and may well have influenced both potential and actual foreign depositors in the banks. This paper analyses the disparate arguments put forward and contrasts them with the actual outcome. It considers the influence of economists in public policy debates and draws some methodological conclusions. This is the author accepted manuscript. The final version is available from Edward Elgar Publishing via https://doi.org/10.4337/ejeep.2016.03.0

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in the search for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described. Mathematical problems and potential solutions are described, and algorithm characteristics are illustrated experimentally. This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
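    As a concrete baseline for the abundance-estimation step discussed in this overview, the sketch below inverts the linear mixing model for known endmembers using non-negative least squares, approximating the sum-to-one constraint by row augmentation. This is a textbook-style baseline under those assumptions, not a specific algorithm from the survey.

import numpy as np
from scipy.optimize import nnls

def estimate_abundances(pixels, endmembers, delta=1e3):
    """pixels: (N, B) spectra, endmembers: (B, p) signatures -> (N, p) abundances."""
    B, p = endmembers.shape
    # append a heavily weighted row of ones to softly enforce sum-to-one abundances
    E_aug = np.vstack([endmembers, delta * np.ones((1, p))])
    abundances = np.empty((pixels.shape[0], p))
    for i, x in enumerate(pixels):
        x_aug = np.append(x, delta)
        abundances[i], _ = nnls(E_aug, x_aug)   # non-negativity enforced by NNLS
    return abundances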