28 research outputs found

    Endovascular treatment of complex abdominal aortic aneurysm disease


    Hyperspectral Unmixing Using a Neural Network Autoencoder

    In this paper, we present a deep-learning-based method for blind hyperspectral unmixing in the form of a neural network autoencoder. We show that the linear mixture model implicitly places certain architectural constraints on the network, which then effectively performs blind hyperspectral unmixing. Several architectural configurations of both shallow and deep encoders are evaluated, and deep encoders are tested with different activation functions. Furthermore, we investigate the performance of the method using three different objective functions. The proposed method is compared to benchmark methods using real data and previously established ground truths for several common data sets. Experiments show that the proposed method compares favorably to other commonly used hyperspectral unmixing methods and is robust to noise, especially when the spectral angle distance is used as the network's objective function. Finally, the results indicate that a deeper and more sophisticated encoder does not necessarily give better results.

    This work was supported in part by the Icelandic Research Fund under Grant 174075-05 and in part by the Postdoctoral Research Fund at the University of Iceland. Peer reviewed.

    Synthesis of Synthetic Hyperspectral Images with Controllable Spectral Variability Using a Generative Adversarial Network

    In hyperspectral unmixing (HU), spectral variability in hyperspectral images (HSIs) is a major challenge that has received a lot of attention over the last few years. Here, we propose a method that uses a generative adversarial network (GAN) to create synthetic HSIs with a controllable degree of realistic spectral variability from existing HSIs with established ground-truth abundance maps. Such synthetic images can be a valuable tool when developing HU methods that must cope with spectral variability. We use a variational autoencoder (VAE) to investigate how the variability in the synthesized images differs from that of the original images, and we perform blind unmixing experiments on the generated images to illustrate the effect of increasing the variability.
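    Under the linear mixture model, controllable spectral variability can be illustrated by perturbing each endmember spectrum per pixel before mixing. A toy sketch of this idea (the perturbation model and all names are illustrative; this is not the paper's GAN):

```python
import numpy as np

rng = np.random.default_rng(0)
B, R, N = 50, 3, 100                  # bands, endmembers, pixels
E = rng.uniform(0.1, 1.0, (B, R))     # hypothetical endmember spectra
A = rng.dirichlet(np.ones(R), N)      # abundances on the simplex, shape (N, R)

def synthesize(variability=0.1):
    """Mix endmembers with a per-pixel multiplicative perturbation of each spectrum."""
    scales = 1.0 + variability * rng.standard_normal((N, R))
    return np.stack([(E * scales[n]) @ A[n] for n in range(N)])

X = synthesize(0.2)                   # (N, B) synthetic pixels with variability
```

    With `variability=0.0` the output reduces to the plain linear mixture `A @ E.T`; increasing it spreads each endmember's realizations around its nominal spectrum.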

    Hyperspectral Image Denoising Using Spectral-Spatial Transform-Based Sparse and Low-Rank Representations

    This article proposes a denoising method based on sparse spectral–spatial and low-rank representations (SSSLRR) using the 3-D orthogonal transform (3-DOT). SSSLRR can be used effectively to remove Gaussian and mixed noise. SSSLRR uses the 3-DOT to decompose a noisy hyperspectral image (HSI) into sparse transform coefficients. The 3-D discrete orthogonal wavelet transform (3-D DWT) is a representative 3-DOT well suited to denoising, since it concentrates the signal in a few transform coefficients; the 3-D discrete orthogonal cosine transform (3-D DCT) is another example. An SSSLRR using the 3-D DWT is called SSSLRR-DWT. SSSLRR-DWT is an iterative algorithm based on the alternating direction method of multipliers (ADMM) that uses sparse and nuclear-norm penalties. We use an ablation study to show the effectiveness of the penalties employed in the method. Experiments on both simulated and real hyperspectral datasets demonstrate that SSSLRR outperforms comparable methods in quantitative and visual assessments when removing Gaussian and mixed noise.
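    The sparse and nuclear-norm penalties in an ADMM scheme like this typically reduce to two standard proximal operators: elementwise soft-thresholding for the l1 penalty and singular value thresholding for the nuclear norm. A sketch of these two building blocks only (not the full SSSLRR-DWT algorithm):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of the l1 norm: elementwise shrinkage toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

    In each ADMM iteration, the sparse variable would be updated with `soft_threshold` and the low-rank variable with `svt`; the thresholds come from the penalty weights and the ADMM step size.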

    Sentinel-2 Image Fusion Using a Deep Residual Network

    Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites that acquires multispectral bands at 10 m, 20 m, and 60 m resolution in the visible, near-infrared (NIR), and shortwave-infrared (SWIR) ranges. In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the input, letting it focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms the comparison methods in experiments.
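    The residual design can be summarized as: the fused band is an interpolated coarse band plus a learned residual of fine detail. A toy illustration of this skip-connection idea (nearest-neighbour upsampling stands in for a proper interpolator, and no network is trained here; names are ours):

```python
import numpy as np

def upsample_nearest(band, factor):
    """Naive nearest-neighbour upsampling of a coarse band."""
    return np.kron(band, np.ones((factor, factor)))

def residual_fusion(coarse, predicted_residual, factor=2):
    # Residual design: the skip connection passes the interpolated coarse
    # band through unchanged, so the network only has to predict the
    # missing fine spatial detail.
    return upsample_nearest(coarse, factor) + predicted_residual
```

    Because the identity part of the mapping is handled by the skip connection, the residual the network must learn is close to zero-mean fine detail, which is what eases optimization of deeper architectures.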

    Unsupervised and Supervised Feature Extraction Methods for Hyperspectral Images Based on Mixtures of Factor Analyzers

    This paper proposes three feature extraction (FE) methods based on density estimation for hyperspectral images (HSIs). The methods are a mixture of factor analyzers (MFA), deep MFA (DMFA), and supervised MFA (SMFA). MFA extends the Gaussian mixture model to allow a low-dimensional representation of the Gaussians. DMFA is a deep version of MFA consisting of a two-layer MFA, i.e., samples from the posterior distribution at the first layer are input to an MFA model at the second layer. SMFA consists of a single-layer MFA and exploits label information to extract features from an HSI effectively. Based on these three FE methods, the paper also proposes a framework that automatically extracts the features most important for classification from an HSI. The overall accuracy of a classifier is used to automatically choose the optimal number of features, thereby performing dimensionality reduction (DR) before HSI classification. The performance of the MFA, DMFA, and SMFA FE methods is evaluated and compared to five different types of unsupervised and supervised FE methods using four real HSI datasets.
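    A factor analyzer models each mixture component as x = W_k z + mu_k + eps, with a low-dimensional latent z and diagonal observation noise, so sampling from an MFA is straightforward. A minimal generative sketch (the parameters here are random placeholders, not fitted values, and the code is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, K = 10, 2, 3                          # observed dim, latent dim, components
pis = np.full(K, 1.0 / K)                   # mixing weights
mus = rng.standard_normal((K, D))           # component means
Ws = 0.5 * rng.standard_normal((K, D, d))   # low-dimensional factor loadings
psi = 0.05 * np.ones(D)                     # diagonal noise variances

def sample_mfa(n):
    """Draw n samples from the MFA: pick a component k, then x = W_k z + mu_k + eps."""
    ks = rng.choice(K, size=n, p=pis)
    z = rng.standard_normal((n, d))
    eps = rng.standard_normal((n, D)) * np.sqrt(psi)
    return np.einsum('nij,nj->ni', Ws[ks], z) + mus[ks] + eps
```

    The low-dimensional loadings `Ws` are what distinguish MFA from a full-covariance Gaussian mixture: each component's covariance is constrained to W_k W_k^T + diag(psi), which is the source of the dimensionality reduction the paper exploits.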

    Multispectral and Hyperspectral Image Fusion Using a 3-D-Convolutional Neural Network
