Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution
In many computer vision applications, obtaining images of high resolution in
both the spatial and spectral domains is equally important. However, due to
hardware limitations, one can only expect to acquire images of high resolution
in either the spatial or the spectral domain. This paper focuses on hyperspectral
image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low
spatial resolution (LR) but high spectral resolution is fused with a
multispectral image (MSI) with high spatial resolution (HR) but low spectral
resolution to obtain HR HSI. Existing deep learning-based solutions are all
supervised and would need a large training set as well as the availability of
HR HSI, which is unrealistic. Here, we make the first attempt at solving the
HSI-SR problem using an unsupervised encoder-decoder architecture with the
following unique properties. First, it is composed of two encoder-decoder networks,
coupled through a shared decoder, in order to preserve the rich spectral
information from the HSI network. Second, the network encourages the
representations from both modalities to follow a sparse Dirichlet distribution
which naturally incorporates the two physical constraints of HSI and MSI.
Third, the angular difference between the representations is minimized in order to
reduce the spectral distortion. We refer to the proposed architecture as
unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results
demonstrate the superior performance of uSDN as compared to the
state-of-the-art. Comment: Accepted by the IEEE Conference on Computer Vision
and Pattern Recognition (CVPR 2018, Spotlight).
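The two constraints described above can be sketched numerically. The following NumPy fragment is an illustration of the ideas, not the authors' code: the encoders and decoders are omitted, and the "codes" are random stand-ins for real encoder outputs. It shows how a softmax maps a raw code onto the simplex (the support of a Dirichlet distribution, matching non-negativity and sum-to-one) and how the angular difference between the two modalities' representations is measured.

```python
import numpy as np

# Illustrative sketch only: the encoder-decoder networks are omitted and the
# codes below are random stand-ins for real encoder outputs.

def softmax(z):
    """Map a raw code onto the probability simplex (non-negative, sum-to-one),
    i.e., the support of a Dirichlet distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def spectral_angle(a, b):
    """Angular difference (radians) between two representations; minimizing
    this across modalities is what reduces spectral distortion."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

rng = np.random.default_rng(0)
z_hsi = rng.normal(size=8)                # hypothetical HSI-branch code
z_msi = z_hsi + 0.1 * rng.normal(size=8)  # hypothetical MSI-branch code

s_hsi, s_msi = softmax(z_hsi), softmax(z_msi)
angle = spectral_angle(s_hsi, s_msi)      # small when the two branches agree
```

In a training loop, this angle (or a differentiable surrogate of it) would be added to the reconstruction loss so that the two branches produce spectrally consistent codes.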
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and
natural conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization. Keywords: ICA, NMF, CPD, Tucker
decomposition, HOSVD, tensor networks, Tensor Train
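To make the Tucker/HOSVD machinery mentioned above concrete, here is a minimal NumPy sketch (written for this listing, not taken from the paper) of the truncated higher-order SVD: each factor matrix is formed from the leading left singular vectors of the corresponding mode unfolding, and for a tensor whose multilinear rank matches the chosen ranks the reconstruction is exact.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring the given axis to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD: factor matrices from the leading left
    singular vectors of each mode unfolding, plus the core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # multiply the core by U.T along this mode
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core by each factor matrix along its mode."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

rng = np.random.default_rng(1)
# build a tensor of multilinear rank (2, 2, 2) so truncated HOSVD is exact
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(d, 2)) for d in (5, 6, 7))
T = tucker_reconstruct(G, [A, B, C])

core, factors = hosvd(T, ranks=(2, 2, 2))
T_hat = tucker_reconstruct(core, factors)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Libraries such as TensorLy package these operations, but the point of the sketch is that the whole construction reduces to SVDs of unfoldings, which is what makes the storage and computation remarks above tractable in practice.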
Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images
In hyperspectral remote sensing data mining, it is important to take into
account both spectral and spatial information, such as the spectral
signature, texture features and morphological properties, to improve
performance, e.g., image classification accuracy. From a feature
representation point of view, a natural approach to handle this situation is to
concatenate the spectral and spatial features into a single, high
dimensional vector and then apply a dimension reduction technique
directly on that concatenated vector before feeding it into the subsequent
classifier. However, multiple features from various domains have
different physical meanings and statistical properties, and such simple
concatenation does not efficiently exploit the complementary properties among
different features, which would help boost the feature
discriminability. Furthermore, it is also difficult to interpret the
transformed results of the concatenated vector. Consequently, finding a
physically meaningful consensus low dimensional feature representation of
original multiple features is still a challenging task. To address
these issues, we propose a novel feature learning framework, i.e., a
simultaneous spectral-spatial feature selection and extraction algorithm, for
spectral-spatial feature representation and classification of hyperspectral
images. Specifically, the proposed method learns a latent low
dimensional subspace by projecting the spectral-spatial feature into a common
feature space, where the complementary information has been effectively
exploited, and simultaneously, only the most significant original features have
been transformed. Encouraging experimental results on three publicly available
hyperspectral remote sensing datasets confirm that the proposed method is
both effective and efficient.
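The projection idea can be illustrated with a deliberately simplified stand-in: in the NumPy sketch below, a plain SVD/PCA on the concatenated features plays the role of the learned common subspace, and the row norms of the projection matrix serve as a crude proxy for the feature selection score. The paper's actual algorithm learns selection and extraction jointly under structured constraints; all array names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
spectral = rng.normal(size=(n, 30))  # hypothetical spectral signatures
spatial = rng.normal(size=(n, 10))   # hypothetical texture/morphology features

# concatenate the multi-domain features into one high-dimensional vector
X = np.hstack([spectral, spatial])
X = X - X.mean(axis=0)               # center before subspace learning

# learn a shared low-dimensional subspace; plain SVD/PCA here is a
# simplified stand-in for the paper's joint selection/extraction step
k = 5
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k].T                         # (40, k) projection matrix
Z = X @ W                            # common low-dimensional representation

# score each original feature by the l2 norm of its row of W; in
# row-sparse formulations, large rows mark the most significant features
scores = np.linalg.norm(W, axis=1)
top = np.argsort(scores)[::-1][:10]  # indices of the highest-scoring features
```

Replacing the SVD with a row-sparsity-regularized objective (e.g., an l2,1 penalty on W) is what turns this kind of projection into a simultaneous selection-and-extraction method, since entire rows of W, and hence entire original features, can be driven to zero.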