Stable, Robust and Super Fast Reconstruction of Tensors Using Multi-Way Projections
In the framework of multidimensional Compressed Sensing (CS), we introduce an
analytical reconstruction formula that allows one to recover an Nth-order
data tensor
from a reduced set of multi-way compressive measurements by exploiting its low
multilinear-rank structure. Moreover, we show that an interesting property of
multi-way measurements allows us to build the reconstruction from
compressive linear measurements taken in only two selected modes, independently
of the tensor order N. In addition, it is proved that, in the matrix case and
in a particular case with 3rd-order tensors where the same 2D sensing operator
is applied to all mode-3 slices, the proposed reconstruction
is stable, in the sense that the approximation
error is comparable to the one provided by the best low-multilinear-rank
approximation, where a threshold parameter controls the
approximation error. Through the analysis of the upper bound of the
approximation error we show that, in the 2D case, an optimal value for the
threshold parameter exists, which is confirmed by our
simulation results. On the other hand, our experiments on 3D datasets show that
very good reconstructions are obtained with a fixed default value of this
parameter, which means that it does not need to be tuned. Our extensive simulation results
demonstrate the stability and robustness of the method when it is applied to
real-world 2D and 3D signals. A comparison with state-of-the-art sparsity-based
CS methods specialized for multidimensional signals is also included. A
very attractive characteristic of the proposed method is that it provides a
direct computation, i.e. it is non-iterative, in contrast to all existing
sparsity-based CS algorithms, thus providing super fast computations even for
large datasets.
Comment: Submitted to IEEE Transactions on Signal Processing
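The two-mode measurement idea above can be sketched numerically. The snippet below is a minimal illustration, not the paper's analytical formula: it assumes the mode-1 and mode-2 subspaces of the tensor are known exactly (in practice they must be estimated from the measurements), and shows that a low multilinear-rank tensor compressed in only two modes admits a direct, non-iterative recovery.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3rd-order tensor with low multilinear rank (3, 3, 3), built from a
# small Tucker core G and factor matrices U1, U2, U3.
G = rng.standard_normal((3, 3, 3))
U1 = rng.standard_normal((20, 3))
U2 = rng.standard_normal((20, 3))
U3 = rng.standard_normal((20, 3))
X = np.einsum('abc,ia,jb,kc->ijk', G, U1, U2, U3)   # shape (20, 20, 20)

# Multi-way compressive measurements taken in modes 1 and 2 only:
# Y = X x_1 A1 x_2 A2, leaving mode 3 uncompressed.
A1 = rng.standard_normal((6, 20))   # 6 < 20: compression in mode 1
A2 = rng.standard_normal((6, 20))   # compression in mode 2
Y = np.einsum('pi,qj,ijk->pqk', A1, A2, X)          # shape (6, 6, 20)

# If the mode-1 and mode-2 subspaces (here U1, U2) are known, the tensor
# is recovered by a direct multilinear inversion: since A1 @ U1 has full
# column rank, U1 @ pinv(A1 @ U1) left-inverts the compressed factor.
P1 = U1 @ np.linalg.pinv(A1 @ U1)
P2 = U2 @ np.linalg.pinv(A2 @ U2)
X_hat = np.einsum('ip,jq,pqk->ijk', P1, P2, Y)
print(np.allclose(X_hat, X))  # exact in the noiseless, exact-rank case: True
```

Note that the number of measurements, 6 x 6 x 20, is far below the 20^3 entries of X, and no compression was needed in mode 3.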
Exploiting Structural Complexity for Robust and Rapid Hyperspectral Imaging
This paper presents several strategies for spectral de-noising of
hyperspectral images and hypercube reconstruction from a limited number of
tomographic measurements. In particular, we show that the non-noisy spectral
data, when stacked across the spectral dimension, exhibits low rank. On the
other hand, under the same representation, the spectral noise exhibits a banded
structure. Motivated by this, we show that the de-noised spectral data, the
unknown spectral noise, and the affected bands can be estimated simultaneously
through a joint low-rank and simultaneously-sparse minimization, without prior
knowledge of the noisy bands. This result is novel for
hyperspectral imaging applications. In addition, we show that imaging for
Computed Tomography Imaging Systems (CTIS) can be improved under limited-angle
tomography by using low-rank penalization. For both of these cases we exploit
recent results in the theory of low-rank matrix completion using nuclear
norm minimization.
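The core low-rank ingredient of such nuclear-norm formulations is singular value thresholding, the proximal operator of the nuclear norm. The sketch below is a generic illustration of that building block only, not the paper's joint low-rank-plus-banded-sparse solver; the synthetic data model (low-rank spectra plus noise confined to a few bands) and the threshold value are assumptions for the demo.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: prox of tau * nuclear norm at M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
# Low-rank "clean" spectra stacked as a (pixels x bands) matrix, plus
# noise confined to a few bands (the banded structure described above).
L = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 50))
N = np.zeros((200, 50))
N[:, 10:14] = 2.0 * rng.standard_normal((200, 4))   # noisy bands only
X = L + N

L_hat = svt(X, tau=20.0)
print(np.linalg.norm(L_hat - L) / np.linalg.norm(L))  # relative error
```

With tau = 0 the operator is the identity; increasing tau shrinks the spectrum and suppresses the components introduced by the banded noise.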
Joint Sensing Matrix and Sparsifying Dictionary Optimization for Tensor Compressive Sensing.
Tensor compressive sensing (TCS) is a multidimensional framework of compressive sensing (CS), and it is
advantageous in terms of reducing the amount of storage, easing
hardware implementations, and preserving multidimensional
structures of signals in comparison to a conventional CS system.
In a TCS system, instead of using a random sensing matrix and
a predefined dictionary, the average-case performance can be
further improved by employing an optimized multidimensional
sensing matrix and a learned multilinear sparsifying dictionary.
In this paper, we propose an approach that jointly optimizes
the sensing matrix and dictionary for a TCS system. For the
sensing matrix design in TCS, an extended separable approach
with a closed form solution and a novel iterative nonseparable
method are proposed when the multilinear dictionary is fixed.
In addition, a multidimensional dictionary learning method that
takes advantage of the multidimensional structure is derived,
and the influence of sensing matrices is taken into account in the
learning process. A joint optimization is achieved via alternately
iterating the optimization of the sensing matrix and dictionary.
Numerical experiments using both synthetic data and real images
demonstrate the superiority of the proposed approach.
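The storage and hardware advantages of a separable (Kronecker-structured) sensing matrix mentioned above follow from the identity relating mode-wise products to the vectorized CS model. The snippet below uses random stand-in matrices, not optimized designs, purely to exhibit the equivalence and the size gap.

```python
import numpy as np

rng = np.random.default_rng(2)
A1 = rng.standard_normal((8, 32))    # mode-1 sensing matrix
A2 = rng.standard_normal((8, 32))    # mode-2 sensing matrix
X = rng.standard_normal((32, 32))    # 2D signal

# Separable (tensor) measurement: operate mode-wise, never forming
# the full Kronecker sensing matrix.
Y = A1 @ X @ A2.T                    # 8 x 8 measurements

# Equivalent vectorized CS view: y = (A1 kron A2) vec(X), but the
# Kronecker matrix is (8*8) x (32*32), far larger to store.
K = np.kron(A1, A2)
y = K @ X.reshape(-1)                # row-major vec matches this ordering
print(np.allclose(Y.reshape(-1), y))  # True
print(A1.size + A2.size, K.size)      # 512 vs 65536 stored entries
```

Only the two small factors need to be stored or implemented in hardware; the TCS design problem is then to optimize these factors (and the multilinear dictionary) rather than one huge unstructured matrix.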
Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)
The implicit objective of the biennial "international Traveling Workshop on
Interactions between Sparse models and Technology" (iTWIST) is to foster
collaboration between international scientific teams by disseminating ideas
through both specific oral/poster presentations and free discussions. For its
second edition, the iTWIST workshop took place in the medieval and picturesque
town of Namur in Belgium, from Wednesday August 27th till Friday August 29th,
2014. The workshop was conveniently located in "The Arsenal" building within
walking distance of both hotels and town center. iTWIST'14 has gathered about
70 international participants and has featured 9 invited talks, 10 oral
presentations, and 14 posters on the following themes, all related to the
theory, application and generalization of the "sparsity paradigm":
Sparsity-driven data sensing and processing; Union of low dimensional
subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph
sensing/processing; Blind inverse problems and dictionary learning; Sparsity
and computational neuroscience; Information theory, geometry and randomness;
Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?;
Sparse machine learning and inference.
Comment: 69 pages, 24 extended abstracts; iTWIST'14 website:
http://sites.google.com/site/itwist1
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
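The Tucker model and HOSVD named in the keywords can be sketched in a few lines. This is a minimal truncated higher-order SVD for a 3rd-order tensor, illustrating the decomposition itself rather than any specific algorithm from the survey; the tensor sizes and ranks are arbitrary choices for the demo.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated higher-order SVD (Tucker approximation) of a 3rd-order tensor."""
    # Leading left singular vectors of each unfolding span the mode subspaces.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    # Core tensor: project T onto the three mode subspaces.
    G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])
    # Reconstruction from core and factors.
    T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
    return G, U, T_hat

rng = np.random.default_rng(3)
# An exactly multilinear-rank-(2, 2, 2) tensor is recovered exactly.
G0 = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((10, 2)) for _ in range(3)]
T = np.einsum('abc,ia,jb,kc->ijk', G0, Us[0], Us[1], Us[2])
G, U, T_hat = hosvd(T, (2, 2, 2))
print(np.allclose(T_hat, T))  # True
```

For tensors of only approximately low multilinear rank, the same truncation yields a quasi-optimal Tucker approximation, which is the storage-reduction mechanism the abstract alludes to for big datasets.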
Iterative enhanced multivariance products representation for effective compression of hyperspectral images.
Effective compression of hyperspectral (HS) images is essential due to their large data volume. Since these images are high-dimensional, processing them is another challenging issue. In this work, an efficient lossy HS image compression method based on enhanced multivariance products representation (EMPR) is proposed. As an efficient data decomposition method, EMPR enables us to represent the given multidimensional data with lower-dimensional entities. EMPR, as a finite expansion with relevant approximations, can be acquired by truncating this expansion at certain levels. Thus, EMPR can be utilized as a highly effective lossy compression algorithm for hyperspectral images. In addition, an efficient variant of EMPR is introduced in this article in order to increase the compression efficiency. The results are benchmarked against several state-of-the-art lossy compression methods. It is observed that both higher peak signal-to-noise ratio values and improved classification accuracy are achieved with the EMPR-based methods.
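The peak signal-to-noise ratio used to benchmark such lossy compressors has a standard definition, sketched below. This is the generic metric, not anything EMPR-specific; note that the choice of peak value (data maximum vs. a fixed bit-depth maximum such as 255) varies between papers.

```python
import numpy as np

def psnr(ref, rec, peak=None):
    """Peak signal-to-noise ratio in dB between a reference cube `ref`
    and its lossy reconstruction `rec`."""
    ref = np.asarray(ref, dtype=float)
    rec = np.asarray(rec, dtype=float)
    mse = np.mean((ref - rec) ** 2)          # mean squared error
    peak = ref.max() if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: an 8-bit-scale image reconstructed with unit error per pixel.
ref = np.full((4, 4), 255.0)
rec = ref - 1.0
print(round(psnr(ref, rec), 2))  # 48.13 dB
```

Higher PSNR at a fixed compression ratio indicates a better lossy compressor, which is how the EMPR-based methods are compared above.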
Nonlocal tensor sparse representation and low-rank regularization for hyperspectral image compressive sensing reconstruction
Hyperspectral image compressive sensing reconstruction (HSI-CSR) is an important issue in remote sensing and has recently been investigated increasingly via sparsity-prior-based approaches. However, most available HSI-CSR methods consider the sparsity prior in the spatial and spectral vector domains by vectorizing hyperspectral cubes along a certain dimension. Besides, in most previous works, little attention has been paid to exploiting the underlying nonlocal structure in the spatial domain of the HSI. In this paper, we propose a nonlocal tensor sparse and low-rank regularization (NTSRLR) approach, which can encode the essential structured sparsity of an HSI and exploit its advantages for the HSI-CSR task. Specifically, we study how to reasonably utilize the l1-based sparsity of the core tensor and the tensor nuclear norm function as tensor sparse and low-rank regularizers, respectively, to describe the nonlocal spatial-spectral correlation hidden in an HSI. To solve the minimization problem of the proposed algorithm, we design a fast implementation strategy based on the alternating direction method of multipliers (ADMM). Experimental results on various HSI datasets verify that the proposed HSI-CSR algorithm significantly outperforms existing state-of-the-art CSR techniques for HSI recovery.
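The ADMM strategy mentioned above follows a standard splitting template. The sketch below applies that generic template to the simplest l1-regularized least-squares problem; it is an illustration of plain ADMM, not the NTSRLR algorithm, whose subproblems involve tensor-sparse and nuclear-norm terms instead. Problem sizes and parameters are arbitrary demo choices.

```python
import numpy as np

def soft(v, k):
    """Soft-thresholding: prox of k * l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z."""
    m, n = A.shape
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    Atb = A.T @ b
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        # x-update: solve the regularized least-squares subproblem.
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal step (here l1; NTSRLR would use tensor
        # sparse / nuclear-norm proximal steps instead).
        z = soft(x + u, lam / rho)
        # dual ascent on the consensus constraint x = z.
        u = u + x - z
    return z

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100)
x0[[5, 30, 77]] = [1.5, -2.0, 1.0]       # 3-sparse ground truth
b = A @ x0                               # noiseless measurements
x_hat = admm_lasso(A, b, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # indices of the large entries
```

The per-iteration structure, a smooth data-fit solve, one or more proximal steps, and a dual update, is what makes ADMM a natural fast solver for composite regularizers like the sparse-plus-low-rank objective above.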