Nonparametric Simultaneous Sparse Recovery: an Application to Source Localization
We consider the multichannel sparse recovery problem, where the objective is
to recover jointly sparse unknown signal vectors from multiple measurement
vectors that are different linear combinations of the
same known elementary vectors. Many popular greedy or convex algorithms perform
poorly under non-Gaussian heavy-tailed noise conditions or in the face of
outliers. In this paper, we propose using mixed norms on the
data fidelity (residual matrix) term and the conventional ℓ0-norm
constraint on the signal matrix to promote row-sparsity. We devise a greedy
pursuit algorithm based on the simultaneous normalized iterative hard
thresholding (SNIHT) algorithm. Simulation studies highlight the effectiveness
of the proposed approaches in coping with different noise environments (i.i.d.,
row i.i.d., etc.) and with outliers. The usefulness of the methods is
illustrated in a source localization application with sensor arrays. Comment:
Paper appears in Proc. European Signal Processing Conference (EUSIPCO'15),
Nice, France, Aug 31 -- Sep 4, 2015
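The core building block of SNIHT-type methods is a projection that keeps only the K rows of the signal matrix with largest energy. A minimal NumPy sketch of that row-sparsity projection (the function name and the choice of the row-ℓ2 norm as the ranking criterion are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def row_hard_threshold(X, K):
    """Keep the K rows of X with the largest l2 norm; zero out the rest.

    This is the row-sparsity projection used inside simultaneous
    (normalized) iterative hard thresholding iterations.
    """
    row_norms = np.linalg.norm(X, axis=1)      # energy of each row
    keep = np.argsort(row_norms)[-K:]          # indices of the K largest rows
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

X = np.array([[1.0, 1.0],
              [0.1, 0.0],
              [3.0, -2.0],
              [0.0, 0.2]])
Y = row_hard_threshold(X, 2)                   # rows 0 and 2 survive
```

In a full SNIHT iteration this projection would be applied after each (normalized) gradient step on the data-fidelity term.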
Scalable Algorithms for Tractable Schatten Quasi-Norm Minimization
The Schatten-p quasi-norm is often used in place of the standard
nuclear norm in order to approximate the rank function more accurately.
However, existing Schatten-p quasi-norm minimization algorithms involve
singular value decomposition (SVD) or eigenvalue decomposition (EVD) in each
iteration, and thus may become very slow and impractical for large-scale
problems. In this paper, we first define two tractable Schatten quasi-norms,
i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove
that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively,
which lead to the design of very efficient algorithms that only need to update
two much smaller factor matrices. We also design two efficient proximal
alternating linearized minimization algorithms for solving representative
matrix completion problems. Finally, we provide the global convergence and
performance guarantees for our algorithms, which have better convergence
properties than existing algorithms. Experimental results on synthetic and
real-world data show that our algorithms are more accurate than the
state-of-the-art methods, and are orders of magnitude faster. Comment: 16
pages, 5 figures. Appears in Proceedings of the 30th AAAI Conference on
Artificial Intelligence (AAAI), Phoenix, Arizona, USA, pp. 2016--2022, 2016
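The factored quasi-norms above build on a classical idea: a spectral penalty on a large matrix X can be rewritten as a penalty on two much smaller factors U, V with X = UVᵀ, so no SVD of X is needed during optimization. A sketch of the prototype identity, the variational form of the nuclear norm ||X||_* = min over X = UVᵀ of ½(||U||_F² + ||V||_F²), verified numerically (this is the well-known nuclear-norm case, not the paper's exact Frobenius/nuclear hybrid or bi-nuclear definitions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-3 test matrix X = L @ R
L = rng.standard_normal((50, 3))
R = rng.standard_normal((3, 40))
X = L @ R

# Nuclear norm computed directly (sum of singular values)
nuc = np.linalg.norm(X, ord='nuc')

# A balanced factorization U = P sqrt(S), V = Q sqrt(S) attains the
# variational form ||X||_* = min_{X = U V^T} (||U||_F^2 + ||V||_F^2) / 2
P, s, Qt = np.linalg.svd(X, full_matrices=False)
U = P * np.sqrt(s)          # scale columns by sqrt of singular values
V = Qt.T * np.sqrt(s)
factored = 0.5 * (np.linalg.norm(U, 'fro') ** 2
                  + np.linalg.norm(V, 'fro') ** 2)

assert np.isclose(nuc, factored)
```

In an actual solver the factors U and V are optimized directly (e.g., by proximal alternating linearized minimization), which is what avoids per-iteration SVDs of the full matrix.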
Non-convex regularization in remote sensing
In this paper, we study the effect of different regularizers and their
implications in high dimensional image classification and sparse linear
unmixing. Although kernelization or sparse methods are globally accepted
solutions for processing data in high dimensions, we present here a study on
the impact of the form of regularization used and its parametrization. We
consider regularization via the traditional squared (ℓ2) and sparsity-promoting
(ℓ1) norms, as well as more unconventional nonconvex regularizers (ℓp and the
Log Sum Penalty). We compare their properties and advantages on several
classification and linear unmixing tasks and provide advice on the choice of
the best regularizer for the problem at hand. Finally, we also provide a fully
functional toolbox for the community. Comment: 11 pages, 11 figures
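The practical difference between these regularizers can be seen from how they price a large coefficient relative to a small one. A minimal sketch (the parameter choices p = 0.5 and eps = 0.1 are assumptions for illustration, not values from the paper):

```python
import numpy as np

def l2_penalty(w):
    return np.sum(w ** 2)

def l1_penalty(w):
    return np.sum(np.abs(w))

def lp_penalty(w, p=0.5):
    # nonconvex lp "norm" (0 < p < 1), here with an assumed p = 0.5
    return np.sum(np.abs(w) ** p)

def lsp_penalty(w, eps=0.1):
    # Log Sum Penalty with an assumed smoothing parameter eps
    return np.sum(np.log(1.0 + np.abs(w) / eps))

# Growing a coefficient from 1 to 10 costs 10x under the l1 norm, but
# much less under the nonconvex lp and Log Sum penalties: they penalize
# large (informative) coefficients relatively less, which reduces the
# shrinkage bias of convex sparse regularization.
one, ten = np.array([1.0]), np.array([10.0])
print(l1_penalty(ten) / l1_penalty(one))    # 10.0
print(lp_penalty(ten) / lp_penalty(one))    # ~3.16
print(lsp_penalty(ten) / lsp_penalty(one))  # ~1.92
```

This sublinear growth is exactly what makes ℓp and LSP better sparsity surrogates than ℓ1 in some settings, at the price of a nonconvex objective.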