Covariance Eigenvector Sparsity for Compression and Denoising
Sparsity in the eigenvectors of signal covariance matrices is exploited in
this paper for compression and denoising. Dimensionality reduction (DR) and
quantization modules, present in many practical compression schemes such as
transform codecs, are designed to capitalize on this form of sparsity and
achieve improved reconstruction performance compared to existing
sparsity-agnostic codecs. Using training data that may be noisy, a novel
sparsity-aware linear DR scheme is developed to fully exploit sparsity in the
covariance eigenvectors and form noise-resilient estimates of the principal
covariance eigenbasis. Sparsity is effected via norm-one regularization, and
the associated minimization problems are solved using computationally efficient
coordinate descent iterations. The resulting eigenspace estimator is shown
capable of identifying a subset of the unknown support of the eigenspace basis
vectors even when the observation noise covariance matrix is unknown, as long
as the noise power is sufficiently low. It is proved that the sparsity-aware
estimator is asymptotically normal and that the probability of correctly
identifying the signal subspace basis support approaches one as the number of
training data grows large. Simulations using synthetic data and images
corroborate that the proposed algorithms achieve improved reconstruction
quality relative to alternatives.
Comment: IEEE Transactions on Signal Processing, 2012 (to appear)
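For intuition, the sketch below shows how norm-one regularization produces sparse covariance eigenvectors. It uses a soft-thresholded power iteration, a standard stand-in rather than the paper's coordinate-descent solver; the planted-support toy data, the value of lam, and the function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 (norm-one) penalty.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_leading_eigenvector(S, lam=0.5, n_iter=200, tol=1e-8):
    # l1-regularized power iteration: illustrates sparsity-aware
    # eigenvector estimation (not the paper's coordinate-descent method).
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(n_iter):
        w = soft_threshold(S @ v, lam)
        norm = np.linalg.norm(w)
        if norm < tol:          # penalty zeroed everything; lam too large
            return w
        w /= norm
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

# Toy data: a planted sparse principal direction observed in noise.
rng = np.random.default_rng(0)
d, n = 50, 400
u = np.zeros(d)
u[:5] = 1.0 / np.sqrt(5.0)                     # sparse ground-truth eigenvector
scores = np.sqrt(8.0) * rng.standard_normal(n)
X = np.outer(scores, u) + 0.5 * rng.standard_normal((n, d))
S = X.T @ X / n                                # noisy sample covariance
v_hat = sparse_leading_eigenvector(S, lam=0.5)
print("recovered support:", np.flatnonzero(v_hat))   # expect indices 0..4
```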
Low-Complexity OFDM Spectral Precoding
This paper proposes a new large-scale mask-compliant spectral precoder
(LS-MSP) for orthogonal frequency division multiplexing (OFDM) systems. We
first consider a previously proposed mask-compliant spectral precoding
scheme that relies on a generic convex optimization solver and therefore suffers from
high computational complexity, notably in large-scale systems. To mitigate the
complexity of computing the LS-MSP, we propose a divide-and-conquer approach
that breaks the original problem into smaller rank-1 quadratically constrained
problems, each of which admits a closed-form solution. Based on these
solutions, we develop three specialized first-order low-complexity algorithms,
based on 1) projections onto convex sets and 2) the alternating direction
method of multipliers. We also develop an algorithm that capitalizes on the
closed-form solutions for the rank-1 quadratic constraints, which is referred
to as 3) semi-analytical spectral precoding. Numerical results show that the
proposed LS-MSP techniques outperform previously proposed techniques in terms
of the computational burden while complying with the spectrum mask. The results
also indicate that 3) typically needs three iterations to achieve results
similar to those of 1) and 2), at the expense of slightly increased
computational complexity.
Comment: Accepted at the IEEE International Workshop on Signal Processing
Advances in Wireless Communications (SPAWC), 201
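The closed-form piece the divide-and-conquer approach builds on can be shown compactly: projecting a symbol vector onto a single rank-1 quadratic constraint {x : |a^H x| <= b} has an explicit solution, and cycling such projections gives a POCS-style precoder in the spirit of approach 1). The sketch below is a hedged illustration with assumed shapes, constraint values, and function names, not the paper's LS-MSP implementation.

```python
import numpy as np

def project_rank1(x, a, b):
    # Closed-form projection of x onto {x : |a^H x| <= b},
    # a single rank-1 quadratic mask constraint.
    c = np.vdot(a, x)                      # a^H x
    mag = np.abs(c)
    if mag <= b:
        return x                           # already feasible
    # Move along a until the constraint holds with equality.
    return x - ((mag - b) * (c / mag) / np.vdot(a, a).real) * a

def pocs_precoder(d, A, b, n_sweeps=20):
    # Cyclic projections (POCS) over all mask constraints, started
    # from the unprecoded symbols d; a sketch, not the exact algorithm.
    x = d.astype(complex)
    for _ in range(n_sweeps):
        for a_k, b_k in zip(A, b):
            x = project_rank1(x, a_k, b_k)
    return x

# Toy example: 64 subcarriers, 8 mask constraints.
rng = np.random.default_rng(0)
N, K = 64, 8
d = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)  # QPSK
A = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
b = 0.5 * np.ones(K)
x = pocs_precoder(d, A, b)
print("max constraint value:", np.max(np.abs(A.conj() @ d)),
      "->", np.max(np.abs(A.conj() @ x)))
```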
A Unified Framework for Sparse Non-Negative Least Squares using Multiplicative Updates and the Non-Negative Matrix Factorization Problem
We study the sparse non-negative least squares (S-NNLS) problem. S-NNLS
occurs naturally in a wide variety of applications where an unknown,
non-negative quantity must be recovered from linear measurements. We present a
unified framework for S-NNLS based on a rectified power exponential scale
mixture prior on the sparse codes. We show that the proposed framework
encompasses a large class of S-NNLS algorithms and provide a computationally
efficient inference procedure based on multiplicative update rules. Such update
rules are convenient for solving large sets of S-NNLS problems simultaneously,
which is required in contexts like sparse non-negative matrix factorization
(S-NMF). We provide theoretical justification for the proposed approach by
showing that the local minima of the objective function being optimized are
sparse and the S-NNLS algorithms presented are guaranteed to converge to a set
of stationary points of the objective function. We then extend our framework to
S-NMF, showing that our framework leads to many well-known S-NMF algorithms
under specific choices of prior and providing a guarantee that a popular
subclass of the proposed algorithms converges to a set of stationary points of
the objective function. Finally, we study the performance of the proposed
approaches on synthetic and real-world data.
Comment: To appear in Signal Processing
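As a concrete instance, the classic Hoyer-style multiplicative update for l1-penalized NNLS is one member of the family such a framework unifies. The sketch below (assumed names, toy dimensions, and regularization weight) shows how a single update rule solves many S-NNLS problems simultaneously, as required in S-NMF.

```python
import numpy as np

def snnls_multiplicative(A, Y, lam=0.1, n_iter=500, eps=1e-12):
    # Minimize ||Y - A @ H||_F^2 + lam * H.sum() over H >= 0 with a
    # classic multiplicative update (valid when A and Y are
    # non-negative); one instance of the unified family, not its
    # full generality.
    H = np.full((A.shape[1], Y.shape[1]), 0.5)
    AtY = A.T @ Y
    AtA = A.T @ A
    for _ in range(n_iter):
        # Elementwise ratio preserves non-negativity; lam in the
        # denominator shrinks small coefficients toward zero.
        H *= AtY / (AtA @ H + lam + eps)
    return H

# Toy problem: recover sparse non-negative codes for many signals at once.
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((30, 60)))            # non-negative dictionary
H_true = np.where(rng.random((60, 100)) < 0.05,
                  rng.random((60, 100)), 0.0)        # sparse codes
Y = A @ H_true
H_hat = snnls_multiplicative(A, Y, lam=0.1)
print("mean reconstruction error:", np.mean(np.abs(Y - A @ H_hat)))
```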
BPGrad: Towards Global Optimality in Deep Learning via Branch and Pruning
Understanding the global optimality in deep learning (DL) has been attracting
more and more attention recently. Conventional DL solvers, however, have not
been intentionally developed to seek such global optimality. In this paper
we propose a novel approximation algorithm, BPGrad, towards optimizing deep
models globally via branch and pruning. Our BPGrad algorithm is based on the
assumption of Lipschitz continuity in DL, and as a result it can adaptively
determine the step size for the current gradient given the history of previous
updates, such that theoretically no smaller step can achieve the global
optimality. We prove that, by repeating such a branch-and-pruning procedure, we
can locate the global optimum within a finite number of iterations. Empirically, an
efficient solver based on BPGrad for DL is proposed as well, and it outperforms
conventional DL solvers such as Adagrad, Adadelta, RMSProp, and Adam in the
tasks of object recognition, detection, and segmentation.
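The step-size rule can be sketched from the Lipschitz assumption alone: if f is L-Lipschitz and f_lower lower-bounds the global minimum, every better point lies at least (f(x) - f_lower)/L away from x, so no shorter step can reach it. The code below is a simplified toy reading of that idea with assumed names and a known lower bound, not the authors' full solver.

```python
import numpy as np

def bpgrad_step(f_x, f_lower, grad, L):
    # If |f(x) - f(y)| <= L * ||x - y||, any y with f(y) <= f_lower
    # satisfies ||x - y|| >= (f_x - f_lower) / L, so a shorter step
    # along any direction cannot reach the global optimum.
    return (f_x - f_lower) / (L * np.linalg.norm(grad) + 1e-12)

# Toy demo on a 2-D quadratic with known lower bound f_lower = 0.
f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
L = 8.0                              # Lipschitz bound for f on ||x|| <= 4
x = np.array([3.0, -2.0])
for _ in range(30):
    eta = bpgrad_step(f(x), 0.0, grad_f(x), L)
    x = x - eta * grad_f(x)
print("final loss:", f(x))           # decreases toward the global minimum 0
```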