Iterative Log Thresholding
Sparse reconstruction approaches using the re-weighted l1-penalty have been
shown, both empirically and theoretically, to provide a significant improvement
in recovering sparse signals in comparison to the l1-relaxation. However,
numerical optimization of such penalties involves solving problems with
l1-norms in the objective many times. Using the direct link of reweighted
l1-penalties to the concave log-regularizer for sparsity, we derive a simple
prox-like algorithm for the log-regularized formulation. The proximal splitting
step of the algorithm has a closed-form solution, and we call the algorithm
'log-thresholding' in analogy to soft thresholding for the l1-penalty.
We establish convergence results, and demonstrate that log-thresholding
provides more accurate sparse reconstructions compared to both soft and hard
thresholding. Furthermore, the approach can be directly extended to
optimization over matrices with a penalty on rank (i.e., the nuclear norm
penalty and its re-weighted version), where we suggest a singular-value
log-thresholding approach.
Comment: 5 pages, 4 figures
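To make the closed-form proximal step concrete, here is a minimal NumPy sketch, assuming the penalty takes the form lam * log(eps + |x|) per coordinate (the paper's exact parameterization may differ); the function name and the zero-comparison rule are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def log_threshold(y, lam, eps):
    """Elementwise prox of x -> lam * log(eps + |x|) (a sketch).

    Per coordinate it solves  min_x 0.5*(x - y)**2 + lam*log(eps + |x|).
    Stationarity gives a quadratic in |x|; the larger root is the
    candidate local minimizer, which is then compared against x = 0,
    since the penalty is concave and the problem is nonconvex.
    """
    a = np.abs(y)
    disc = (a + eps) ** 2 - 4.0 * lam                 # discriminant
    cand = 0.5 * ((a - eps) + np.sqrt(np.maximum(disc, 0.0)))
    cand = np.where(disc >= 0.0, np.maximum(cand, 0.0), 0.0)
    f0 = 0.5 * a ** 2 + lam * np.log(eps)             # objective at 0
    fc = 0.5 * (cand - a) ** 2 + lam * np.log(eps + cand)
    return np.sign(y) * np.where(fc < f0, cand, 0.0)
```

Unlike soft thresholding, the surviving coordinates are shrunk by an amount that vanishes as |y| grows, which is the source of the reduced bias on large coefficients.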
Iterative Reweighted Algorithms for Sparse Signal Recovery with Temporally Correlated Source Vectors
Iterative reweighted algorithms, as a class of algorithms for sparse signal
recovery, have been found to have better performance than their non-reweighted
counterparts. However, for the multiple measurement vector (MMV) problem, none
of the existing reweighted algorithms account for temporal correlation among
the source vectors, and their performance therefore degrades significantly
when such correlation is present. In this work we propose an
iterative reweighted sparse Bayesian learning (SBL) algorithm exploiting the
temporal correlation, and motivated by it, we propose a strategy to improve
existing reweighted algorithms for the MMV problem, namely replacing their row
norms with a Mahalanobis distance measure. Simulations show that the proposed
reweighted SBL algorithm has superior performance, and that the proposed
improvement strategy is effective for existing reweighted algorithms.
Comment: Accepted by ICASSP 2011
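As a concrete illustration of the improvement strategy, the sketch below modifies a FOCUSS-style reweighted-l2 MMV solver so that the row weight uses the Mahalanobis measure x_i B^{-1} x_i^T in place of the squared row norm ||x_i||_2^2. The function name, the specific weight update, and the assumption that the temporal correlation matrix B is given (rather than jointly estimated, as in the SBL algorithm) are all illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def mahalanobis_reweighted_mmv(A, Y, B, lam=1e-2, n_iter=30, eta=1e-8):
    """Reweighted-l2 MMV sketch with Mahalanobis row weights.

    A: (m, n) dictionary, Y: (m, L) measurements, B: (L, L) temporal
    correlation of the sources. Returns the (n, L) source estimate.
    """
    m, n = A.shape
    B_inv = np.linalg.inv(B)
    w = np.ones(n)                         # per-row weights
    for _ in range(n_iter):
        W = np.diag(w)
        # weighted minimum-norm update for the source matrix X
        X = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(m), Y)
        # Mahalanobis row measure replaces the plain row norm
        w = np.sqrt(np.einsum('il,lk,ik->i', X, B_inv, X)) + eta
    return X
```

With B = I this reduces to the usual row-norm reweighting, which is why the replacement is a drop-in change for existing MMV algorithms.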
Distributed Reconstruction of Nonlinear Networks: An ADMM Approach
In this paper, we present a distributed algorithm for the reconstruction of
large-scale nonlinear networks. In particular, we focus on the identification
from time-series data of the nonlinear functional forms and associated
parameters of large-scale nonlinear networks. Recently, a nonlinear network
reconstruction problem was formulated as a nonconvex optimisation problem based
on the combination of a marginal likelihood maximisation procedure with
sparsity-inducing priors. Using the concave-convex procedure (CCCP), an iterative
reweighted lasso algorithm was derived to solve the initial nonconvex
optimisation problem. By exploiting the structure of the objective function of
this reweighted lasso algorithm, a distributed algorithm can be designed. To
this end, we apply the alternating direction method of multipliers (ADMM) to
decompose the original problem into several subproblems. To illustrate the
effectiveness of the proposed methods, we use our approach to identify a
network of interconnected Kuramoto oscillators at different network sizes
(500 to 100,000 nodes).
Comment: To appear in the Preprints of the 19th IFAC World Congress 2014
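For flavor, here is a sketch of the ADMM splitting applied to a single weighted-lasso subproblem of the kind each reweighting step produces, min_x 0.5*||Ax - b||^2 + sum_i w_i |x_i|. This compact single-machine version only shows the splitting idea; the paper's distributed variant further decomposes the problem across subproblems, which is not reproduced here.

```python
import numpy as np

def admm_weighted_lasso(A, b, w, rho=1.0, n_iter=100):
    """ADMM for min_x 0.5*||Ax - b||^2 + sum_i w_i*|x_i| (a sketch)."""
    m, n = A.shape
    Atb = A.T @ b
    # cache the Cholesky factor reused by every x-update
    Lfac = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(n_iter):
        # x-update: ridge-regularized least squares via the cached factor
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(Lfac.T, np.linalg.solve(Lfac, rhs))
        # z-update: elementwise weighted soft thresholding
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - w / rho, 0.0)
        # dual ascent on the scaled multiplier
        u = u + x - z
    return z
```

Because the x-update and z-update decouple, each can be distributed over blocks of variables or data, which is what makes the approach scale to networks of the sizes reported above.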
Schatten-p Quasi-Norm Regularized Matrix Optimization via Iterative Reweighted Singular Value Minimization
In this paper we study general Schatten-p quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized vector
minimization problem are equivalent to those of an SPQN regularized matrix
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized vector
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
Comment: This paper has been withdrawn by the author due to major revision and
correction.
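To illustrate the flavor of such a scheme (not the paper's exact IRSVM method), the sketch below performs one prox-linear step for min_X f(X) + lam * sum_i sigma_i(X)^p with 0 < p < 1: the quasi-norm is linearized through weights w_i = p*(sigma_i + eps)^(p-1), and the resulting subproblem is a weighted singular value thresholding, which has a closed-form solution via the SVD. The step size tau, the smoothing eps, and the function names are illustrative assumptions.

```python
import numpy as np

def irsvm_step(X, grad_f, lam, p=0.5, eps=1e-6, tau=1.0):
    """One reweighted singular value minimization step (a sketch).

    Linearizes sum_i sigma_i(X)^p at the current iterate, then solves
    the weighted singular value thresholding subproblem in closed form.
    """
    # weights from the current singular values (small values get large
    # weights, mimicking the concave quasi-norm)
    sigma = np.linalg.svd(X, compute_uv=False)
    w = p * (sigma + eps) ** (p - 1.0)
    # gradient step on the smooth part, then weighted thresholding
    G = X - tau * grad_f(X)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_new = np.maximum(s - tau * lam * w, 0.0)
    return U @ np.diag(s_new) @ Vt
```

The closed-form subproblem solution is exactly what the abstract refers to; the analytical difficulty lies in the convergence proof, not in computing the iterates.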