A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori or their
direct computation is too intensive, and they have thus to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm associated with a memory gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
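The abstract does not spell out the MM subspace updates themselves. As a minimal point of reference for the problem it targets, here is a hedged sketch of online penalized least-squares estimation via stochastic proximal-gradient steps with an l1 penalty; this is a simpler stand-in, not the paper's MM subspace algorithm, and all function names and parameter values are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_penalized_ls(samples, dim, step=0.01, lam=0.001):
    """Stochastic proximal-gradient estimate of w minimizing
    E[(d - x.T w)^2] + lam * ||w||_1, one (x, d) pair at a time."""
    w = np.zeros(dim)
    for x, d in samples:
        grad = -2.0 * (d - x @ w) * x          # stochastic gradient of the LS term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy sparse filter identification
rng = np.random.default_rng(0)
w_true = np.zeros(20)
w_true[[2, 7, 15]] = [1.0, -0.5, 0.8]
data = []
for _ in range(5000):
    x = rng.standard_normal(20)
    data.append((x, x @ w_true + 0.01 * rng.standard_normal()))
w_hat = online_penalized_ls(data, 20)
```

The paper's contribution is replacing such plain stochastic gradient steps with an MM subspace (e.g. memory gradient) update and proving convergence of the resulting online scheme.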
Sparsity-Aware Adaptive Algorithms Based on Alternating Optimization with Shrinkage
This letter proposes a novel sparsity-aware adaptive filtering scheme and
algorithms based on an alternating optimization strategy with shrinkage. The
proposed scheme employs a two-stage structure that consists of an alternating
optimization of a diagonally-structured matrix that speeds up the convergence
and an adaptive filter with a shrinkage function that forces the coefficients
with small magnitudes to zero. We devise alternating optimization least-mean
square (LMS) algorithms for the proposed scheme and analyze its mean-square
error. Simulations for a system identification application show that the
proposed scheme and algorithms outperform existing sparsity-aware algorithms in
convergence and tracking.
Comment: 10 pages, 3 figures. IEEE Signal Processing Letters, 201
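The abstract describes the second stage only at a high level. As an illustrative, hypothetical sketch of that shrinkage idea (a single-stage stand-in, not the letter's two-stage alternating scheme), an LMS update can be followed by a soft-threshold that forces small-magnitude coefficients to exactly zero:

```python
import numpy as np

def shrinkage_lms(x_seq, d_seq, dim, mu=0.01, rho=1e-4):
    """LMS with a soft-threshold shrinkage applied after each update,
    driving small-magnitude coefficients to zero (illustrative sketch)."""
    w = np.zeros(dim)
    for x, d in zip(x_seq, d_seq):
        e = d - x @ w                                     # a priori error
        w = w + mu * e * x                                # standard LMS step
        w = np.sign(w) * np.maximum(np.abs(w) - rho, 0.0) # shrinkage
    return w

# Toy sparse channel: 3 active taps out of 20
rng = np.random.default_rng(1)
w_true = np.zeros(20)
w_true[[0, 5, 12]] = [0.9, -0.7, 0.4]
X = rng.standard_normal((4000, 20))
d = X @ w_true + 0.01 * rng.standard_normal(4000)
w_hat = shrinkage_lms(X, d, 20)
```

The shrinkage threshold rho trades a small bias on the active taps against exact zeros on the inactive ones; the letter's diagonally-structured alternating stage is an additional mechanism for speeding up convergence.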
A Sparsity-Aware Adaptive Algorithm for Distributed Learning
In this paper, a sparsity-aware adaptive algorithm for distributed learning
in diffusion networks is developed. The algorithm follows the set-theoretic
estimation rationale. At each time instance and at each node of the network, a
closed convex set, known as property set, is constructed based on the received
measurements; this defines the region in which the solution is searched for. In
this paper, the property sets take the form of hyperslabs. The goal is to find
a point that belongs to the intersection of these hyperslabs. To this end,
sparsity encouraging variable metric projections onto the hyperslabs have been
adopted. Moreover, sparsity is also imposed by employing variable metric
projections onto weighted $\ell_1$ balls. A combine-adapt cooperation strategy
is adopted. Under some mild assumptions, the scheme enjoys monotonicity,
asymptotic optimality and strong convergence to a point that lies in the
consensus subspace. Finally, numerical examples verify the validity of the
proposed scheme in comparison with other algorithms developed in the context of
sparse adaptive learning.
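The hyperslab projection at the core of this set-theoretic approach has a simple closed form. Below is a sketch using the plain Euclidean metric (the paper employs sparsity-encouraging variable-metric projections; names and parameter values here are illustrative):

```python
import numpy as np

def project_hyperslab(w, x, d, eps):
    """Euclidean projection of w onto the hyperslab
    S = { v : |d - x @ v| <= eps } defined by one measurement pair (x, d)."""
    e = d - x @ w
    if abs(e) <= eps:
        return w                                   # w already lies in the slab
    return w + ((e - np.sign(e) * eps) / (x @ x)) * x

# Cycling projections over a stream of measurements (POCS-style)
rng = np.random.default_rng(2)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(3000):
    x = rng.standard_normal(8)
    d = x @ w_true + 0.001 * rng.standard_normal()
    w = project_hyperslab(w, x, d, eps=0.01)
```

Each projection lands exactly on the slab boundary when the current estimate is outside it, which is what gives the scheme its monotonicity (Fejér) property whenever the sought point lies in the intersection of the slabs.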
Diffusion Adaptation Strategies for Distributed Estimation over Gaussian Markov Random Fields
The aim of this paper is to propose diffusion strategies for distributed
estimation over adaptive networks, assuming the presence of spatially
correlated measurements distributed according to a Gaussian Markov random field
(GMRF) model. The proposed methods incorporate prior information about the
statistical dependency among observations, while at the same time processing
data in real-time and in a fully decentralized manner. A detailed mean-square
analysis is carried out in order to prove stability and evaluate the
steady-state performance of the proposed strategies. Finally, we also
illustrate how the proposed techniques can be easily extended in order to
incorporate thresholding operators for sparsity recovery applications.
Numerical results show the potential advantages of using such techniques for
distributed learning in adaptive networks deployed over GMRFs.
Comment: Submitted to IEEE Transactions on Signal Processing. arXiv admin note: text overlap with arXiv:1206.309
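The GMRF-aware combiners proposed in the paper are not reproduced here; for context, the baseline adapt-then-combine (ATC) diffusion LMS they build on can be sketched as follows (network topology, weights, and parameter values are illustrative assumptions):

```python
import numpy as np

def atc_diffusion_lms(A, stream, dim, mu=0.05):
    """Adapt-then-combine diffusion LMS. A[l, k] is the weight node k
    assigns to neighbor l's intermediate estimate; columns of A sum to one."""
    W = np.zeros((A.shape[0], dim))
    for obs in stream:                     # obs[k] = (x_k, d_k) at one instant
        Psi = np.empty_like(W)
        for k, (x, d) in enumerate(obs):
            Psi[k] = W[k] + mu * (d - x @ W[k]) * x   # local adapt step
        W = A.T @ Psi                      # combine neighbors' intermediates
    return W

# Ring of 4 nodes, each averaging itself and its two neighbors
A = np.zeros((4, 4))
for k in range(4):
    for l in (k - 1, k, k + 1):
        A[l % 4, k] = 1.0 / 3.0
rng = np.random.default_rng(3)
w_true = np.array([0.5, -1.0, 0.25, 0.8, -0.3])
stream = []
for _ in range(2000):
    obs = []
    for _k in range(4):
        x = rng.standard_normal(5)
        obs.append((x, x @ w_true + 0.01 * rng.standard_normal()))
    stream.append(obs)
W = atc_diffusion_lms(A, stream, 5)
```

The paper's strategies modify such diffusion updates to exploit the known GMRF dependency among the spatially correlated measurements, and optionally add thresholding operators for sparsity recovery.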