Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons
In this letter, we propose a sparsity promoting feedback acquisition and
reconstruction scheme for sensing, encoding and subsequent reconstruction of
spectrally sparse signals. In the proposed scheme, the spectral components are
estimated utilizing a sparsity-promoting, sliding-window algorithm in a
feedback loop. Utilizing the estimated spectral components, a level signal is
predicted and sign measurements of the prediction error are acquired. The
sparsity promoting algorithm can then estimate the spectral components
iteratively from the sign measurements. Unlike many batch-based Compressive
Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows
slow changes in the sparse components utilizing a sliding-window technique. We
also consider the scenario in which possible flipping errors in the sign bits
propagate along iterations (due to the feedback loop) during reconstruction. We
propose an iterative error correction algorithm to cope with this error
propagation phenomenon considering a binary-sparse occurrence model on the
error sequence. Simulation results show the effective performance of the
proposed scheme in comparison with existing methods from the literature.
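The core feedback idea, predicting a level from past reconstructions and transmitting only the sign of the prediction error, can be illustrated with a much simpler relative of the proposed scheme: classic delta modulation. The sketch below is an illustration of sign-based feedback acquisition only; it does not include the paper's sparsity-promoting sliding-window estimator or the error-correction stage, and the signal, step size, and function names are chosen for the example.

```python
import math

def delta_modulate(signal, step=0.08):
    """Encode a signal as one sign bit per sample via a feedback predictor.

    At each sample the encoder predicts the current level from its own
    running reconstruction, measures only sign(sample - prediction), and
    nudges the prediction by +/- step. The decoder mirrors the recursion.
    """
    bits, pred = [], 0.0
    for x in signal:
        b = 1 if x >= pred else -1   # one-bit sign measurement
        bits.append(b)
        pred += b * step             # feedback update of the predicted level
    return bits

def delta_demodulate(bits, step=0.08):
    """Rebuild the signal by integrating the sign bits with the same step."""
    out, pred = [], 0.0
    for b in bits:
        pred += b * step
        out.append(pred)
    return out

# a slowly varying test tone: per-sample slope stays below the step size,
# so the one-bit feedback loop can track it without slope overload
signal = [math.sin(2 * math.pi * 2 * t / 200) for t in range(200)]
reconstruction = delta_demodulate(delta_modulate(signal))
max_err = max(abs(x - y) for x, y in zip(signal, reconstruction))
```

As in the letter's setting, the decoder needs only the sign bits to track the signal; the price is granular noise on the order of the step size, which the paper's sparsity-promoting reconstruction is designed to beat.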
A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori or their
direct computation is too intensive, and they have thus to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm associated with a memory gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
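The MM principle the abstract builds on can be shown in a minimal form: each iteration replaces the penalized least-squares cost by a quadratic surrogate that touches it at the current iterate, and minimizes the surrogate in closed form. The scalar sketch below is a generic batch MM (reweighted least squares) illustration under assumed data `y`, `a` and a smoothed-absolute-value penalty; it is not the paper's stochastic subspace algorithm, whose online surrogates and memory-gradient subspace are more involved.

```python
import math

def mm_penalized_ls(y, a, lam=0.5, delta=1e-8, iters=50):
    """Scalar MM iteration for  0.5*(y - a*x)**2 + lam*sqrt(x**2 + delta).

    The smooth-absolute-value penalty is majorized at the current iterate
    by a quadratic with curvature lam / sqrt(x**2 + delta), so each MM
    step reduces to a closed-form weighted least-squares solve and the
    cost is guaranteed not to increase.
    """
    x = 0.0
    for _ in range(iters):
        w = lam / math.sqrt(x * x + delta)  # curvature of the quadratic majorizer
        x = a * y / (a * a + w)             # exact minimizer of the surrogate
    return x

# converges to the soft-thresholded solution x = y - lam (here 1.5)
x_hat = mm_penalized_ls(y=2.0, a=1.0, lam=0.5)
```

The paper's contribution is to keep this surrogate machinery while the exact cost is unavailable and only a sequence of stochastic approximations of it can be formed from streaming data.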
Multitask Diffusion Adaptation over Networks
Adaptive networks are suitable for decentralized inference tasks, e.g., to
monitor complex natural phenomena. Recent research works have intensively
studied distributed optimization problems in the case where the nodes have to
estimate a single optimum parameter vector collaboratively. However, there are
many important applications that are multitask-oriented in the sense that there
are multiple optimum parameter vectors to be inferred simultaneously, in a
collaborative manner, over the area covered by the network. In this paper, we
employ diffusion strategies to develop distributed algorithms that address
multitask problems by minimizing an appropriate regularized mean-square error
criterion. The stability and convergence of the algorithm in the mean and in
the mean-square sense are analyzed. Simulations are conducted to
verify the theoretical findings, and to illustrate how the distributed strategy
can be used in several useful applications related to spectral sensing, target
localization, and hyperspectral data unmixing.
Comment: 29 pages, 11 figures, submitted for publication
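The multitask flavor of diffusion adaptation can be sketched with scalar parameters: each node runs LMS on its own data stream toward its own optimum, while a regularization term pulls its estimate toward those of its neighbors. The node topology, step sizes, and noise level below are illustrative assumptions, and the scalar update is a simplified stand-in for the paper's vector diffusion strategies and their mean-square analysis.

```python
import random

def multitask_diffusion_lms(true_w, links, mu=0.1, eta=0.02, iters=500, seed=0):
    """Scalar multitask LMS over a network.

    Node k adapts its own estimate w[k] from noisy local observations of
    its own optimum true_w[k], while a cooperation term (weight eta)
    pulls w[k] toward the estimates of its neighbors links[k].
    """
    rng = random.Random(seed)
    w = [0.0] * len(true_w)
    for _ in range(iters):
        new_w = []
        for k, wk in enumerate(w):
            u = rng.gauss(0.0, 1.0)                   # local regressor sample
            d = true_w[k] * u + rng.gauss(0.0, 0.05)  # noisy local observation
            grad = u * (d - u * wk)                   # LMS correction term
            coop = sum(w[l] - wk for l in links[k])   # pull toward neighbors
            new_w.append(wk + mu * grad + eta * coop)
        w = new_w
    return w

# three nodes in a ring, each with a slightly different optimum parameter
estimates = multitask_diffusion_lms([1.0, 1.2, 0.8],
                                    links=[[1, 2], [0, 2], [0, 1]])
```

Each node still converges near its own optimum; the cooperation weight `eta` controls how strongly the estimates are shrunk toward one another, which is the multitask trade-off the paper's regularized criterion formalizes.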