Unveiling Bias Compensation in Turbo-Based Algorithms for (Discrete) Compressed Sensing
In Compressed Sensing, a real-valued sparse vector has to be recovered from
an underdetermined system of linear equations. In many applications, however,
the elements of the sparse vector are drawn from a finite set. Adapted
algorithms incorporating this additional knowledge are required for the
discrete-valued setup. In this paper, turbo-based algorithms for both cases are
elucidated and analyzed from a communications engineering perspective, leading
to a deeper understanding of the algorithms. In particular, we gain the
intriguing insight that the calculation of extrinsic values is equivalent to
the unbiasing of a biased estimate, and we present an improved algorithm.
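The unbiasing insight can be made concrete with a small linear-MMSE sketch (the dimensions, variances, and ±1 alphabet below are illustrative assumptions, not the paper's setup): the conditional mean of the LMMSE estimate W y is (W A) x, so each component is attenuated by the corresponding diagonal entry of W A, and dividing by that gain yields the unbiased, "extrinsic" estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m noisy measurements of an n-dimensional sparse vector.
n, m = 64, 32
sigma2_x, sigma2_n = 1.0, 0.1

A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
x = np.zeros(n)
support = rng.choice(n, size=6, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=6)   # discrete-valued sparse signal
y = A @ x + np.sqrt(sigma2_n) * rng.standard_normal(m)

# Linear MMSE estimator: W = sigma2_x A^T (sigma2_x A A^T + sigma2_n I)^{-1}.
W = sigma2_x * A.T @ np.linalg.inv(sigma2_x * A @ A.T + sigma2_n * np.eye(m))

# The conditional mean of the estimate W y is (W A) x: each component is
# attenuated by the diagonal of B = W A (a gain strictly between 0 and 1),
# plus interference from the off-diagonal entries of B.
B = W @ A
gain = np.diag(B)

x_biased = W @ y
x_unbiased = x_biased / gain   # dividing by the gain removes the bias
```

The per-component gains are strictly below one for an underdetermined system, which is why the raw LMMSE output systematically shrinks the signal toward zero until it is unbiased.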
Bias Compensation in Iterative Soft-Feedback Algorithms with Application to (Discrete) Compressed Sensing
In many applications in digital communications, it is crucial for an estimator
to be unbiased. Although so-called soft feedback is widely employed in many
different fields of engineering, typically the biased estimate is used. In this
paper, we contrast the fundamental unbiasing principles, which can be directly
applied whenever soft feedback is required. To this end, the problem is treated
from a signal-based perspective, as well as from the approach of estimating the
signal based on an estimate of the noise. Numerical results show that when
employed in iterative reconstruction algorithms for Compressed Sensing, a gain
of 1.2 dB due to proper unbiasing is possible.
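The two perspectives can be sketched for a scalar AWGN model y = x + n (a minimal illustration under assumed variances, not the paper's algorithm): estimating the signal directly with an LMMSE shrinkage factor, or estimating the noise first and subtracting it from the observation, yields the same biased estimate, because the two shrinkage factors sum to one.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma2_x, sigma2_n = 1.0, 0.25   # assumed signal and noise variances
x = np.sqrt(sigma2_x) * rng.standard_normal(10_000)
y = x + np.sqrt(sigma2_n) * rng.standard_normal(10_000)

# Route 1: estimate the signal directly (biased LMMSE shrinkage).
c_x = sigma2_x / (sigma2_x + sigma2_n)
x_hat_signal = c_x * y

# Route 2: estimate the noise first, then subtract it from the observation.
c_n = sigma2_n / (sigma2_x + sigma2_n)
n_hat = c_n * y
x_hat_noise = y - n_hat

# Both routes give the same biased estimate, since c_x + c_n = 1;
# unbiasing (dividing by c_x) is what restores an unbiased conditional mean.
print(np.allclose(x_hat_signal, x_hat_noise))  # True
```

The equivalence of the two biased routes is exactly why the choice of unbiasing principle, rather than the estimation route, is the point of comparison.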
Low-Complexity Iterative Algorithms for (Discrete) Compressed Sensing
We consider iterative ("turbo") algorithms for Compressed Sensing. First, a
unified exposition of the different approaches available in the literature is
given, thereby illuminating the general principles and main differences. In
particular we discuss i) the estimation step (matched filter vs. optimum MMSE
estimator), ii) the unbiasing operation (implicitly or explicitly done and
equivalent to the calculation of extrinsic information), and iii) thresholding
vs. the calculation of soft values. Based on these insights we propose a
low-complexity but well-performing variant utilizing a Krylov space
approximation of the optimum linear MMSE estimator. The derivations are valid
for any probability density of the signal vector. However, numerical results
are shown for the discrete case. The novel algorithm shows very good
performance and even slightly faster convergence compared to approximate
message passing.
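A Krylov-space approximation of the LMMSE estimator can be sketched with a plain conjugate-gradient solver (dimensions, variances, and iteration count below are illustrative assumptions; this is not the paper's exact variant): rather than inverting the m-by-m system matrix, a few CG iterations approximate its solution in a Krylov subspace.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: discrete sparse signal, underdetermined measurements.
n, m = 128, 64
sigma2_x, sigma2_n = 1.0, 0.01

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
idx = rng.choice(n, size=10, replace=False)
x[idx] = rng.choice([-1.0, 1.0], size=10)
y = A @ x + np.sqrt(sigma2_n) * rng.standard_normal(m)

# LMMSE estimate: x_hat = sigma2_x A^T z, where z solves M z = y with
# M = sigma2_x A A^T + sigma2_n I.  Conjugate gradients approximate z in
# the Krylov subspace span{y, M y, M^2 y, ...} without forming M^{-1}.
M = sigma2_x * A @ A.T + sigma2_n * np.eye(m)

def cg_solve(M, y, iters):
    """Plain conjugate-gradient iteration for the SPD system M z = y."""
    z = np.zeros_like(y)
    r = y - M @ z
    p = r.copy()
    rho = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rho / (p @ Mp)
        z += alpha * p
        r -= alpha * Mp
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return z

z_krylov = cg_solve(M, y, iters=12)   # low-complexity Krylov approximation
z_exact = np.linalg.solve(M, y)       # explicit solve, for comparison
x_hat = sigma2_x * A.T @ z_krylov
```

Each CG iteration costs a single matrix-vector product with M, which is what makes the Krylov variant low-complexity compared to the cubic cost of the explicit solve.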