Phase Retrieval From Binary Measurements
We consider the problem of signal reconstruction from quadratic measurements
that are encoded as +1 or -1 depending on whether they exceed a predetermined
positive threshold or not. Binary measurements are fast to acquire and
inexpensive in terms of hardware. We formulate the problem of signal
reconstruction using a consistency criterion, wherein one seeks to find a
signal that is in agreement with the measurements. To enforce consistency, we
construct a convex cost using a one-sided quadratic penalty and minimize it
using an iterative accelerated projected gradient-descent (APGD) technique. The
PGD scheme reduces the cost function in each iteration; incorporating momentum
into PGD forfeits this descent property but empirically exhibits faster
convergence than plain PGD. We refer to the resulting
algorithm as binary phase retrieval (BPR). Considering additive white noise
contamination prior to quantization, we also derive the Cramér-Rao Bound (CRB)
for the binary encoding model. Experimental results demonstrate that the BPR
algorithm yields a signal-to-reconstruction error ratio (SRER) of
approximately 25 dB in the absence of noise. In the presence of noise prior to
quantization, the SRER is within 2 to 3 dB of the CRB.
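The consistency formulation above can be sketched compactly: binary measurements record whether each quadratic measurement exceeds the threshold, a one-sided quadratic penalty is nonzero only on violated measurements, and a momentum-accelerated projected gradient loop minimizes it. The problem sizes, threshold, step size, and the norm-ball projection below are illustrative assumptions of this sketch, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 32, 256, 1.0              # signal length, measurements, threshold (assumed)
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.where((A @ x_true) ** 2 > tau, 1.0, -1.0)  # +1/-1 binary encoding

def cost_grad(x):
    """One-sided quadratic consistency penalty and its gradient."""
    r = (A @ x) ** 2 - tau
    viol = np.maximum(0.0, -b * r)            # nonzero only on inconsistent measurements
    cost = 0.5 * np.sum(viol ** 2)
    grad = A.T @ ((-b) * viol * 2.0 * (A @ x))  # chain rule through (a_i^T x)^2
    return cost, grad

# APGD with Nesterov-style momentum; the norm-ball projection stands in
# for the feasible set (an assumption of this sketch)
radius = np.linalg.norm(x_true)
x = x_prev = rng.standard_normal(n) * 0.1
for k in range(2000):
    y = x + (k / (k + 3.0)) * (x - x_prev)    # momentum extrapolation
    _, g = cost_grad(y)
    z = y - 1e-5 * g                          # gradient step (step size assumed)
    z *= min(1.0, radius / np.linalg.norm(z)) # project onto the norm ball
    x_prev, x = x, z
```

The one-sided penalty max(0, ·)² is continuously differentiable, which is what makes the plain gradient step well defined at the consistency boundary.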
Feedback Acquisition and Reconstruction of Spectrum-Sparse Signals by Predictive Level Comparisons
In this letter, we propose a sparsity promoting feedback acquisition and
reconstruction scheme for sensing, encoding and subsequent reconstruction of
spectrally sparse signals. In the proposed scheme, the spectral components are
estimated utilizing a sparsity-promoting, sliding-window algorithm in a
feedback loop. Utilizing the estimated spectral components, a level signal is
predicted and sign measurements of the prediction error are acquired. The
sparsity promoting algorithm can then estimate the spectral components
iteratively from the sign measurements. Unlike many batch-based Compressive
Sensing (CS) algorithms, our proposed algorithm gradually estimates and follows
slow changes in the sparse components utilizing a sliding-window technique. We
also consider the scenario in which possible flipping errors in the sign bits
propagate along iterations (due to the feedback loop) during reconstruction. We
propose an iterative error correction algorithm to cope with this error
propagation phenomenon considering a binary-sparse occurrence model on the
error sequence. Simulation results show that the proposed scheme performs
effectively in comparison with existing methods in the literature.
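As a rough illustration of the feedback idea, the toy sketch below acquires only the sign of the prediction error at each sample, and every W samples refits the running reconstruction with its K strongest DFT components to promote spectral sparsity before re-seeding the predictor. The test signal, window length, 1-bit step mu, and the crude DFT refit are assumptions of this sketch, not the letter's algorithm.

```python
import numpy as np

N, K, mu, W = 512, 3, 0.12, 64   # length, sparsity, 1-bit step, window (all assumed)
t = np.arange(N)
# spectrally sparse test signal: two tones
x = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.cos(2 * np.pi * 11 * t / N)

rec = np.zeros(N)   # decoder-side reconstruction
level = 0.0         # predicted level shared by encoder and decoder
for i in range(N):
    bit = 1.0 if x[i] > level else -1.0   # sign of the prediction error (1 bit)
    level += mu * bit                      # both sides apply the same correction
    rec[i] = level
    if (i + 1) % W == 0:
        # crude sliding-window sparsity promotion: refit the reconstruction
        # with its K strongest DFT components and re-seed the predictor
        X = np.fft.rfft(rec[: i + 1])
        keep = np.argsort(np.abs(X))[-K:]
        Xs = np.zeros_like(X)
        Xs[keep] = X[keep]
        level = np.fft.irfft(Xs, i + 1)[i]
```

The point of the feedback loop is visible here: because the predictor adapts, a single sign bit per sample suffices to track the slowly varying sparse signal.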
Sharp Time-Data Tradeoffs for Linear Inverse Problems
In this paper we characterize sharp time-data tradeoffs for optimization
problems used for solving linear inverse problems. We focus on the minimization
of a least-squares objective subject to a constraint defined as the sub-level
set of a penalty function. We present a unified convergence analysis of the
gradient projection algorithm applied to such problems. We sharply characterize
the convergence rate associated with a wide variety of random measurement
ensembles in terms of the number of measurements and structural complexity of
the signal with respect to the chosen penalty function. The results apply to
both convex and nonconvex constraints, demonstrating that a linear convergence
rate is attainable even though the least squares objective is not strongly
convex in these settings. When specialized to Gaussian measurements our results
show that such linear convergence occurs when the number of measurements is
merely 4 times the minimal number required to recover the desired signal at all
(a.k.a. the phase transition). We also achieve a slower but geometric rate of
convergence precisely above the phase transition point. Extensive numerical
results suggest that the derived rates exactly match the empirical performance.
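The setting analyzed above, a least-squares objective constrained to the sub-level set of a penalty, can be illustrated with an l1-ball constraint and Gaussian measurements; projected gradient descent then exhibits the fast empirical convergence the analysis describes. The dimensions, the oracle constraint radius R, and the 1/L step size below are illustrative assumptions of this sketch.

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection onto {x : ||x||_1 <= R} via the standard sort-based method."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - R) / idx > 0)[0][-1]
    theta = (css[rho] - R) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
n, m, k = 256, 160, 10                 # dimensions chosen for illustration (assumed)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))        # Gaussian measurement ensemble
b = A @ x_true
R = np.abs(x_true).sum()               # oracle constraint radius (assumed known)

step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
x = np.zeros(n)
errs = []
for _ in range(1500):
    x = project_l1_ball(x - step * (A.T @ (A @ x - b)), R)
    errs.append(np.linalg.norm(x - x_true))
```

Plotting `errs` on a log scale shows the geometric decay that the convergence analysis predicts once the number of measurements is sufficiently above the phase transition.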
Structured Sparsity: Discrete and Convex Approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery performance. In
order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
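As a minimal illustration of the discrete/convex contrast discussed in this chapter, the sketch below pairs the discrete group-sparse projection (keep the g highest-energy groups) with its convex counterpart, block soft-thresholding, which is the prox of the group-lasso norm for non-overlapping groups. The group structure and parameters are assumed for the example.

```python
import numpy as np

def project_group_sparse(x, groups, g):
    """Discrete model: keep the g groups with the largest energy, zero the rest."""
    energies = [np.sum(x[idx] ** 2) for idx in groups]
    keep = np.argsort(energies)[-g:]
    out = np.zeros_like(x)
    for j in keep:
        out[groups[j]] = x[groups[j]]
    return out

def prox_group_lasso(x, groups, lam):
    """Convex relaxation: block soft-thresholding, the prox of sum_j ||x_{G_j}||_2."""
    out = x.copy()
    for idx in groups:
        nrm = np.linalg.norm(x[idx])
        out[idx] = 0.0 if nrm <= lam else (1.0 - lam / nrm) * x[idx]
    return out

# toy example: three non-overlapping groups of two coefficients each
x_demo = np.array([3.0, 0.0, 0.1, 0.1, 2.0, 2.0])
grps = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
hard = project_group_sparse(x_demo, grps, 2)  # zeros only the weak middle group
soft = prox_group_lasso(x_demo, grps, 1.0)    # also shrinks the surviving groups
```

The contrast is exactly the one analyzed in the chapter: the discrete projection keeps surviving groups untouched, while the convex prox additionally shrinks them, trading bias for tractability.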
Proof of Convergence and Performance Analysis for Sparse Recovery via Zero-point Attracting Projection
A recursive algorithm named Zero-point Attracting Projection (ZAP) is
proposed recently for sparse signal reconstruction. Compared with the reference
algorithms, ZAP demonstrates rather good performance in recovery precision and
robustness. However, no theoretical analysis of the algorithm, not even a proof
of its convergence, has been available. In this work, a rigorous proof of the
convergence of ZAP is provided and a condition for convergence is put
forward. Based on the theoretical analysis, it is further proved that ZAP is
unbiased and can approach the sparse solution arbitrarily closely with a proper
choice of step size. Furthermore, the case of inaccurate measurements in the
noisy scenario is also discussed. It is proved that the disturbance power
linearly degrades the recovery precision, which is predictable but not
preventable. The reconstruction deviation of a compressible signal is also
provided. Finally, numerical simulations are performed to verify the
theoretical analysis.
Comment: 29 pages, 6 figures