Harnessing the Power of Sample Abundance: Theoretical Guarantees and Algorithms for Accelerated One-Bit Sensing
One-bit quantization with time-varying sampling thresholds (also known as
random dithering) has recently found significant utilization potential in
statistical signal processing applications due to its relatively low power
consumption and low implementation cost. In addition to such advantages, an
attractive feature of one-bit analog-to-digital converters (ADCs) is their
superior sampling rates as compared to their conventional multi-bit
counterparts. This characteristic endows one-bit signal processing frameworks
with what one may refer to as sample abundance. We show that sample abundance
plays a pivotal role in many signal recovery and optimization problems that are
formulated as (possibly non-convex) quadratic programs with linear feasibility
constraints. Of particular interest to our work are low-rank matrix recovery
and compressed sensing applications that take advantage of one-bit
quantization. We demonstrate that the sample abundance paradigm allows for the
transformation of such problems to merely linear feasibility problems by
forming large-scale overdetermined linear systems -- thus removing the need for
handling costly optimization constraints and objectives. To make the proposed
computational cost savings achievable, we offer enhanced randomized Kaczmarz
algorithms to solve these highly overdetermined feasibility problems and
provide theoretical guarantees in terms of their convergence, sample size
requirements, and overall performance. Several numerical results are presented
to illustrate the effectiveness of the proposed methodologies.
Comment: arXiv admin note: text overlap with arXiv:2301.0346
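To make the row-action idea concrete, here is a minimal sketch (in Python/NumPy) of a randomized Kaczmarz iteration for the halfspace feasibility problem induced by one-bit measurements with random dithering. All names, parameters, and the toy setup are illustrative assumptions, not the authors' exact algorithm or its enhanced variants:

```python
import numpy as np

def randomized_kaczmarz_feasibility(A, tau, y, n_iter=20000, seed=0):
    """Sketch of a randomized Kaczmarz solver for the feasibility problem
    induced by one-bit measurements y = sign(A @ x - tau):
    find x with y_i * (a_i^T x - tau_i) >= 0 for all i.
    Each step samples a row at random and, if its halfspace constraint is
    violated, projects the iterate onto the bounding hyperplane."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        i = rng.integers(m)
        residual = y[i] * (A[i] @ x - tau[i])
        if residual < 0:  # constraint violated: project onto a_i^T x = tau_i
            x -= residual * y[i] * A[i] / (A[i] @ A[i])
    return x

# Toy check: sample abundance (m >> n) with random dithering thresholds.
rng = np.random.default_rng(1)
x_true = rng.standard_normal(20)
A = rng.standard_normal((2000, 20))
tau = rng.standard_normal(2000)
y = np.sign(A @ x_true - tau)
x_hat = randomized_kaczmarz_feasibility(A, tau, y)
```

Note how the overdetermined system is never solved as an optimization problem: the iterate only ever touches one violated constraint at a time, which is what makes the sample-abundance regime cheap to exploit.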
An Asynchronous Parallel Approach to Sparse Recovery
Asynchronous parallel computing and sparse recovery are two areas that have
received recent interest. Asynchronous algorithms are often studied to solve
optimization problems where the cost function takes the form
f(x) = sum_i f_i(x), with a common assumption that each f_i is sparse; that is,
each f_i acts only on a small number of components of x. Sparse
recovery problems, such as compressed sensing, can be formulated as
optimization problems; however, the cost functions f_i are dense with respect
to the components of x, and instead the signal x is assumed to be sparse,
meaning that it has only s non-zeros where s << n. Here we address how one
may use an asynchronous parallel architecture when the cost functions f_i are
not sparse in x, but rather the signal x is sparse. We propose an
asynchronous parallel approach to sparse recovery via a stochastic greedy
algorithm, where multiple processors asynchronously update a vector in shared
memory containing information on the estimated signal support. We include
numerical simulations that illustrate the potential benefits of our proposed
asynchronous method.
Comment: 5 pages, 2 figures
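The shared-memory voting idea can be sketched as follows. This is a simplified serial simulation of what a pool of asynchronous workers would produce; the batch size, the correlation proxy, and all names are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def async_style_support_recovery(A, y, s, batch=40, n_updates=300, seed=0):
    """Sketch of a stochastic greedy scheme in which independent updates,
    interleaved in arbitrary order as asynchronous workers would produce
    them, vote on the support of an s-sparse signal via a shared score
    vector, followed by least squares on the winning support."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    scores = np.zeros(n)  # shared memory: accumulated support votes
    for _ in range(n_updates):
        rows = rng.choice(m, size=batch, replace=False)  # worker's batch
        proxy = np.abs(A[rows].T @ y[rows])  # batch correlation proxy
        scores[np.argsort(proxy)[-s:]] += 1  # vote for the batch's top-s
    support = np.sort(np.argsort(scores)[-s:])
    x_hat = np.zeros(n)
    x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x_hat, support

# Toy check: votes from many small batches identify a 3-sparse support.
rng = np.random.default_rng(2)
x_true = np.zeros(40)
x_true[[5, 17, 33]] = [10.0, -10.0, 10.0]
A = rng.standard_normal((300, 40))
x_hat, support = async_style_support_recovery(A, A @ x_true, s=3)
```

Because each worker only increments entries of the shared score vector, stale reads cost at most a few misdirected votes rather than a corrupted iterate, which is the appeal of keeping support information (not the signal itself) in shared memory.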
Topics in Compressed Sensing
Compressed sensing has a wide range of applications that include error correction, imaging, radar, and many more. Given a sparse signal in a high dimensional space, one wishes to reconstruct that signal accurately and efficiently from a number of linear measurements much less than its actual dimension. Although in theory it is clear that this is possible, the difficulty lies in the construction of algorithms that perform the recovery efficiently, as well as determining which kind of linear measurements allow for the reconstruction. There have been two distinct major approaches to sparse recovery that each present different benefits and shortcomings. The first, L1-minimization methods such as Basis Pursuit, use a linear optimization problem to recover the signal. This method provides strong guarantees and stability, but relies on Linear Programming, whose methods do not yet have strong polynomially bounded runtimes. The second approach uses greedy methods that compute the support of the signal iteratively. These methods are usually much faster than Basis Pursuit, but until recently had not been able to provide the same guarantees. This gap between the two approaches was bridged when we developed and analyzed the greedy algorithm Regularized Orthogonal Matching Pursuit (ROMP). ROMP provides similar guarantees to Basis Pursuit as well as the speed of a greedy algorithm. Our more recent algorithm Compressive Sampling Matching Pursuit (CoSaMP) improves upon these guarantees, and is optimal in every important aspect.
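For readers unfamiliar with the greedy template that ROMP and CoSaMP refine, here is a minimal sketch of plain Orthogonal Matching Pursuit (the simpler relative of both, not the algorithms analyzed above); names and the toy instance are our own:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal Matching Pursuit: at each step, pick the column most
    correlated with the current residual, then re-fit by least squares on
    the support chosen so far. ROMP and CoSaMP refine this template by
    selecting several coordinates per step with pruning rules."""
    m, n = A.shape
    support = []
    residual = y.copy()
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef  # orthogonal to chosen columns
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Toy check: exact recovery of a 3-sparse signal from 100 Gaussian
# measurements in dimension 30.
rng = np.random.default_rng(3)
x_true = np.zeros(30)
x_true[[4, 11, 22]] = [1.5, -2.0, 1.0]
A = rng.standard_normal((100, 30))
x_hat = omp(A, A @ x_true, s=3)
```

The re-fitting step is what makes the method "orthogonal": after each least-squares solve, the residual is orthogonal to all chosen columns, so no column is selected twice.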
Fast stochastic dual coordinate descent algorithms for linearly constrained convex optimization
The problem of finding a solution to a linear system with certain
minimization properties arises in numerous scientific and engineering areas. In
the era of big data, stochastic optimization algorithms have become increasingly
significant due to their scalability to problems of unprecedented size. This
paper focuses on the problem of minimizing a strongly convex function subject
to linear constraints. We consider the dual formulation of this problem and
adopt stochastic coordinate descent to solve it. The proposed algorithmic
framework, called fast stochastic dual coordinate descent, utilizes sampling
matrices drawn from user-defined distributions to extract gradient
information. Moreover, it employs Polyak's heavy ball momentum acceleration
with adaptive parameters learned through the iterations, overcoming a key
limitation of the heavy ball momentum method: its requirement of prior
knowledge of certain parameters, such as the singular values of a matrix. With
these extensions, the
framework is able to recover many well-known methods in this context, including
the randomized sparse Kaczmarz method, the randomized regularized Kaczmarz
method, the linearized Bregman iteration, and a variant of the conjugate
gradient (CG) method. We prove that, for a strongly admissible objective
function, the proposed method converges linearly in expectation. Numerical
experiments are provided to confirm our results.
Comment: arXiv admin note: text overlap with arXiv:2305.0548
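One concrete member of the recovered family is the plain randomized Kaczmarz method, to which Polyak heavy ball momentum can be attached as sketched below. This sketch uses a hand-picked fixed momentum parameter beta, precisely the prior knowledge the paper's adaptive scheme is designed to avoid; all names and the toy system are our own:

```python
import numpy as np

def kaczmarz_heavy_ball(A, b, beta=0.4, n_iter=20000, seed=0):
    """Randomized Kaczmarz with Polyak heavy ball momentum for a
    consistent linear system A @ x = b: each step projects onto a randomly
    sampled row's hyperplane, then adds beta times the previous
    displacement as a momentum term."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    x_prev = x.copy()
    for _ in range(n_iter):
        i = rng.integers(m)
        step = (A[i] @ x - b[i]) / (A[i] @ A[i]) * A[i]  # row projection
        x, x_prev = x - step + beta * (x - x_prev), x    # momentum update
    return x

# Toy check: a consistent overdetermined system with a unique solution.
rng = np.random.default_rng(4)
x_true = rng.standard_normal(20)
A = rng.standard_normal((100, 20))
x_hat = kaczmarz_heavy_ball(A, A @ x_true)
```

Replacing the fixed beta with parameters learned through the iterations, and the single-row sampling with general user-defined sampling matrices, is the direction in which the paper's framework generalizes this basic iteration.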