58 research outputs found
Self-Calibration and Biconvex Compressive Sensing
The design of high-precision sensing devices becomes ever more difficult and
expensive. At the same time, the need for precise calibration of these devices
(ranging from tiny sensors to space telescopes) manifests itself as a major
roadblock in many scientific and technological endeavors. To achieve optimal
performance of advanced high-performance sensors one must carefully calibrate
them, which is often difficult or even impossible to do in practice. In this
work we bring together three seemingly unrelated concepts, namely
Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea
behind self-calibration is to equip a hardware device with a smart algorithm
that can compensate automatically for the lack of calibration. We show how
several self-calibration problems can be treated efficiently within the
framework of biconvex compressive sensing via a new method called SparseLift.
More specifically, we consider a linear system of equations y = DAx, where both
x and the diagonal matrix D (which models the calibration error) are unknown.
By "lifting" this biconvex inverse problem we arrive at a convex optimization
problem. By exploiting sparsity in the signal model, we derive explicit
theoretical guarantees under which both x and D can be recovered exactly,
robustly, and numerically efficiently via linear programming. Applications in
array calibration and wireless communications are discussed and numerical
simulations are presented, confirming and complementing our theoretical
analysis.
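The lifting step can be illustrated with a small simulation (a hedged sketch under simplifying assumptions, not the authors' code; all variable names are illustrative). If the calibration vector d is assumed to lie in a known k-dimensional subspace, d = Bh, then each measurement y_i = (b_i^T h)(a_i^T x) is linear in the lifted matrix Z = h x^T, and sparsity of x makes Z column-sparse, so l1-minimization, cast as a linear program, can be used to recover it:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k, s = 40, 20, 3, 3          # measurements, signal dim, subspace dim, sparsity

A = rng.standard_normal((m, n))    # sensing matrix
B = rng.standard_normal((m, k))    # known subspace for the calibration error
h = rng.standard_normal(k)         # unknown coefficients of d = B h
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)   # sparse signal

# Forward model y = D A x with D = diag(B h); biconvex in (h, x)
y = (B @ h) * (A @ x)

# Lifting: y_i = <b_i a_i^T, Z> with Z = h x^T, which is linear in Z
Phi = np.stack([np.kron(B[i], A[i]) for i in range(m)])       # m x (k*n)

# SparseLift-style recovery: min ||vec(Z)||_1 s.t. Phi vec(Z) = y,
# written as a linear program via the split vec(Z) = u - v, u, v >= 0
c = np.ones(2 * k * n)
res = linprog(c, A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=[(0, None)] * (2 * k * n), method="highs")
Z = (res.x[:k * n] - res.x[k * n:]).reshape(k, n)

# Z should be (near) rank one; its factors estimate h and x up to scaling
U, S, Vt = np.linalg.svd(Z)
h_hat, x_hat = np.sqrt(S[0]) * U[:, 0], np.sqrt(S[0]) * Vt[0]
```

Note that (h_hat, x_hat) can match (h, x) only up to the inherent scaling ambiguity of the model: (alpha h, x / alpha) produces exactly the same measurements.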
Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing
We study the question of extracting a sequence of functions {(g_i, x_i)}, i =
1, ..., s, from observing only the sum of their convolutions, i.e., from y =
g_1 * x_1 + ... + g_s * x_s. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
Comment: Accepted to Information and Inference: a Journal of the IMA.
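The measurement model above can be sketched numerically (a simplified real-valued toy, not the paper's algorithm: the regularizer and the spectral initialization are omitted, and we simply start near the ground truth). We observe only the sum of circular convolutions and run plain gradient descent on the nonconvex least-squares loss:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 64, 2                      # signal length, number of sources

fft, ifft = np.fft.fft, np.fft.ifft

def cconv(g, x):                  # circular convolution via the FFT
    return ifft(fft(g) * fft(x)).real

G = rng.standard_normal((s, n))   # unknown filters g_i
X = rng.standard_normal((s, n))   # unknown signals x_i
y = sum(cconv(G[i], X[i]) for i in range(s))   # only the sum is observed

def loss(G_, X_):
    r = sum(cconv(G_[i], X_[i]) for i in range(s)) - y
    return 0.5 * np.sum(r ** 2)

def grads(G_, X_):
    # Gradient of the nonconvex least-squares objective; the adjoint of
    # circular convolution is circular correlation, again done via the FFT.
    r = sum(cconv(G_[i], X_[i]) for i in range(s)) - y
    R = fft(r)
    dG = np.stack([ifft(R * np.conj(fft(X_[i]))).real for i in range(s)])
    dX = np.stack([ifft(R * np.conj(fft(G_[i]))).real for i in range(s)])
    return dG, dX

# Start near the ground truth (the paper instead uses a provable
# spectral initialization) and descend jointly in (G, X)
G_hat = G + 0.1 * rng.standard_normal((s, n))
X_hat = X + 0.1 * rng.standard_normal((s, n))
loss0 = loss(G_hat, X_hat)
eta = 1e-4
for _ in range(500):
    dG, dX = grads(G_hat, X_hat)
    G_hat -= eta * dG
    X_hat -= eta * dX
loss1 = loss(G_hat, X_hat)
```

Even this stripped-down version illustrates why initialization matters: the loss is biconvex in each factor separately but nonconvex jointly, so descent is only reliable from a good starting point.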
Local Geometry Determines Global Landscape in Low-rank Factorization for Synchronization
The orthogonal group synchronization problem, which focuses on recovering
orthogonal group elements from their corrupted pairwise measurements,
encompasses examples such as the high-dimensional Kuramoto model on general
signed networks, Z_2-synchronization, community detection under stochastic
block models, and the orthogonal Procrustes problem. The semidefinite relaxation
(SDR) has proven its power in solving this problem; however, its expensive
computational costs impede its widespread practical applications. We consider
the Burer-Monteiro factorization approach to the orthogonal group
synchronization, an effective and scalable low-rank factorization to solve
large scale SDPs. Despite the significant empirical successes of this
factorization approach, it is still a challenging task to understand when the
nonconvex optimization landscape is benign, i.e., the optimization landscape
possesses only one local minimizer, which is also global. In this work, we
demonstrate that if the degree of freedom within the factorization exceeds
twice the condition number of the "Laplacian" (certificate matrix) at the
global minimizer, the optimization landscape is absent of spurious local
minima. Our main theorem is purely algebraic and versatile, and it seamlessly
applies to all the aforementioned examples: the nonconvex landscape remains
benign under an almost identical condition to the one that enables the success of the SDR.
Additionally, we illustrate that the Burer-Monteiro factorization is robust to
"monotone adversaries," mirroring the resilience of the SDR. In other words,
introducing "favorable" adversaries into the data will not result in the
emergence of new spurious local minimizers.
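The Burer-Monteiro idea can be illustrated on the simplest case listed above, Z_2-synchronization (a sketch under simplifying assumptions: noiseless measurements, illustrative dimensions). Instead of an n x n semidefinite variable Z = Q Q^T, one optimizes the thin factor Q directly, here by projected gradient ascent on tr(Q^T C Q) with unit-norm rows; in a benign landscape, rounding the result recovers the ground-truth signs up to a global flip:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 3                        # nodes, factorization rank

z = rng.choice([-1.0, 1.0], size=n)  # ground-truth Z_2 group elements
C = np.outer(z, z)                   # noiseless pairwise measurements C = z z^T

def normalize_rows(Q):
    return Q / np.linalg.norm(Q, axis=1, keepdims=True)

# Burer-Monteiro: replace the PSD variable Z = Q Q^T by the thin factor Q
# and maximize tr(Q^T C Q) over unit-norm rows by projected gradient ascent
Q = normalize_rows(rng.standard_normal((n, p)))
obj0 = np.trace(Q.T @ C @ Q)
for _ in range(500):
    Q = normalize_rows(Q + 0.1 * (C @ Q))
obj1 = np.trace(Q.T @ C @ Q)

# Rounding: at a benign global maximizer every row is +/- one common unit
# vector, so correlating against the first row reveals the signs
z_hat = np.sign(Q @ Q[0])
```

The factorization has n*p entries instead of the SDP's n^2, which is the scalability advantage the abstract refers to; the landscape result explains why this aggressive reduction does not introduce spurious local optima.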
Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization
We study the question of reconstructing two signals f and g from their
convolution y = f * g. This problem, known as blind deconvolution,
pervades many areas of science and technology, including astronomy, medical
imaging, optics, and wireless communications. A key challenge of this intricate
non-convex optimization problem is that it might exhibit many local minima. We
present an efficient numerical algorithm that is guaranteed to recover the
exact solution, when the number of measurements is (up to log-factors) slightly
larger than the information-theoretical minimum, and under reasonable
conditions on f and g. The proposed regularized gradient descent algorithm
converges at a geometric rate and is provably robust in the presence of noise.
To the best of our knowledge, our algorithm is the first blind deconvolution
algorithm that is numerically efficient, robust against noise, and comes with
rigorous recovery guarantees under certain subspace conditions. Moreover,
numerical experiments do not only provide empirical verification of our theory,
but they also demonstrate that our method yields excellent performance even in
situations beyond our theoretical framework.
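The starting point behind such guarantees can be sketched as follows (a simplified real-valued simulation; the actual setting works with complex Fourier-domain measurements, and all dimensions here are illustrative). Under subspace models f = Bh and g-side coefficients x, the bilinear measurements satisfy E[(1/m) sum_i y_i b_i a_i^T] = h x^T, so the top singular vectors of that matrix give a spectral initialization provably close to the truth:

```python
import numpy as np

rng = np.random.default_rng(3)
m, k, n = 5000, 5, 5               # measurements, subspace dimensions

B = rng.standard_normal((m, k))    # known subspace for one signal
A = rng.standard_normal((m, n))    # known subspace for the other
h = rng.standard_normal(k); h /= np.linalg.norm(h)
x = rng.standard_normal(n); x /= np.linalg.norm(x)

# Bilinear measurements y_i = (b_i^T h)(a_i^T x)
y = (B @ h) * (A @ x)

# Spectral initialization: M = (1/m) sum_i y_i b_i a_i^T concentrates
# around its expectation h x^T, so its top singular vectors
# approximate h and x
M = (B.T * y) @ A / m
U, S, Vt = np.linalg.svd(M)
h0, x0 = U[:, 0], Vt[0]

align_h = abs(h0 @ h)              # |cosine| of the angle to the truth
align_x = abs(x0 @ x)
```

Gradient descent started from such a point stays in a basin where the otherwise nonconvex objective behaves benignly, which is what makes the geometric convergence rate attainable.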