Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing
We study the question of extracting a sequence of functions $\{f_i, g_i\}_{i=1}^s$
from observing only the sum of
their convolutions, i.e., from $y = \sum_{i=1}^s f_i \ast g_i$. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
Comment: Accepted to Information and Inference: a Journal of the IMA
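The measurement model underlying this abstract can be sketched in a few lines: only the sum of the pairwise convolutions is observed, and in the Fourier domain the mixture becomes a sum of pointwise products, which is the structure recovery methods exploit. The sizes below (s = 3 pairs, length-64 signals) and the use of circular convolution are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
s, L = 3, 64                      # illustrative: 3 source pairs, length-64 signals
fs = rng.standard_normal((s, L))
gs = rng.standard_normal((s, L))

def circ_conv(f, g):
    """Circular convolution computed via the convolution theorem."""
    return np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# Only the sum of the s convolutions is observed.
y = sum(circ_conv(f, g) for f, g in zip(fs, gs))

# In the Fourier domain the mixture is a sum of pointwise products.
Y = np.fft.fft(y)
Y_check = sum(np.fft.fft(f) * np.fft.fft(g) for f, g in zip(fs, gs))
```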
Self-Calibration and Biconvex Compressive Sensing
The design of high-precision sensing devices becomes ever more difficult and
expensive. At the same time, the need for precise calibration of these devices
(ranging from tiny sensors to space telescopes) manifests itself as a major
roadblock in many scientific and technological endeavors. To achieve optimal
performance of advanced high-performance sensors one must carefully calibrate
them, which is often difficult or even impossible to do in practice. In this
work we bring together three seemingly unrelated concepts, namely
Self-Calibration, Compressive Sensing, and Biconvex Optimization. The idea
behind self-calibration is to equip a hardware device with a smart algorithm
that can compensate automatically for the lack of calibration. We show how
several self-calibration problems can be treated efficiently within the
framework of biconvex compressive sensing via a new method called SparseLift.
More specifically, we consider a linear system of equations y = DAx, where both
x and the diagonal matrix D (which models the calibration error) are unknown.
By "lifting" this biconvex inverse problem we arrive at a convex optimization
problem. By exploiting sparsity in the signal model, we derive explicit
theoretical guarantees under which both x and D can be recovered exactly,
robustly, and numerically efficiently via linear programming. Applications in
array calibration and wireless communications are discussed and numerical
simulations are presented, confirming and complementing our theoretical
analysis.
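The lifting step described above can be illustrated on a toy instance. Assuming, as in typical self-calibration models, that the unknown diagonal D = diag(Bh) lives in a known low-dimensional subspace spanned by B, each measurement y_l = (b_l^T h)(a_l^T x) is linear in the lifted rank-one matrix Z = h x^T, and the l1 minimization can be posed as a linear program. All sizes here are illustrative, and L is chosen large enough that the lifted system is determined; this is a sketch of the lifting idea, not the paper's SparseLift guarantees.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
L, k, K = 60, 2, 20               # illustrative sizes; L >= k*K here
B = rng.standard_normal((L, k))   # known subspace for the calibration error
A = rng.standard_normal((L, K))   # known sensing matrix
h = rng.standard_normal(k)
x = np.zeros(K)
x[[2, 7, 11]] = rng.standard_normal(3)   # sparse signal
y = (B @ h) * (A @ x)                    # y = diag(Bh) A x, entrywise

# Lift: y_l = b_l^T Z a_l with Z = h x^T, i.e. y_l = (a_l kron b_l)^T vec(Z).
Phi = np.stack([np.kron(A[l], B[l]) for l in range(L)])

# l1 minimization  min ||z||_1  s.t.  Phi z = y  as a linear program,
# splitting z = z_plus - z_minus with nonnegative parts.
n = k * K
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([Phi, -Phi]), b_eq=y,
              bounds=(0, None))
Z = (res.x[:n] - res.x[n:]).reshape(k, K, order="F")

# Z should be (numerically) the rank-one matrix h x^T; factor it by SVD.
U, S, Vt = np.linalg.svd(Z)
h_hat = U[:, 0] * np.sqrt(S[0])
x_hat = Vt[0] * np.sqrt(S[0])
```

Since h and x are only identifiable up to a scalar, success is checked on the outer product h_hat x_hat^T rather than on the factors themselves.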
Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization
We study the question of reconstructing two signals $f$ and $g$ from their
convolution $y = f \ast g$. This problem, known as {\em blind deconvolution},
pervades many areas of science and technology, including astronomy, medical
imaging, optics, and wireless communications. A key challenge of this intricate
non-convex optimization problem is that it might exhibit many local minima. We
present an efficient numerical algorithm that is guaranteed to recover the
exact solution, when the number of measurements is (up to log-factors) slightly
larger than the information-theoretical minimum, and under reasonable
conditions on $f$ and $g$. The proposed regularized gradient descent algorithm
converges at a geometric rate and is provably robust in the presence of noise.
To the best of our knowledge, our algorithm is the first blind deconvolution
algorithm that is numerically efficient, robust against noise, and comes with
rigorous recovery guarantees under certain subspace conditions. Moreover,
numerical experiments not only provide empirical verification of our theory,
but also demonstrate that our method yields excellent performance even in
situations beyond our theoretical framework.
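The two ingredients the abstract mentions, a good initialization and gradient descent, can be sketched on a toy real-valued instance of the bilinear model y_l = (b_l^T h)(a_l^T x). The sketch below uses plain gradient descent and omits the paper's regularization terms; all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L, k, n = 200, 5, 5               # illustrative: many measurements, small subspaces
B = rng.standard_normal((L, k))
A = rng.standard_normal((L, n))
h_true = rng.standard_normal(k)
x_true = rng.standard_normal(n)
y = (B @ h_true) * (A @ x_true)   # bilinear measurements

# Spectral initialization: (1/L) B^T diag(y) A concentrates around the
# rank-one matrix h x^T, so its top singular pair is a good starting point.
M = B.T @ (y[:, None] * A) / L
U, S, Vt = np.linalg.svd(M)
h = U[:, 0] * np.sqrt(S[0])
x = Vt[0] * np.sqrt(S[0])

step = 0.01
for _ in range(20000):
    r = (B @ h) * (A @ x) - y          # residual of the bilinear model
    grad_h = B.T @ (r * (A @ x)) / L   # gradient of (1/2L)||r||^2 in h
    grad_x = A.T @ (r * (B @ h)) / L   # gradient in x
    h -= step * grad_h
    x -= step * grad_x

# h and x are only identifiable up to a scalar; compare outer products.
rel_err = (np.linalg.norm(np.outer(h, x) - np.outer(h_true, x_true))
           / np.linalg.norm(np.outer(h_true, x_true)))
```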
Confined Multilamellae Prefer Cylindrical Morphology
By evaporating a drop of lipid dispersion we generate the myelin morphology
often seen in dissolving surfactant powders. We explain these puzzling
nonequilibrium structures using a geometric argument: The bilayer repeat
spacing increases and thus the repulsion between bilayers decreases when a
multilamellar disk is converted into a myelin without gain or loss of material
and with the number of bilayers unchanged. Sufficient reduction in bilayer
repulsion can compensate for the cost in curvature energy, leading to a net
stability of the myelin structure. A numerical estimate predicts the degree of
dehydration required to favor myelin structures over flat lamellae.
Comment: 6 pages, 3 figures, submitted to Euro. Phys. J.