Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing
We study the question of extracting a sequence of function pairs
{(f_i, g_i)}_{i=1}^s from observing only the sum of
their convolutions, i.e., from y = sum_{i=1}^s f_i * g_i. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
Comment: Accepted to Information and Inference: A Journal of the IMA
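As a toy illustration of the measurement model in the abstract above (a hypothetical sketch with made-up sizes, not the paper's recovery algorithm), the observation is the sum of s circular convolutions, which can be formed efficiently in the Fourier domain:

```python
import numpy as np

rng = np.random.default_rng(0)
s, L = 3, 64  # number of source pairs and signal length (toy sizes)

# Forward model of joint blind deconvolution-demixing:
# observe only y = sum_i f_i * g_i (circular convolution).
f = rng.standard_normal((s, L))
g = rng.standard_normal((s, L))

# Circular convolution via the convolution theorem:
# f_i * g_i = IFFT(FFT(f_i) . FFT(g_i))
y = np.sum(
    np.fft.ifft(np.fft.fft(f, axis=1) * np.fft.fft(g, axis=1), axis=1).real,
    axis=0,
)

# Sanity check: direct O(L^2) circular convolution of the first pair
direct = np.array(
    [sum(f[0][n] * g[0][(k - n) % L] for n in range(L)) for k in range(L)]
)
```

The FFT route is what makes such models tractable at medium and large problem sizes, since each convolution costs O(L log L) instead of O(L^2).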
Recovering convex boundaries from blurred and noisy observations
We consider the problem of estimating convex boundaries from blurred and
noisy observations. In our model, the convolution of an intensity function f
is observed with additive Gaussian white noise. The function f is assumed to
have convex support G whose boundary is to be recovered. Rather than directly
estimating the intensity function, we develop a procedure which is based on
estimating the support function of the set G. This approach is closely
related to the method of geometric hyperplane probing, a well-known technique
in computer vision applications. We establish bounds that reveal how the
estimation accuracy depends on the ill-posedness of the convolution operator
and the behavior of the intensity function near the boundary.
Comment: Published at http://dx.doi.org/10.1214/009053606000000326 in the
Annals of Statistics (http://www.imstat.org/aos/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
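To make the support-function idea concrete: for a convex set G, h_G(u) = max_{x in G} <x, u>, and knowing h_G in all directions determines the boundary of G. The sketch below (a hypothetical toy setup with noisy boundary samples of the unit disk, not the paper's estimator) computes an empirical support function by maximizing projections over the sample:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples on the boundary of the unit disk (toy data)
n = 2000
angles = rng.uniform(0, 2 * np.pi, n)
pts = np.c_[np.cos(angles), np.sin(angles)]
pts += 0.01 * rng.standard_normal(pts.shape)  # additive noise

# Probe directions u_theta and the empirical support function
# h_hat(theta) = max over samples of <x, u_theta>
thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
u = np.c_[np.cos(thetas), np.sin(thetas)]
h_hat = (pts @ u.T).max(axis=0)

# For the unit disk the true support function is identically 1
err = np.abs(h_hat - 1.0).max()
```

Each estimated value h_hat(theta) gives a supporting halfplane {x : <x, u_theta> <= h_hat(theta)}; intersecting these halfplanes over the probe directions recovers a polygonal approximation of the convex boundary, which is the "hyperplane probing" viewpoint mentioned in the abstract.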
On deconvolution of distribution functions
The subject of this paper is the problem of nonparametric estimation of a
continuous distribution function from observations with measurement errors. We
study the minimax complexity of this problem when the unknown distribution has
a density belonging to a Sobolev class and the error density is ordinary
smooth. We develop rate-optimal estimators based on direct inversion of the
empirical characteristic function. We also derive minimax affine estimators of
the distribution function which are given by an explicit convex optimization
problem. Adaptive versions of these estimators are proposed, and some numerical
results demonstrating good practical behavior of the developed procedures are
presented.
Comment: Published at http://dx.doi.org/10.1214/11-AOS907 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
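The direct-inversion idea behind such estimators can be sketched as follows: if Y = X + eps with eps independent of X and the error density known, then phi_X(t) = phi_Y(t) / phi_eps(t), so the empirical characteristic function of the contaminated sample, divided by the error's characteristic function, estimates the characteristic function of X. The example below (a hypothetical toy with Gaussian X and ordinary-smooth Laplace error, not the paper's estimator) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
x = rng.normal(0.0, 1.0, n)      # unobserved X ~ N(0, 1)
eps = rng.laplace(0.0, 0.5, n)   # ordinary-smooth Laplace(0, b=0.5) error
y = x + eps                      # contaminated observations

t = np.linspace(-2, 2, 9)

# Empirical characteristic function of Y at the grid points t
ecf_y = np.exp(1j * np.outer(t, y)).mean(axis=1)

# Laplace(0, b) characteristic function: 1 / (1 + b^2 t^2)
phi_eps = 1.0 / (1.0 + (0.5 * t) ** 2)

# Deconvolved estimate of phi_X, compared with the N(0,1) truth exp(-t^2/2)
phi_x_hat = ecf_y / phi_eps
phi_x_true = np.exp(-t ** 2 / 2)
err = np.abs(phi_x_hat - phi_x_true).max()
```

Because phi_eps decays only polynomially for ordinary-smooth errors, the division amplifies the sampling noise in ecf_y only polynomially in t, which is what makes the polynomial minimax rates mentioned in the abstract attainable; a Fourier inversion of phi_x_hat (with regularization at large t) then yields the distribution-function estimate.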