430 research outputs found
Convexity in source separation: Models, geometry, and algorithms
Source separation or demixing is the process of extracting multiple
components entangled within a signal. Contemporary signal processing presents a
host of difficult source separation problems, from interference cancellation to
background subtraction, blind deconvolution, and even dictionary learning.
Despite the recent progress in each of these applications, advances in
high-throughput sensor technology place demixing algorithms under pressure to
accommodate extremely high-dimensional signals, separate an ever larger number
of sources, and cope with more sophisticated signal and mixing models. These
difficulties are exacerbated by the need for real-time action in automated
decision-making systems.
Recent advances in convex optimization provide a simple framework for
efficiently solving numerous difficult demixing problems. This article provides
an overview of the emerging field, explains the theory that governs the
underlying procedures, and surveys algorithms that solve them efficiently. We
aim to equip practitioners with a toolkit for constructing their own demixing
algorithms that work, as well as concrete intuition for why they work.
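One of the simplest demixing programs of the kind this article surveys separates a spike-sparse component from a component that is sparse in an incoherent basis, by minimizing the sum of the two $\ell_1$ norms subject to the observation constraint. The sketch below (our illustration, not the article's code; all names are ours) casts that program as a linear program with SciPy, using the DCT as the second basis:

```python
import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 64

# Orthonormal DCT basis: its columns are incoherent with the spike basis.
D = dct(np.eye(n), norm="ortho", axis=0)

# Ground truth: a few spikes plus a few DCT atoms.
x_true = np.zeros(n); x_true[[5, 40]] = [2.0, -1.5]   # sparse in identity
w_true = np.zeros(n); w_true[[3, 20]] = [1.0, 2.5]    # sparse in DCT
y = x_true + D @ w_true                               # observed superposition

# Solve  min ||x||_1 + ||w||_1  s.t.  x + D w = y
# via the standard LP split x = xp - xn, w = wp - wn, all parts >= 0.
c = np.ones(4 * n)
A_eq = np.hstack([np.eye(n), -np.eye(n), D, -D])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
xp, xn, wp, wn = np.split(res.x, 4)
x_hat, w_hat = xp - xn, wp - wn   # candidate demixed components
```

Since the true pair is feasible for the program, the optimum's combined $\ell_1$ norm can never exceed that of the truth; exact recovery of the pair additionally requires the incoherence conditions the article discusses.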
Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps
Random sinusoidal features are a popular approach for speeding up
kernel-based inference in large datasets. Prior to the inference stage, the
approach suggests performing dimensionality reduction by first multiplying each
data vector by a random Gaussian matrix, and then computing an element-wise
sinusoid. Theoretical analysis shows that collecting a sufficient number of
such features can be reliably used for subsequent inference in kernel
classification and regression.
In this work, we demonstrate that with a mild increase in the dimension of
the embedding, it is also possible to reconstruct the data vector from such
random sinusoidal features, provided that the underlying data is sparse enough.
In particular, we propose a numerically stable algorithm for reconstructing the
data vector given the nonlinear features, and analyze its sample complexity.
Our algorithm can be extended to other types of structured inverse problems,
such as demixing a pair of sparse (but incoherent) vectors. We support the
efficacy of our approach via numerical experiments.
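The feature map described in the abstract is simply a random Gaussian projection followed by an element-wise sinusoid; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def random_sinusoidal_features(x, m, rng):
    """Map a data vector x in R^n to m random sinusoidal features.

    Follows the recipe above: multiply by a random Gaussian matrix,
    then apply an element-wise sinusoid.
    """
    n = x.shape[0]
    A = rng.standard_normal((m, n))   # random Gaussian embedding
    return np.sin(A @ x), A           # return A so the map can be studied/inverted

rng = np.random.default_rng(1)
x = np.zeros(128); x[[7, 50, 90]] = [1.0, -0.5, 2.0]  # a sparse data vector
z, A = random_sinusoidal_features(x, m=64, rng=rng)
# z has 64 entries, each bounded in [-1, 1] since sin is bounded.
```

Recovering x from (z, A) is the nonlinear inverse problem the paper addresses; the sketch only instantiates the forward map.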
Sparse Signal Processing Concepts for Efficient 5G System Design
As it becomes increasingly apparent that 4G will not be able to meet the
emerging demands of future mobile communication systems, the questions of what
could make up a 5G system, what the crucial challenges are, and what the key
drivers are have become part of intensive, ongoing discussions. Partly due to the advent of
compressive sensing, methods that can optimally exploit sparsity in signals
have received tremendous attention in recent years. In this paper we will
describe a variety of scenarios in which signal sparsity arises naturally in 5G
wireless systems. Signal sparsity and the associated rich collection of tools
and algorithms will thus be a viable source for innovation in 5G wireless
system design. We will describe applications of this sparse signal processing
paradigm in MIMO random access, cloud radio access networks, compressive
channel-source network coding, and embedded security. We will also emphasize
important open problems that may arise in 5G system design, for which sparsity
will potentially play a key role in the solution.
Comment: 18 pages, 5 figures, accepted for publication in IEEE Access
Painless Breakups -- Efficient Demixing of Low Rank Matrices
Assume we are given a sum of linear measurements of $s$ different rank-$r$
matrices of the form $y = \sum_{k=1}^{s} \mathcal{A}_k(X_k)$. When and under
which conditions is it possible to extract (demix) the individual matrices
$X_k$ from the single measurement vector $y$? And can we do the demixing
numerically efficiently? We present two computationally efficient algorithms
based on hard thresholding to solve this low rank demixing problem. We prove
that under suitable conditions these algorithms are guaranteed to converge to
the correct solution at a linear rate. We discuss applications in connection
with quantum tomography and the Internet-of-Things. Numerical simulations
demonstrate empirically the performance of the proposed algorithms.
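The core building block of hard-thresholding methods for low-rank matrices is the projection onto the set of rank-$r$ matrices, computed via a truncated SVD. A minimal sketch of that step (our illustration, not the paper's code):

```python
import numpy as np

def hard_threshold_rank(X, r):
    """Project X onto the set of matrices of rank at most r.

    Keeps the r largest singular values and their singular vectors;
    by Eckart-Young this is the best rank-r approximation of X in
    Frobenius norm.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
# A rank-2 matrix plus small noise; thresholding recovers it closely.
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = L + 1e-3 * rng.standard_normal((20, 15))
L_hat = hard_threshold_rank(X, 2)
```

In a demixing iteration this projection would be applied to each candidate matrix after a gradient step on the measurement misfit.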
Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing
We study the question of extracting a sequence of functions
$\{f_i, g_i\}_{i=1}^{s}$ from observing only the sum of
their convolutions, i.e., from $y = \sum_{i=1}^{s} f_i \ast g_i$. While convex optimization techniques
are able to solve this joint blind deconvolution-demixing problem provably and
robustly under certain conditions, for medium-size or large-size problems we
need computationally faster methods without sacrificing the benefits of
mathematical rigor that come with convex methods. In this paper, we present a
non-convex algorithm which guarantees exact recovery under conditions that are
competitive with convex optimization methods, with the additional advantage of
being computationally much more efficient. Our two-step algorithm converges to
the global minimum linearly and is also robust in the presence of additive
noise. While the derived performance bounds are suboptimal in terms of the
information-theoretic limit, numerical simulations show remarkable performance
even if the number of measurements is close to the number of degrees of
freedom. We discuss an application of the proposed framework in wireless
communications in connection with the Internet-of-Things.
Comment: Accepted to Information and Inference: a Journal of the IMA
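The observation model $y = \sum_i f_i \ast g_i$ takes only a few lines to write down; the sketch below (our illustration) uses circular convolution, a common convention in this line of work, computed via the FFT:

```python
import numpy as np

def circ_conv(f, g):
    """Circular convolution of two equal-length vectors via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

rng = np.random.default_rng(3)
n, s = 32, 4                       # signal length, number of source pairs
fs = rng.standard_normal((s, n))
gs = rng.standard_normal((s, n))

# The single observed vector: sum of the s pairwise convolutions.
y = sum(circ_conv(f, g) for f, g in zip(fs, gs))
```

The joint blind deconvolution-demixing problem is the inverse of this map: recover all $2s$ vectors from $y$ alone, given structural priors on each.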
The achievable performance of convex demixing
Demixing is the problem of identifying multiple structured signals from a
superimposed, undersampled, and noisy observation. This work analyzes a general
framework, based on convex optimization, for solving demixing problems. When
the constituent signals follow a generic incoherence model, this analysis leads
to precise recovery guarantees. These results admit an attractive
interpretation: each signal possesses an intrinsic degrees-of-freedom
parameter, and demixing can succeed if and only if the dimension of the
observation exceeds the total degrees of freedom present in the observation.
Sharp recovery bounds for convex demixing, with applications
Demixing refers to the challenge of identifying two structured signals given
only the sum of the two signals and prior information about their structures.
Examples include the problem of separating a signal that is sparse with respect
to one basis from a signal that is sparse with respect to a second basis, and
the problem of decomposing an observed matrix into a low-rank matrix plus a
sparse matrix. This paper describes and analyzes a framework, based on convex
optimization, for solving these demixing problems, and many others. This work
introduces a randomized signal model which ensures that the two structures are
incoherent, i.e., generically oriented. For an observation from this model,
this approach identifies a summary statistic that reflects the complexity of a
particular signal. The difficulty of separating two structured, incoherent
signals depends only on the total complexity of the two structures. Some
applications include (i) demixing two signals that are sparse in mutually
incoherent bases; (ii) decoding spread-spectrum transmissions in the presence
of impulsive errors; and (iii) removing sparse corruptions from a low-rank
matrix. In each case, the theoretical analysis of the convex demixing method
closely matches its empirical behavior.
Comment: 51 pages, 13 figures, 2 tables. This version accepted to J. Found.
Comput. Math.