Convexity in source separation: Models, geometry, and algorithms
Source separation or demixing is the process of extracting multiple
components entangled within a signal. Contemporary signal processing presents a
host of difficult source separation problems, from interference cancellation to
background subtraction, blind deconvolution, and even dictionary learning.
Despite the recent progress in each of these applications, advances in
high-throughput sensor technology place demixing algorithms under pressure to
accommodate extremely high-dimensional signals, separate an ever larger number
of sources, and cope with more sophisticated signal and mixing models. These
difficulties are exacerbated by the need for real-time action in automated
decision-making systems.
Recent advances in convex optimization provide a simple framework for
efficiently solving numerous difficult demixing problems. This article provides
an overview of the emerging field, explains the theory that governs the
underlying procedures, and surveys algorithms that solve them efficiently. We
aim to equip practitioners with a toolkit for constructing their own demixing
algorithms that work, as well as concrete intuition for why they work.
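As a concrete illustration of the convex demixing framework the article surveys, the sketch below separates a mixture of a spike-sparse component and a DCT-sparse component by iterative soft thresholding (ISTA) applied to an l1-regularized least-squares objective. The signal size, sparsity levels, and the regularization weight `lam` are illustrative assumptions, not values taken from the article.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256

# Ground truth: x1 is sparse in time, x2 is sparse in the DCT domain.
x1 = np.zeros(n)
x1[rng.choice(n, 5, replace=False)] = rng.normal(0, 3, 5)
c2 = np.zeros(n)
c2[rng.choice(n, 5, replace=False)] = rng.normal(0, 3, 5)
x2 = idct(c2, norm="ortho")
y = x1 + x2  # the observed mixture

def soft(v, t):
    # Soft thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA on  min 0.5*||y - x1 - x2||^2 + lam*(||x1||_1 + ||dct(x2)||_1).
# The forward operator [I, IDCT] has squared norm 2, so step = 0.5 is safe.
lam, step = 0.05, 0.5
x1_hat = np.zeros(n)
c2_hat = np.zeros(n)
for _ in range(500):
    r = y - x1_hat - idct(c2_hat, norm="ortho")
    x1_hat = soft(x1_hat + step * r, step * lam)
    c2_hat = soft(c2_hat + step * dct(r, norm="ortho"), step * lam)

rel_err = np.linalg.norm(x1_hat - x1) / np.linalg.norm(x1)
```

Recovery succeeds here because the spike and DCT bases are mutually incoherent, which is exactly the kind of geometric condition the convex theory formalizes.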
Rapid, Robust, and Reliable Blind Deconvolution via Nonconvex Optimization
We study the problem of reconstructing two signals from their convolution.
This problem, known as blind deconvolution,
pervades many areas of science and technology, including astronomy, medical
imaging, optics, and wireless communications. A key challenge of this intricate
non-convex optimization problem is that it might exhibit many local minima. We
present an efficient numerical algorithm that is guaranteed to recover the
exact solution, when the number of measurements is (up to log-factors) slightly
larger than the information-theoretical minimum, and under reasonable
conditions on the two signals. The proposed regularized gradient descent algorithm
converges at a geometric rate and is provably robust in the presence of noise.
To the best of our knowledge, our algorithm is the first blind deconvolution
algorithm that is numerically efficient, robust against noise, and comes with
rigorous recovery guarantees under certain subspace conditions. Moreover,
numerical experiments not only provide empirical verification of our theory,
but also demonstrate that our method yields excellent performance even in
situations beyond our theoretical framework.
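The recovery setting in this abstract can be mimicked in a few lines: two signals lying in known low-dimensional subspaces are reconstructed from their circular convolution. For simplicity the sketch below uses alternating least squares started near the truth, not the paper's spectrally initialized regularized gradient descent; the dimensions, random subspaces, and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 64, 4  # signal length and subspace dimension (toy values)

def cconv(a, b):
    # Circular convolution computed via the FFT.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Known subspaces encoding the "subspace conditions" on the two signals.
B = rng.normal(size=(n, k))
C = rng.normal(size=(n, k))
u_true, v_true = rng.normal(size=k), rng.normal(size=k)
y = cconv(B @ u_true, C @ v_true)  # the observed convolution

# Alternating least squares: with one factor fixed, the fit is linear.
u = u_true + 0.1 * rng.normal(size=k)  # init near the truth (an assumption;
v = v_true + 0.1 * rng.normal(size=k)  # the paper uses a spectral init)
for _ in range(50):
    x = C @ v
    Mu = np.column_stack([cconv(B[:, i], x) for i in range(k)])
    u = np.linalg.lstsq(Mu, y, rcond=None)[0]
    w = B @ u
    Mv = np.column_stack([cconv(C[:, i], w) for i in range(k)])
    v = np.linalg.lstsq(Mv, y, rcond=None)[0]

loss = np.linalg.norm(cconv(B @ u, C @ v) - y)
# Recovery is defined only up to a scalar: scaling one signal by alpha and
# the other by 1/alpha leaves the convolution unchanged.
align = abs(u @ u_true) / (np.linalg.norm(u) * np.linalg.norm(u_true))
```

The scale ambiguity checked at the end is intrinsic to blind deconvolution, which is why recovery guarantees are stated up to such a rescaling.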
Drug Target Deconvolution in Cancer Cell Lines
Identifying the critical protein targets behind drug sensitivity profiles is an
important deconvolution problem in drug development: it helps explain the
mechanism of action of anti-cancer drugs on cell lines through the protein
targets in those cell lines. The problem can be formulated as a matrix
deconvolution, with one matrix for the cell-based drug sensitivity profiling
and another for the drug-target interaction data. Solving this model identifies
the vulnerability of the cell lines to inhibition of critical targets.
We used drug sensitivity data for 265 anti-cancer compounds over 990 cell models
derived from cancer patients and cultivated in the lab. Combining these data
with the drug-target interaction data, we applied a novel method, TDSBS (target
deconvolution with semi-blind source separation), to determine the critical
targets for each cell model. The critical protein targets identified by this
method proved clinically relevant: driver genes showed higher TDSBS values than
non-driver genes in the cell models. In this thesis we present a general
statistical model for identifying the protein targets inhibited by anti-cancer
drugs in drug/cell-line sensitivity experiments.
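The matrix formulation described above can be illustrated with a toy deconvolution: given a drug-by-target interaction matrix T and a drug-by-cell-line sensitivity matrix S ≈ T·X, recover a nonnegative target-vulnerability profile X for each cell line. The generative model, the dimensions, and the use of plain nonnegative least squares are illustrative assumptions here; this is a simplified stand-in, not the TDSBS method itself.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_drugs, n_targets, n_cells = 40, 8, 5  # toy sizes (assumptions)

# Binary drug-target interaction matrix: which drug inhibits which target.
T = (rng.random((n_drugs, n_targets)) < 0.2).astype(float)
# Ground-truth nonnegative target-vulnerability profiles per cell line.
X_true = rng.gamma(2.0, 1.0, size=(n_targets, n_cells))
# Simulated sensitivity: a drug is effective on a cell line when it
# inhibits a target that cell line is vulnerable to, plus small noise.
S = T @ X_true + 0.01 * rng.normal(size=(n_drugs, n_cells))

# Deconvolve each cell line's target vulnerabilities by nonnegative
# least squares, one column of S at a time.
X_hat = np.column_stack([nnls(T, S[:, j])[0] for j in range(n_cells)])
rel_err = np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)
```

In this stylized setting the recovered columns of X play the role of per-cell-line target scores, analogous to the TDSBS values that the thesis compares between driver and non-driver genes.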
- …