Multiple Illumination Phaseless Super-Resolution (MIPS) with Applications To Phaseless DOA Estimation and Diffraction Imaging
Phaseless super-resolution is the problem of recovering an unknown signal
from measurements of the magnitudes of the low frequency Fourier transform of
the signal. This problem arises in applications where measuring the phase and
making high-frequency measurements are either too costly or altogether
infeasible. The problem is especially challenging because it combines two
difficult problems: phase retrieval and classical super-resolution.

Comment: To appear in ICASSP 201
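The measurement model described above can be sketched in a few lines. This is an illustrative toy setup, not the paper's experimental configuration; the signal length, sparsity, and number of observed coefficients are all assumptions.

```python
import numpy as np

# Toy illustration (assumed parameters): we observe only the magnitudes of
# the low-frequency Fourier coefficients of an unknown sparse signal x.
rng = np.random.default_rng(0)
n = 64        # signal length (illustrative)
n_low = 16    # number of low-frequency coefficients observed (illustrative)

x = np.zeros(n)
x[rng.choice(n, size=4, replace=False)] = rng.standard_normal(4)  # sparse x

X = np.fft.fft(x)        # full Fourier transform of the signal
y = np.abs(X[:n_low])    # phaseless, low-frequency measurements

# Both the phase and the high frequencies are lost: recovering x from y
# combines phase retrieval (lost phase) with super-resolution (lost highs).
print(y.shape)  # (16,)
```

Note that many distinct signals map to the same magnitudes y, which is why additional structure (such as multiple illuminations) is needed to make recovery well posed.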
Signal Processing in Large Systems: a New Paradigm
For a long time, detection and parameter estimation methods for signal
processing have relied on asymptotic statistics in which the number n of
observations of a population grows large relative to the population size N,
i.e. n/N → ∞. Modern technological and societal advances now
demand the study of sometimes extremely large populations and simultaneously
require fast signal processing due to accelerated system dynamics. This results
in not-so-large practical ratios n/N, sometimes even smaller than one. A
disruptive change in classical signal processing methods has therefore been
initiated in the past ten years, mostly spurred by the field of large
dimensional random matrix theory. The early works in random matrix theory for
signal processing applications are however scarce and highly technical. This
tutorial provides an accessible methodological introduction to the modern tools
of random matrix theory and to the signal processing methods derived from them,
with an emphasis on simple illustrative examples.
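The failure of classical asymptotics at small n/N is easy to see numerically. In the sketch below (an assumed setup, not one of the tutorial's examples), data with true covariance I should have all sample-covariance eigenvalues near 1, yet for a moderate ratio they spread over the Marchenko-Pastur support [(1-√c)², (1+√c)²] with c = N/n.

```python
import numpy as np

# Illustrative setup (not from the tutorial): white Gaussian data with true
# covariance I, population size N comparable to the number of observations n.
rng = np.random.default_rng(0)
N, n = 200, 400                 # ratio c = N/n = 0.5, far from n/N -> infinity
X = rng.standard_normal((N, n))
S = X @ X.T / n                 # sample covariance estimate
eigs = np.linalg.eigvalsh(S)

# Despite every true eigenvalue being 1, the sample eigenvalues spread over
# roughly [(1-sqrt(c))^2, (1+sqrt(c))^2] = [0.086, 2.914] for c = 0.5.
print(eigs.min(), eigs.max())
```

Random-matrix-based methods correct for exactly this kind of spreading instead of assuming the sample covariance is close to the true one.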
Convexity in source separation: Models, geometry, and algorithms
Source separation or demixing is the process of extracting multiple
components entangled within a signal. Contemporary signal processing presents a
host of difficult source separation problems, from interference cancellation to
background subtraction, blind deconvolution, and even dictionary learning.
Despite the recent progress in each of these applications, advances in
high-throughput sensor technology place demixing algorithms under pressure to
accommodate extremely high-dimensional signals, separate an ever larger number
of sources, and cope with more sophisticated signal and mixing models. These
difficulties are exacerbated by the need for real-time action in automated
decision-making systems.
Recent advances in convex optimization provide a simple framework for
efficiently solving numerous difficult demixing problems. This article provides
an overview of the emerging field, explains the theory that governs the
underlying procedures, and surveys algorithms that solve them efficiently. We
aim to equip practitioners with a toolkit for constructing their own demixing
algorithms that work, as well as concrete intuition for why they work.
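One common convex demixing recipe (a hedged sketch, not the article's specific algorithm) separates two components that are sparse in mutually incoherent bases. Below, an observation y = a + Db mixes spikes a (sparse in the standard basis) with a smooth part Db (sparse in an orthonormal DCT basis D), and both are recovered by proximal gradient descent (ISTA) on an l1-regularized least-squares objective; all parameter choices are illustrative.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II synthesis matrix; columns are cosine atoms."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (t + 0.5) * k / n)
    C[0] /= np.sqrt(2.0)
    return C.T

def soft(v, thresh):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

rng = np.random.default_rng(1)
n = 128
D = dct_basis(n)

# Ground truth: a few spikes plus a smooth (low-frequency DCT) component.
a_true = np.zeros(n); a_true[rng.choice(n, 5, replace=False)] = 3.0
b_true = np.zeros(n); b_true[:4] = rng.standard_normal(4)
y = a_true + D @ b_true

# Minimize 0.5*||y - a - D b||^2 + lam*(||a||_1 + ||b||_1) via ISTA.
# Step = 1/L with L = ||[I, D]||^2 = 2 since D has orthonormal columns.
lam, step = 0.05, 0.5
a = np.zeros(n); b = np.zeros(n)
for _ in range(1000):
    r = a + D @ b - y                    # gradient of the smooth term
    a = soft(a - step * r, step * lam)   # prox step on the spike component
    b = soft(b - step * (D.T @ r), step * lam)  # prox step on DCT coeffs
```

The incoherence between spikes and cosines is what makes the split identifiable; the same template extends to other atom pairs (e.g. low-rank plus sparse) by swapping the proximal operators.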