FRI Sampling With Arbitrary Kernels
This paper addresses the problem of sampling non-bandlimited signals within the Finite Rate of Innovation (FRI) setting. We had previously shown that, by using sampling kernels whose integer span contains specific exponentials (generalized Strang-Fix conditions), it is possible to devise non-iterative, fast reconstruction algorithms from very low-rate samples. Yet, the accuracy and sensitivity to noise of these algorithms is highly dependent on these exponential reproducing kernels, or more precisely, on the exponentials that they reproduce. Hence, our first contribution here is to provide clear guidelines on how to choose the sampling kernels optimally, so that the reconstruction quality is maximized in the presence of noise. The optimality of these kernels is validated by comparison with Cramér-Rao lower bounds (CRB). Our second contribution is to relax the exact exponential reproduction requirement. Instead, we demonstrate that arbitrary sampling kernels can reproduce the "best" exponentials to quite high accuracy in general, and that applying the exact FRI algorithms in this approximate context yields near-optimal reconstruction accuracy at practical noise levels. Essentially, we propose a universal extension of the FRI approach to arbitrary sampling kernels. Numerical results checked against the CRB validate the various contributions of the paper and, in particular, demonstrate the ability of arbitrary sampling kernels to be used in FRI algorithms.
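The reconstruction step these FRI methods share is the classical annihilating-filter (Prony) procedure applied to exponential moments. A minimal, self-contained sketch with illustrative parameters (the kernel and moment computation are idealized here, not the paper's exact setup):

```python
import numpy as np

# Illustrative FRI setup: K Diracs on [0, 1) with unknown locations/amplitudes
rng = np.random.default_rng(0)
K = 3
t_true = np.sort(rng.uniform(0.1, 0.9, K))   # Dirac locations
a_true = rng.uniform(1.0, 2.0, K)            # amplitudes

# Idealized exponential moments s[m] = sum_k a_k exp(-2j*pi*m*t_k);
# in the paper these are obtained from low-rate samples through the kernel
M = 2 * K + 1
m = np.arange(M)
s = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

# Annihilating filter h with sum_j h[j] * s[m - j] = 0: null vector of a Toeplitz matrix
T = np.array([[s[i + K - j] for j in range(K + 1)] for i in range(M - K)])
h = np.linalg.svd(T)[2][-1].conj()

# Roots of h are exp(-2j*pi*t_k); amplitudes follow by Vandermonde least squares
u = np.roots(h)
t_est = np.sort(np.mod(-np.angle(u) / (2 * np.pi), 1.0))
V = np.exp(-2j * np.pi * np.outer(m, t_est))
a_est = np.real(np.linalg.lstsq(V, s, rcond=None)[0])
```

With noisy moments the filter is taken as the smallest singular vector (a total-least-squares solution), which is what the SVD step above computes; the exponentials chosen then govern the conditioning of V, which is precisely the sensitivity the paper analyzes.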
Exact and approximate Strang-Fix conditions to reconstruct signals with finite rate of innovation from samples taken with arbitrary kernels
In the last few years, several new methods have been developed for the sampling and
exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and
reconstruction schemes. An example of valid kernels, which we use throughout the thesis,
is given by the family of exponential reproducing functions. These satisfy the generalised
Strang-Fix conditions, which ensure that proper linear combinations of the kernel with its
shifted versions reproduce polynomials or exponentials exactly.
The first contribution of the thesis is to analyse the behaviour of these kernels in the
case of noisy measurements in order to provide clear guidelines on how to choose the exponential
reproducing kernel that leads to the most stable reconstruction when estimating
FRI signals from noisy samples. We then depart from the situation in which we can choose
the sampling kernel and develop a new strategy that is universal in that it works with any
kernel. We do so by noting that meeting the exact exponential reproduction condition is
too stringent a constraint. We thus allow for a controlled error in the reproduction formula
in order to use the exponential reproduction idea with arbitrary kernels and develop
a universal reconstruction method which is stable and robust to noise.
Numerical results validate the various contributions of the thesis and in particular show
that the approximate exponential reproduction strategy leads to more stable and accurate
reconstruction results than those obtained when using the exact recovery methods.
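The "controlled error" idea can be illustrated by fitting reproduction coefficients in the least-squares sense for a kernel that does not satisfy the Strang-Fix conditions exactly; the Gaussian kernel, grid, and exponential below are illustrative choices, not the thesis's exact setup:

```python
import numpy as np

# An arbitrary kernel that does not reproduce exponentials exactly (illustrative)
def phi(t):
    return np.exp(-t**2 / (2 * 0.5**2))

alpha = 1j * np.pi / 8            # exponential e^{alpha t} to reproduce approximately
N = 32                            # number of integer shifts of the kernel
t = np.linspace(0.0, N - 1.0, 2000)

# Approximate reproduction: choose c_n minimizing || sum_n c_n phi(t - n) - e^{alpha t} ||
Phi = np.stack([phi(t - n) for n in range(N)], axis=1).astype(complex)
c, *_ = np.linalg.lstsq(Phi, np.exp(alpha * t), rcond=None)

# Relative reproduction error away from the boundary: small, but not exactly zero
mask = (t > 4) & (t < N - 5)
target = np.exp(alpha * t[mask])
rel_err = np.linalg.norm(Phi[mask] @ c - target) / np.linalg.norm(target)
```

The residual is the "controlled error" of the approximate strategy: it does not vanish, as it would for an exponential reproducing kernel, but it is small enough that the standard FRI machinery still applies.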
Sampling and Super-resolution of Sparse Signals Beyond the Fourier Domain
Recovering a sparse signal from its low-pass projections in the Fourier
domain is a problem of broad interest in science and engineering and is
commonly referred to as super-resolution. In many cases, however, Fourier
domain may not be the natural choice. For example, in holography, low-pass
projections of sparse signals are obtained in the Fresnel domain. Similarly,
time-varying system identification relies on low-pass projections on the space
of linear frequency modulated signals. In this paper, we study the recovery of
sparse signals from low-pass projections in the Special Affine Fourier
Transform domain (SAFT). The SAFT parametrically generalizes a number of well
known unitary transformations that are used in signal processing and optics. In
analogy to Shannon's sampling framework, we specify sampling theorems for
recovery of sparse signals considering three specific cases: (1) sampling with
arbitrary, bandlimited kernels, (2) sampling with smooth, time-limited kernels,
and (3) recovery from Gabor transform measurements linked with the SAFT
domain. Our work offers a unifying perspective on the sparse sampling problem
which is compatible with the Fourier, Fresnel and Fractional Fourier domain
based results. In deriving our results, we introduce the SAFT series (analogous
to the Fourier series) and the short time SAFT, and study convolution theorems
that establish a convolution-multiplication property in the SAFT domain.
Comment: 42 pages, 3 figures, manuscript under review
Shapes From Pixels
Continuous-domain visual signals are usually captured as discrete (digital)
images. This operation is not invertible in general, in the sense that the
continuous-domain signal cannot be exactly reconstructed based on the discrete
image, unless it satisfies certain constraints (\emph{e.g.}, bandlimitedness).
In this paper, we study the problem of recovering shape images with smooth
boundaries from a set of samples. Thus, the reconstructed image is constrained
to regenerate the same samples (consistency), as well as forming a shape
(bilevel) image. We initially formulate the reconstruction technique by
minimizing the shape perimeter over the set of consistent binary shapes. Next,
we relax the non-convex shape constraint to transform the problem into
minimizing the total variation over consistent non-negative-valued images. We
also introduce a requirement (called reducibility) that guarantees equivalence
between the two problems. We illustrate that the reducibility property
effectively sets a requirement on the minimum sampling density. One can draw
analogy between the reducibility property and the so-called restricted isometry
property (RIP) in compressed sensing, which establishes the equivalence of the
\ell_0 minimization with the relaxed \ell_1 minimization. We also evaluate
the performance of the relaxed alternative in various numerical experiments.
Comment: 13 pages, 14 figures
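The perimeter-to-TV relaxation rests on the fact that, for a bilevel image, the (anisotropic) total variation measures boundary length. A minimal numeric sketch, with illustrative image size and shape:

```python
import numpy as np

def tv(img):
    """Discrete anisotropic total variation: sum of absolute finite differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

# A bilevel (shape) image: a disk of radius r. Its anisotropic TV is the
# l1 boundary length, close to 8 * r for a disk.
n, r = 256, 64
y, x = np.mgrid[0:n, 0:n]
disk = ((x - n / 2) ** 2 + (y - n / 2) ** 2 <= r ** 2).astype(float)
perimeter_tv = tv(disk)
```

In the relaxed problem of the paper, this functional is minimized over nonnegative images that reproduce the observed samples; the reducibility condition then guarantees that the minimizer coincides with the binary-shape solution.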
Innovation Rate Sampling of Pulse Streams with Application to Ultrasound Imaging
Signals comprised of a stream of short pulses appear in many applications
including bio-imaging and radar. The recent finite rate of innovation
framework has paved the way to low-rate sampling of such pulses by noticing
that only a small number of parameters per unit time are needed to fully
describe these signals. Unfortunately, for high rates of innovation, existing
sampling schemes are numerically unstable. In this paper we propose a general
sampling approach which leads to stable recovery even in the presence of many
pulses. We begin by deriving a condition on the sampling kernel which allows
perfect reconstruction of periodic streams from the minimal number of samples.
We then design a compactly supported class of filters, satisfying this
condition. The periodic solution is extended to finite and infinite streams,
and is shown to be numerically stable even for a large number of pulses. High
noise robustness is also demonstrated when the delays are sufficiently
separated. Finally, we process ultrasound imaging data using our techniques,
and show that substantial rate reduction with respect to traditional ultrasound
sampling schemes can be achieved.
Comment: 14 pages, 13 figures
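One compactly supported filter family used in this line of work (an assumption here, not spelled out in the abstract) is a rectangular window of length tau multiplying a finite Fourier series, so that the filter vanishes outside the window while passing a chosen set of Fourier coefficients (a "sum of sincs" in frequency). A sketch under that assumption, with illustrative coefficients:

```python
import numpy as np

# Assumed sum-of-sincs style construction: compactly supported in time,
# a sum of sincs in frequency centred on the passed coefficients.
tau = 1.0
ks = np.arange(-3, 4)                 # Fourier-coefficient indices to pass
b = np.ones(ks.size)                  # illustrative (flat) weights

def g(t):
    t = np.asarray(t, dtype=float)
    window = (np.abs(t) <= tau / 2).astype(float)   # rect of length tau
    series = sum(bk * np.exp(2j * np.pi * k * t / tau) for k, bk in zip(ks, b))
    return window * series

# g vanishes outside [-tau/2, tau/2], so the sampling hardware only needs
# a finite-length analog filter.
```

The free weights b are what allow the filter to be tuned for numerical stability when many pulses are present, which is the stability gain the abstract describes.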
Sampling and Reconstruction of Shapes with Algebraic Boundaries
We present a sampling theory for a class of binary images with finite rate of
innovation (FRI). Every image in our model is the restriction of
\mathds{1}_{\{p\leq0\}} to the image plane, where \mathds{1} denotes the
indicator function and p is some real bivariate polynomial. This particularly
means that the boundaries in the image form a subset of an algebraic curve with
the implicit polynomial p. We show that the image parameters --i.e., the
polynomial coefficients-- satisfy a set of linear annihilation equations with
the coefficients being the image moments. The inherent sensitivity of the
moments to noise makes the reconstruction process numerically unstable and
narrows the choice of the sampling kernels to polynomial reproducing kernels.
As a remedy to these problems, we replace conventional moments with more stable
\emph{generalized moments} that are adjusted to the given sampling kernel. The
benefits are threefold: (1) it relaxes the requirements on the sampling
kernels, (2) it produces annihilation equations that are numerically robust,
and (3) it extends the results to images with unbounded boundaries. We
further reduce the sensitivity of the reconstruction process to noise by taking
into account the sign of the polynomial at certain points, and sequentially
enforcing measurement consistency. We consider various numerical experiments to
demonstrate the performance of our algorithm in reconstructing binary images,
including low to moderate noise levels and a range of realistic sampling
kernels.
Comment: 12 pages, 14 figures
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed forward to sophisticated software algorithms, leaving only
those delicate, finely-tuned tasks for the circuit level.
In this paper, we review sampling strategies which target reduction of the
ADC rate below Nyquist. Our survey covers classic works from the early 1950s
through recent publications from the past several years.
The prime focus is bridging theory and practice, that is, to pinpoint the
potential of sub-Nyquist strategies to move from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, help promote the
sub-Nyquist premise into practical applications, and encourage further
research into this exciting new frontier.
Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
Time Delay Estimation from Low Rate Samples: A Union of Subspaces Approach
Time delay estimation arises in many applications in which a multipath medium
has to be identified from pulses transmitted through the channel. Various
approaches have been proposed in the literature to identify time delays
introduced by multipath environments. However, these methods either operate on
the analog received signal, or require high sampling rates in order to achieve
reasonable time resolution. In this paper, our goal is to develop a unified
approach to time delay estimation from low rate samples of the output of a
multipath channel. Our methods result in perfect recovery of the multipath
delays from samples of the channel output at the lowest possible rate, even in
the presence of overlapping transmitted pulses. This rate depends only on the
number of multipath components and the transmission rate, but not on the
bandwidth of the probing signal. In addition, our development allows for a
variety of different sampling methods. By properly manipulating the low-rate
samples, we show that the time delays can be recovered using the well-known
ESPRIT algorithm. Combining results from sampling theory with those obtained in
the context of direction of arrival estimation methods, we develop necessary
and sufficient conditions on the transmitted pulse and the sampling functions
in order to ensure perfect recovery of the channel parameters at the minimal
possible rate. Our results can be viewed in a broader context, as a sampling
theorem for analog signals defined over an infinite union of subspaces
…
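The ESPRIT step mentioned above can be sketched on idealized low-rate samples in which each delay appears as a complex exponential; the channel, amplitudes, and sample count below are illustrative, and the kernel/equalization stage that produces such samples is abstracted away:

```python
import numpy as np

# Hypothetical multipath channel: K delays (normalized to [0, 1)) and amplitudes
rng = np.random.default_rng(1)
K = 3
tau = np.sort(rng.uniform(0.05, 0.95, K))
a = rng.uniform(0.5, 1.5, K)

# After kernel correction, the low-rate samples form a sum of exponentials:
# x[m] = sum_k a_k exp(-2j*pi*m*tau_k)
M = 16
m = np.arange(M)
x = (a[None, :] * np.exp(-2j * np.pi * np.outer(m, tau))).sum(axis=1)

# ESPRIT: stack shifted windows, extract the signal subspace, and read the
# delays off the eigenvalues of the shift-invariance relation
L = M // 2
H = np.array([x[i:i + L] for i in range(M - L + 1)]).T   # L x (M-L+1) Hankel
U, _, _ = np.linalg.svd(H)
Us = U[:, :K]                                            # signal subspace
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]                   # shift invariance
tau_est = np.sort(np.mod(-np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi), 1.0))
```

The subspace step is what gives the method its robustness to overlapping pulses: the delays are recovered from the rotation between the two subarray subspaces rather than from peaks of a correlation.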