Sampling and Reconstruction of Shapes with Algebraic Boundaries
We present a sampling theory for a class of binary images with finite rate of
innovation (FRI). Every image in our model is the restriction of
$\mathds{1}_{\{p\leq 0\}}$ to the image plane, where $\mathds{1}$ denotes the
indicator function and $p$ is some real bivariate polynomial. In particular,
this means that the boundaries in the image form a subset of the algebraic
curve defined by the implicit polynomial $p$. We show that the image
parameters, i.e., the polynomial coefficients, satisfy a set of linear
annihilation equations with
polynomial coefficients-- satisfy a set of linear annihilation equations with
the coefficients being the image moments. The inherent sensitivity of the
moments to noise makes the reconstruction process numerically unstable and
narrows the choice of the sampling kernels to polynomial reproducing kernels.
As a remedy to these problems, we replace conventional moments with more stable
\emph{generalized moments} that are adjusted to the given sampling kernel. The
benefits are threefold: (1) the requirements on the sampling kernels are
relaxed, (2) the resulting annihilation equations are numerically more robust,
and (3) the results extend to images with unbounded boundaries. We
further reduce the sensitivity of the reconstruction process to noise by taking
into account the sign of the polynomial at certain points, and sequentially
enforcing measurement consistency. We present various numerical experiments,
covering low to moderate noise levels and a range of realistic sampling
kernels, to demonstrate the performance of our algorithm in reconstructing
binary images.

Comment: 12 pages, 14 figures
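To make the model concrete, here is a minimal numerical sketch (not the paper's reconstruction algorithm): it rasterizes a binary image $\mathds{1}_{\{p\leq 0\}}$ for a chosen bivariate polynomial $p$ and computes raw image moments of the kind that serve as coefficients of the annihilation equations. Function names and the grid size are illustrative.

```python
# Rasterize 1_{p(x,y) <= 0} for a bivariate polynomial p and compute raw
# moments m_{jk} = integral of x^j * y^k over the black region.
import numpy as np

def binary_image(coeffs, n=256):
    """Rasterize 1_{p(x,y) <= 0} on [-1,1]^2; `coeffs` maps (j,k) -> c_{jk}."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    p = sum(c * X**j * Y**k for (j, k), c in coeffs.items())
    return (p <= 0).astype(float)

def raw_moment(img, j, k, n=256):
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    dA = (2.0 / n) ** 2                     # approximate pixel area
    return float((X**j * Y**k * img).sum() * dA)

# Example: a disk, p(x,y) = x^2 + y^2 - 0.5.
img = binary_image({(2, 0): 1.0, (0, 2): 1.0, (0, 0): -0.5})
print(raw_moment(img, 0, 0))   # ~ area of the disk, pi * 0.5
```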
An Introduction To Compressive Sampling [A sensing/sampling paradigm that goes against the common knowledge in data acquisition]
This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use. To make this possible, CS relies on two principles: sparsity, which pertains to the signals of interest, and incoherence, which pertains to the sensing modality.
Our intent in this article is to overview the basic CS theory that emerged in the works [1]–[3], present the key mathematical ideas underlying this theory, and survey a couple of important results in the field. Our goal is to explain CS as plainly as possible, and so our article is mainly of a tutorial nature. One of the charms of this theory is that it draws from various subdisciplines within the applied mathematical sciences, most notably probability theory. In this review, we have decided to highlight this aspect and especially the fact that randomness can — perhaps surprisingly — lead to very effective sensing mechanisms. We will also discuss significant implications, explain why CS is a concrete protocol for sensing and compressing data simultaneously (thus the name), and conclude our tour by reviewing important applications.
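As a toy illustration of the sparsity-plus-incoherence principle described above (not code from the article), the sketch below measures a sparse vector with a random Gaussian matrix and recovers it by l1 minimization via plain iterative soft-thresholding (ISTA). The sizes n, m, k and the regularization weight are arbitrary choices.

```python
# A length-n signal with k nonzeros, measured through a random Gaussian
# matrix A with m << n rows, recovered by minimizing
# 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # incoherent random sensing
y = A @ x_true                                  # m measurements, m << n

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
x = np.zeros(n)
for _ in range(2000):                           # ISTA iterations
    g = x - step * A.T @ (A @ x - y)            # gradient step on the quadratic
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```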
One-bit Distributed Sensing and Coding for Field Estimation in Sensor Networks
This paper formulates and studies a general distributed field reconstruction
problem using a dense network of noisy one-bit randomized scalar quantizers in
the presence of additive observation noise of unknown distribution. A
constructive quantization, coding, and field reconstruction scheme is developed
and an upper bound on the associated mean squared error (MSE) at any point and
any snapshot is derived in terms of the local spatio-temporal smoothness
properties of the underlying field. It is shown that when the noise, sensor
placement pattern, and the sensor schedule satisfy certain weak technical
requirements, it is possible to drive the MSE to zero with increasing sensor
density at points of field continuity while ensuring that the per-sensor
bitrate and sensing-related network overhead rate simultaneously go to zero.
The proposed scheme achieves the order-optimal MSE versus sensor density
scaling behavior for the class of spatially constant spatio-temporal fields.

Comment: Fixed typos, otherwise same as V2. 27 pages (in one-column review format), 4 figures. Submitted to IEEE Transactions on Signal Processing. Current version is updated for journal submission: revised author list, modified formulation and framework. Previous version appeared in Proceedings of the Allerton Conference on Communication, Control, and Computing 200
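The following sketch illustrates the basic mechanism that randomized one-bit quantization schemes of this kind rely on: each sensor compares its noisy observation to an independent random threshold and transmits a single bit, and the bit average recovers the field value regardless of the noise distribution. It is a simplification, not the paper's full quantization-coding-reconstruction scheme; the dither range c and the noise model are assumptions.

```python
# One-bit randomized quantization at a single field point: sensor i observes
# f + w_i, compares it to a uniform dither t_i ~ U[-c, c], and sends
# bit_i = 1{f + w_i > t_i}. If |f + w| <= c, then E[bit] = 1/2 + f/(2c),
# so f is recovered from the bit average without knowing the noise law.
import numpy as np

rng = np.random.default_rng(1)
c = 4.0                  # dither range, assumed to cover signal + noise
f = 1.3                  # unknown field value at one point
n_sensors = 20000

w = 0.5 * rng.standard_normal(n_sensors)   # additive observation noise
t = rng.uniform(-c, c, n_sensors)          # per-sensor random thresholds
bits = (f + w > t).astype(float)           # one bit per sensor

f_hat = (2.0 * bits.mean() - 1.0) * c      # invert the uniform-dither CDF
print(f_hat)   # approaches 1.3 as sensor density grows
```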
Hamming Compressed Sensing
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized
signals and require time-consuming recovery. In this paper, we introduce
\textit{Hamming compressed sensing} (HCS), which directly recovers a $k$-bit
quantized signal of dimension $n$ from its 1-bit measurements by invoking a
Kullback-Leibler divergence based nearest neighbor search $n$ times.
Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes
considerably less (linear) recovery time, and requires substantially fewer
measurements. Moreover, HCS recovery can accelerate the
subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of
HCS for general signals and an "HCS+dequantizer" recovery error bound for
sparse signals. Extensive numerical simulations verify the appealing accuracy,
robustness, efficiency and consistency of HCS.

Comment: 33 pages, 8 figures
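For background, the sketch below illustrates the geometry that makes nearest-neighbor decoding from 1-bit measurements plausible: for Gaussian sensing vectors, the Hamming distance between sign patterns concentrates around the angular distance between the underlying signals. This is a standard fact about Gaussian sign measurements, not the HCS decoder itself; all sizes are illustrative.

```python
# For Gaussian rows a_i, P(sign(a_i.x) != sign(a_i.z)) = angle(x, z) / pi,
# so Hamming distance between sign(Ax) and sign(Az) tracks angular distance
# and nearest-neighbor search in Hamming space approximates angular search.
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 5000
x = rng.standard_normal(n)
z = rng.standard_normal(n)

A = rng.standard_normal((m, n))
hamming = np.mean(np.sign(A @ x) != np.sign(A @ z))

cos = x @ z / (np.linalg.norm(x) * np.linalg.norm(z))
print(hamming, np.arccos(cos) / np.pi)   # the two values should be close
```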
Sub-Nyquist Sampling: Bridging Theory and Practice
Sampling theory encompasses all aspects related to the conversion of
continuous-time signals to discrete streams of numbers. The famous
Shannon-Nyquist theorem has become a landmark in the development of digital
signal processing. In modern applications, an increasing number of functions
is being pushed forward to sophisticated software algorithms, leaving only
delicate, finely tuned tasks to the circuit level.
In this paper, we review sampling strategies that target reduction of the
ADC rate below the Nyquist rate. Our survey covers classic works from the
early 1950s through recent publications from the past several years. The
prime focus is bridging theory and practice, that is, pinpointing the
potential of sub-Nyquist strategies to make the leap from the math to the hardware. In
that spirit, we integrate contemporary theoretical viewpoints, which study
signal modeling in a union of subspaces, together with a taste of practical
aspects, namely how the avant-garde modalities boil down to concrete signal
processing systems. Our hope is that this presentation style will attract the
interest of both researchers and engineers, promote the sub-Nyquist premise
toward practical applications, and encourage further research into this
exciting new frontier.

Comment: 48 pages, 18 figures, to appear in IEEE Signal Processing Magazine
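As a minimal, self-contained example of the kind of prior-driven sub-Nyquist sampling such surveys build on, the sketch below shows classic bandpass undersampling: a tone in a known band aliases to a predictable baseband location when sampled well below the Nyquist rate, so knowledge of the band position restores the true frequency. Band edges and rates are illustrative choices.

```python
# A signal confined to a known band around 100..110 Hz (Nyquist rate would
# be ~220 Hz) is sampled at fs = 44 Hz; the band folds into baseband without
# self-overlap, and the known band position maps the alias back.
import numpy as np

fs = 44.0                       # sub-Nyquist sampling rate, > 2x band width
t = np.arange(0, 2.0, 1.0 / fs)
f_tone = 104.0                  # a tone inside the known band
x = np.cos(2 * np.pi * f_tone * t)

# The tone aliases to |f_tone - round(f_tone / fs) * fs| = |104 - 88| = 16 Hz.
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
print(freqs[np.argmax(spec)])   # ~16 Hz, the predicted aliased location
```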
Sampling from a system-theoretic viewpoint: Part II - Noncausal solutions
This paper puts to use the concepts and tools introduced in Part I to address a wide spectrum of noncausal sampling and reconstruction problems. In particular, we follow the system-theoretic paradigm of using systems as signal generators to account for available information, and system norms (L2 and L∞) as performance measures. The proposed optimization-based approach recovers many known solutions, derived hitherto by different methods, as special cases under different assumptions about acquisition or reconstruction devices (e.g., polynomial and exponential cardinal splines for fixed samplers, and the Sampling Theorem and its modifications in the case when both sampler and interpolator are design parameters). We also derive new results, such as versions of the Sampling Theorem for downsampling and reconstruction from noisy measurements, the continuous-time invariance of a wide class of optimal sampling-and-reconstruction circuits, etc.
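As a small illustration of one special case mentioned above, where polynomial cardinal splines arise as optimal reconstructors for a fixed sampler, the sketch below reconstructs a signal from uniform samples with a generic cubic spline (SciPy's CubicSpline, not the paper's system-theoretic construction). The noncausality shows up in the interpolant's use of samples on both sides of each evaluation point.

```python
# Cubic spline reconstruction from uniform samples of a smooth signal.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.arange(0, 10)            # uniform sampling instants
y = np.sin(t)                   # samples of the underlying signal
rec = CubicSpline(t, y)         # noncausal: uses past and future samples

tt = np.linspace(0, 9, 500)
err = np.max(np.abs(rec(tt) - np.sin(tt)))
print(err)                      # small interpolation error
```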
Xampling: Signal Acquisition and Processing in Union of Subspaces
We introduce Xampling, a unified framework for signal acquisition and
processing of signals in a union of subspaces. The framework has two main
functions: analog compression, which narrows down the input bandwidth prior to
sampling with commercial devices, and a nonlinear detection algorithm, which
identifies the input subspace prior to conventional signal processing. A
representative union model of spectrally sparse signals serves as a test case
to study these
Xampling functions. We adopt three metrics for the choice of analog
compression: robustness to model mismatch, required hardware accuracy, and
software complexity. We conduct a comprehensive comparison between two
sub-Nyquist acquisition strategies for spectrally-sparse signals, the random
demodulator and the modulated wideband converter (MWC), in terms of these
metrics and draw operative conclusions regarding the choice of analog
compression. We then address low-rate signal processing and develop an
algorithm that enables convenient signal processing at sub-Nyquist rates
from samples obtained by the MWC. We conclude by showing that a variety of
other sampling approaches for different union classes fit nicely into our
framework.

Comment: 16 pages, 9 figures, submitted to IEEE for possible publication
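To give a flavor of how the MWC mentioned above boils down to a concrete system, here is a hedged, single-channel sketch of its front end: periodic ±1 mixing followed by low-pass filtering and low-rate sampling. All parameters (fnyq, M, the chip sequence) are illustrative, crude per-chip averaging stands in for a proper low-pass filter, and the actual MWC uses several parallel channels plus a recovery stage.

```python
# One MWC-style channel: multiply the input by a Tp-periodic +/-1 chip
# sequence, low-pass, and sample at rate fnyq / M, folding all spectral
# slices of a sparse multiband input into baseband.
import numpy as np

rng = np.random.default_rng(3)
fnyq = 1000.0                        # Nyquist rate of the wideband input (Hz)
M = 50                               # chips per period -> mixing period Tp = M/fnyq
t = np.arange(0, 1.0, 1.0 / fnyq)    # 1 s of "analog" signal on a dense grid

x = np.cos(2 * np.pi * 380.0 * t)    # one occupied band, location unknown a priori
chips = rng.choice([-1.0, 1.0], M)   # pseudorandom +/-1 sequence
p = np.tile(chips, len(t) // M)      # Tp-periodic mixing waveform

mixed = x * p                        # mixing folds slices of X(f) into baseband

# Crude low-pass + decimate: average over each chip period, i.e. sample
# the channel at rate fnyq / M = 20 Hz.
y = mixed.reshape(-1, M).mean(axis=1)
print(len(y), "low-rate samples from", len(t), "Nyquist-rate samples")
```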