Estimating Signals with Finite Rate of Innovation from Noisy Samples: A Stochastic Algorithm
As an example of the recently introduced concept of rate of innovation,
signals that are linear combinations of a finite number of Diracs per unit time
can be acquired by linear filtering followed by uniform sampling. However, in
reality, samples are rarely noiseless. In this paper, we introduce a novel
stochastic algorithm to reconstruct a signal with finite rate of innovation
from its noisy samples. Although variants of this problem have been addressed
previously, satisfactory solutions are only available for certain classes of
sampling kernels, for example, kernels that satisfy the Strang-Fix
condition. In this paper, we consider the infinite-support Gaussian kernel,
which does not satisfy the Strang-Fix condition. Other classes of kernels can
be employed. Our algorithm is based on Gibbs sampling, a Markov chain Monte
Carlo (MCMC) method. Extensive numerical simulations demonstrate the accuracy
and robustness of our algorithm.
Comment: Submitted to IEEE Transactions on Signal Processing
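The reconstruction idea can be illustrated with a minimal MCMC toy: a single Dirac observed through a Gaussian sampling kernel, estimated by random-walk Metropolis. This is an illustrative sketch only, not the authors' Gibbs sampler; the kernel width, noise level, and sampler settings are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a single Dirac at t0 with amplitude a0, observed through
# a Gaussian sampling kernel on a uniform grid, plus white noise. The kernel
# width and noise level are assumptions for this illustration; the paper
# treats streams of several Diracs with a Gibbs sampler.
sigma_k = 0.05                        # assumed Gaussian kernel width
noise_std = 0.02
t_samp = np.linspace(0.0, 1.0, 32)

def forward(t0, a0):
    return a0 * np.exp(-((t_samp - t0) ** 2) / (2 * sigma_k ** 2))

t_true, a_true = 0.37, 1.3
y = forward(t_true, a_true) + noise_std * rng.standard_normal(t_samp.size)

def log_post(t0, a0):
    if not 0.0 <= t0 <= 1.0:
        return -np.inf                # flat prior on [0, 1]
    r = y - forward(t0, a0)
    return -0.5 * np.sum(r ** 2) / noise_std ** 2

# Random-walk Metropolis over (t0, a0): a simple MCMC stand-in for the
# Gibbs sampler described in the abstract.
t0, a0 = t_samp[np.argmax(y)], 1.0    # crude data-driven start
lp = log_post(t0, a0)
chain = []
for _ in range(8000):
    t_p = t0 + 0.02 * rng.standard_normal()
    a_p = a0 + 0.05 * rng.standard_normal()
    lp_p = log_post(t_p, a_p)
    if np.log(rng.uniform()) < lp_p - lp:
        t0, a0, lp = t_p, a_p, lp_p
    chain.append((t0, a0))

t_hat, a_hat = np.mean(chain[4000:], axis=0)
print(t_hat, a_hat)   # posterior means, close to (0.37, 1.3)
```

In a full treatment the number of Diracs, their locations, and their amplitudes would all be sampled; the toy above only conveys the flavor of posterior sampling under a Gaussian kernel.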
Sampling and Reconstruction of Shapes with Algebraic Boundaries
We present a sampling theory for a class of binary images with finite rate of
innovation (FRI). Every image in our model is the restriction of
\mathds{1}_{\{p\leq0\}} to the image plane, where \mathds{1} denotes the
indicator function and p is some real bivariate polynomial. In particular, this
means that the boundaries in the image form a subset of the algebraic curve
defined by the implicit polynomial p. We show that the image parameters --i.e., the
polynomial coefficients-- satisfy a set of linear annihilation equations with
the coefficients being the image moments. The inherent sensitivity of the
moments to noise makes the reconstruction process numerically unstable and
narrows the choice of the sampling kernels to polynomial reproducing kernels.
As a remedy to these problems, we replace conventional moments with more stable
\emph{generalized moments} that are adjusted to the given sampling kernel. The
benefits are threefold: (1) it relaxes the requirements on the sampling
kernels, (2) it produces numerically robust annihilation equations, and (3) it
extends the results to images with unbounded boundaries. We
further reduce the sensitivity of the reconstruction process to noise by taking
into account the sign of the polynomial at certain points, and sequentially
enforcing measurement consistency. We consider various numerical experiments to
demonstrate the performance of our algorithm in reconstructing binary images,
including low to moderate noise levels and a range of realistic sampling
kernels.
Comment: 12 pages, 14 figures
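The structural idea, recovering the coefficients of p from a homogeneous linear system, can be illustrated with a toy that uses boundary points directly instead of the paper's image moments; the curve and degree below are assumptions made for the example.

```python
import numpy as np

# Toy illustration: the boundary of the binary image 1_{p<=0} lies on the
# zero set of a bivariate polynomial p. Given points on that curve, the
# coefficient vector of p satisfies a homogeneous linear system, so it can
# be read off from a nullspace -- the same structural idea as the paper's
# annihilation equations, which use (generalized) image moments instead.
# Assumed ground truth: p(x, y) = x^2 + y^2 - 1 (unit circle), degree 2.
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
x, y = np.cos(theta), np.sin(theta)

# Monomial basis up to total degree 2: [1, x, y, x^2, x*y, y^2]
V = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)

# The coefficients of p span the one-dimensional nullspace of V.
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]
c = c / c[-1]            # normalize so the y^2 coefficient is 1
print(np.round(c, 6))    # ~ [-1, 0, 0, 1, 0, 1], i.e. x^2 + y^2 - 1
```

The paper's contribution is precisely to build such a linear system from stable generalized moments of the samples rather than from explicit boundary points, which are not available from a sampled binary image.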
Bound and Conquer: Improving Triangulation by Enforcing Consistency
We study the accuracy of triangulation in multi-camera systems with respect
to the number of cameras. We show that, under certain conditions, the optimal
achievable reconstruction error decays quadratically as more cameras are added
to the system. Furthermore, we analyse the error decay rate of major
state-of-the-art algorithms with respect to the number of cameras. To this end,
we introduce the notion of consistency for triangulation, and show that
consistent reconstruction algorithms achieve the optimal quadratic decay, which
is asymptotically faster than some other methods. Finally, we present
simulation results supporting our findings. Our simulations have been
implemented in MATLAB and the resulting code is available in the supplementary
material.
Comment: 8 pages, 4 figures, submitted to IEEE Transactions on Pattern
Analysis and Machine Intelligence
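A generic baseline for this setting is linear ray-intersection triangulation. The toy below (assumed geometry and noise model, not the paper's consistency-enforcing estimator) shows the reconstruction error shrinking as cameras are added:

```python
import numpy as np

rng = np.random.default_rng(1)

# Midpoint-style linear triangulation: each camera i contributes a ray
# (center c_i, unit direction d_i); the estimate minimizes the summed
# squared distance to all rays.
def triangulate(centers, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

X_true = np.array([0.0, 0.0, 5.0])       # assumed scene point

def rms_error(n_cams, n_trials=200, noise=0.01):
    errs = []
    for _ in range(n_trials):
        # Cameras on a ring of radius 3 in the z = 0 plane (assumed setup).
        ang = np.linspace(0, 2 * np.pi, n_cams, endpoint=False)
        centers = np.stack(
            [3 * np.cos(ang), 3 * np.sin(ang), np.zeros(n_cams)], axis=1)
        dirs = X_true - centers
        dirs += noise * rng.standard_normal(dirs.shape)  # perturb bearings
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        errs.append(np.sum((triangulate(centers, dirs) - X_true) ** 2))
    return np.sqrt(np.mean(errs))

e4, e16 = rms_error(4), rms_error(16)
print(e4, e16)   # error shrinks as cameras are added
```

This baseline averages out noise at the generic rate; the abstract's point is that consistent estimators achieve a strictly faster, quadratic decay in the number of cameras.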
Ensemble prediction for nowcasting with a convection-permitting model—I: description of the system and the impact of radar-derived surface precipitation rates
A key strategy for improving the skill of quantitative predictions of precipitation, as well as of hazardous weather such as severe thunderstorms and flash floods, is to exploit observations of convective activity (e.g. from radar). In this paper, a convection-permitting ensemble prediction system (EPS) aimed at addressing the problems of forecasting localized weather events with relatively short predictability time scales, based on a 1.5 km grid-length version of the Met Office Unified Model, is presented. Particular attention is given to the impact of using predicted observations of radar-derived precipitation intensity in the ensemble transform Kalman filter (ETKF) used within the EPS. Our initial results, based on a 24-member ensemble of forecasts for two summer case studies, show that the convective-scale EPS produces fairly reliable forecasts of temperature, horizontal winds and relative humidity at 1 h lead time, as is evident from inspection of rank histograms. On the other hand, the rank histograms also suggest that the EPS generates too much spread for forecasts of (i) surface pressure and (ii) surface precipitation intensity. This may indicate that, for (i), the surface pressure observation error standard deviation used to generate the rank histograms is too large, while (ii) may be the result of non-Gaussian precipitation observation errors. However, further investigation is needed to better understand these findings. Finally, the inclusion of predicted observations of precipitation from radar in the 24-member EPS considered in this paper does not seem to improve forecast skill at 1 h lead time.
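The rank-histogram diagnostic used above is simple to compute: for each forecast case, record the rank of the verifying observation among the sorted ensemble members. A minimal sketch on synthetic data (not Met Office EPS output; the distributions are assumptions chosen to reproduce the flat-versus-domed contrast):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank histogram (Talagrand diagram): a flat histogram suggests a reliable
# ensemble; a dome shape indicates too much spread, the signature reported
# in the abstract for surface pressure and precipitation intensity.
n_members, n_cases = 24, 5000

def rank_histogram(ens, obs):
    ranks = np.sum(ens < obs[:, None], axis=1)        # rank in 0..n_members
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

obs = rng.standard_normal(n_cases)
ens_reliable = rng.standard_normal((n_cases, n_members))           # matched spread
ens_overspread = 2.0 * rng.standard_normal((n_cases, n_members))   # too much spread

h_rel = rank_histogram(ens_reliable, obs)
h_over = rank_histogram(ens_overspread, obs)
print(h_rel)    # roughly flat
print(h_over)   # domed: middle ranks over-populated
```

With an over-dispersive ensemble the observation rarely falls outside the member range, so the extreme rank bins empty out and the middle bins fill, exactly the dome shape the abstract interprets as excess spread.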
Sampling Sparse Signals on the Sphere: Algorithms and Applications
We propose a sampling scheme that can perfectly reconstruct a collection of
spikes on the sphere from samples of their lowpass-filtered observations.
Central to our algorithm is a generalization of the annihilating filter method,
a tool widely used in array signal processing and finite-rate-of-innovation
(FRI) sampling. The proposed algorithm can reconstruct spikes from
spatial samples. This sampling requirement improves over
previously known FRI sampling schemes on the sphere by a factor of four for
large . We showcase the versatility of the proposed algorithm by applying it
to three different problems: 1) sampling diffusion processes induced by
localized sources on the sphere, 2) shot noise removal, and 3) sound source
localization (SSL) by a spherical microphone array. In particular, we show how
SSL can be reformulated as a spherical sparse sampling problem.
Comment: 14 pages, 8 figures, submitted to IEEE Transactions on Signal
Processing
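The annihilating filter method that the paper generalizes can be sketched in its classic 1D form: spike locations are encoded in the roots of a filter that annihilates the signal's Fourier-series coefficients. A minimal noiseless sketch, with spike parameters chosen arbitrarily for illustration:

```python
import numpy as np

# Classic 1D annihilating-filter recovery: K Diracs on [0, 1) are recovered
# exactly from 2K consecutive Fourier-series coefficients of the spike stream.
t_true = np.array([0.12, 0.45, 0.80])   # assumed spike locations
a_true = np.array([1.0, 0.5, 2.0])      # assumed spike amplitudes
K = len(t_true)

u = np.exp(-2j * np.pi * t_true)
m = np.arange(2 * K)
s = (a_true * u[None, :] ** m[:, None]).sum(axis=1)   # s[m] = sum_k a_k u_k^m

# Build the K x (K+1) Toeplitz system whose nullspace is the annihilating
# filter h: sum_l h[l] * s[m - l] = 0 for m = K .. 2K-1.
T = np.array([[s[i + K - j] for j in range(K + 1)] for i in range(K)])
h = np.linalg.svd(T)[2].conj()[-1]       # nullspace vector

# The roots of h are u_k = exp(-2*pi*j*t_k); unwrap them to locations.
locs = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))
print(np.round(locs, 4))                 # ~ [0.12, 0.45, 0.8]
```

Once the locations are known, the amplitudes follow from a linear least-squares fit; the paper's contribution is carrying this machinery over to spikes on the sphere with a reduced sampling requirement.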