Reconstruction of Multidimensional Signals from Irregular Noisy Samples
We focus on a multidimensional field with uncorrelated spectrum, and study
the quality of the reconstructed signal when the field samples are irregularly
spaced and affected by independent and identically distributed noise. More
specifically, we apply linear reconstruction techniques and take the mean
square error (MSE) of the field estimate as a metric to evaluate the signal
reconstruction quality. We find that the MSE analysis can be carried out
using the closed-form expression of the eigenvalue distribution of the matrix
representing the sampling system. Unfortunately, such a distribution is still
unknown. Thus, we first derive a closed-form expression for the distribution
moments, and we find that the eigenvalue distribution tends to the
Marcenko-Pastur distribution as the field dimension goes to infinity. Finally,
by using our approach, we derive a tight approximation to the MSE of the
reconstructed field.

Comment: To appear in IEEE Transactions on Signal Processing, 200
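The limiting behavior described above can be checked numerically: the empirical eigenvalue moments of a large random matrix approach the first Marcenko-Pastur moments, m1 = 1 and m2 = 1 + beta for aspect ratio beta. A minimal sketch in NumPy, using a Gaussian matrix as a hypothetical stand-in for the actual sampling-system matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 400            # matrix size; aspect ratio beta = p / n
beta = p / n

# Gaussian matrix as a stand-in for the irregular-sampling system matrix
A = rng.standard_normal((n, p))
eig = np.linalg.eigvalsh(A.T @ A / n)

# First two Marcenko-Pastur moments for ratio beta: m1 = 1, m2 = 1 + beta
m1, m2 = eig.mean(), (eig ** 2).mean()
print(round(m1, 2), round(m2, 2))   # close to 1.0 and 1.2
```

Increasing n and p at fixed ratio tightens the agreement, mirroring the paper's large-dimension limit.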
A Levinson-Galerkin algorithm for regularized trigonometric approximation
Trigonometric polynomials are widely used for the approximation of a smooth
function from a set of nonuniformly spaced samples. If the samples are perturbed by noise, controlling
the smoothness of the trigonometric approximation becomes an essential issue to
avoid overfitting and underfitting of the data. Using the polynomial degree as
regularization parameter we derive a multi-level algorithm that iteratively
adapts to the least squares solution of optimal smoothness. The proposed
algorithm computes the solution in at most operations (
being the polynomial degree of the approximation) by solving a family of nested
Toeplitz systems. It is shown how the presented method can be extended to
multivariate trigonometric approximation. We demonstrate the performance of the
algorithm by applying it in echocardiography to the recovery of the boundary of
the left ventricle.
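The role of the polynomial degree as the regularization parameter can be illustrated with a plain least-squares sketch. This toy uses NumPy's `lstsq` on a small Vandermonde-style system rather than the paper's nested-Toeplitz Levinson-Galerkin solver, and the signal and noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonuniformly spaced points on [0, 1) with noisy samples of a smooth signal
t = np.sort(rng.uniform(0.0, 1.0, 200))
y = (np.sin(2 * np.pi * t) + 0.5 * np.cos(6 * np.pi * t)
     + 0.1 * rng.standard_normal(t.size))

def trig_fit(t, y, degree):
    """Least-squares trigonometric polynomial of the given degree."""
    cols = [np.ones_like(t)]
    for k in range(1, degree + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    V = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    return V @ coef

# The degree is the regularization parameter: 1 underfits the cos(6*pi*t)
# term, 3 matches the signal, 20 starts chasing the noise.
resids = {d: np.linalg.norm(y - trig_fit(t, y, d)) for d in (1, 3, 20)}
for d, r in resids.items():
    print(d, round(float(r), 2))
```

The residual alone always shrinks with degree, which is why a separate smoothness criterion is needed to pick the optimal degree, as the abstract describes.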
Signal Reconstruction From Nonuniform Samples Using Prolate Spheroidal Wave Functions: Theory and Application
Nonuniform sampling occurs in many applications due to imperfect sensors, mismatched clocks, or event-triggered phenomena. Indeed, natural images, biomedical responses, and sensor network transmissions have a bursty structure, so in order to obtain samples that correspond to the information content of the signal, one needs to collect more samples when the signal changes fast and fewer samples otherwise, which creates nonuniformly distributed samples. On the other hand, with the advancements in integrated circuit technology, small-scale and ultra-low-power devices are available for several applications ranging from invasive biomedical implants to environmental monitoring. However, the advancements in device technologies also require data acquisition methods to change from uniform (clock-based, synchronous) to nonuniform (clockless, asynchronous) processing. An important advancement is in the data reconstruction theorems from sub-Nyquist-rate samples, which were recently introduced as compressive sensing and which redefine the uncertainty principle.

In this dissertation, we consider the problem of signal reconstruction from nonuniform samples. Our method is based on the prolate spheroidal wave functions (PSWF), which can be used in the reconstruction of time-limited and essentially band-limited signals from missing samples, in event-driven sampling, and in the case of asynchronous sigma-delta modulation. We provide an implementable, general reconstruction framework for the issues related to reduction in the number of samples and estimation of nonuniform sample times. We also provide a reconstruction method for level-crossing sampling with regularization. Another way is to use the projection onto convex sets (POCS) method, in which we combine a time-frequency approach with the POCS iterative method and use PSWF for the reconstruction when there are missing samples. Additionally, we realize time decoding modulation for an asynchronous sigma-delta modulator, which has potential applications in low-power biomedical implants.
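A discrete sketch of the PSWF idea: the discrete prolate spheroidal sequences (DPSS) available in SciPy play the role of the PSWF basis, and a band-limited signal is recovered from a random subset of its samples by least-squares projection onto that basis. The sampling pattern and parameters below are illustrative, not the dissertation's actual framework:

```python
import numpy as np
from scipy.signal.windows import dpss

rng = np.random.default_rng(2)
M, NW = 256, 4.0
K = 2 * int(NW) - 1            # number of well-concentrated sequences

# DPSS: an orthogonal basis for essentially band-limited discrete signals,
# the discrete analogue of the prolate spheroidal wave functions
basis = dpss(M, NW, Kmax=K).T  # shape (M, K)

# Band-limited test signal living inside the DPSS subspace
c_true = rng.standard_normal(K)
x = basis @ c_true

# Observe only a random 60% of the samples (a missing-sample pattern)
keep = np.sort(rng.choice(M, size=int(0.6 * M), replace=False))

# Least-squares projection onto the subspace from the observed samples
c_hat, *_ = np.linalg.lstsq(basis[keep], x[keep], rcond=None)
err = np.linalg.norm(basis @ c_hat - x) / np.linalg.norm(x)
print(err)                     # essentially zero: x lies in the subspace
```

Because the signal lies exactly in the span of the retained sequences and the observed rows keep the system overdetermined, the projection recovers it from the incomplete samples.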
Multi-GPU maximum entropy image synthesis for radio astronomy
The maximum entropy method (MEM) is a well known deconvolution technique in
radio-interferometry. This method solves a non-linear optimization problem with
an entropy regularization term. Other heuristics such as CLEAN are faster but
highly user dependent. Nevertheless, MEM has the following advantages: it is
unsupervised, it has a statistical basis, and it achieves better resolution and
image quality under certain conditions. This work presents a high performance
GPU version of non-gridding MEM, which is tested using real and simulated data.
We propose a single-GPU and a multi-GPU implementation for single and
multi-spectral data, respectively. We also make use of the Peer-to-Peer and
Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to
be exploited transparently and efficiently. Several ALMA data sets are used to
demonstrate the effectiveness in imaging and to evaluate GPU performance. The
results show that speedups of 1000 to 5000 times over a sequential version can
be achieved, depending on data and image size. This makes it possible to
reconstruct the HD142527 CO(6-5) short baseline data set in 2.1 minutes,
instead of the 2.5 days required by a sequential version on CPU.

Comment: 11 pages, 13 figures
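The optimization at MEM's core, a chi-squared data term plus an entropy regularizer, can be sketched in 1-D with plain gradient descent. The beam shape, regularization weight, prior level m, and step size are all made-up toy values; the paper's non-gridding GPU solver is far more elaborate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-D "sky": two point sources convolved with a Gaussian beam, plus noise
n = 128
true_sky = np.zeros(n)
true_sky[40], true_sky[80] = 1.0, 0.6
beam = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
beam /= beam.sum()

def conv(img):
    """Circular convolution with the (symmetric) beam via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.fft.ifftshift(beam))))

dirty = conv(true_sky) + 0.01 * rng.standard_normal(n)

# Minimize ||conv(img) - dirty||^2 + lam * sum(img * log(img / m))
lam, m, step = 0.05, 1e-2, 0.01
img = np.full(n, 0.1)
for _ in range(3000):
    grad = 2 * conv(conv(img) - dirty) + lam * (np.log(img / m) + 1)
    img = np.clip(img - step * grad, 1e-8, None)   # entropy needs img > 0

print(int(np.argmax(img)))   # brightest pixel should land near index 40
```

The entropy term keeps the image positive and smooth where the data say little, which is the statistical basis the abstract credits to MEM.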
Group Iterative Spectrum Thresholding for Super-Resolution Sparse Spectral Selection
Recently, sparsity-based algorithms have been proposed for super-resolution
spectrum estimation. However, to achieve adequately high resolution in
real-world signal analysis, the dictionary atoms have to be close to each other
in frequency, thereby resulting in a coherent design. The popular convex
compressed sensing methods break down in the presence of high coherence and large
noise. We propose a new regularization approach to handle model collinearity
and obtain parsimonious frequency selection simultaneously. It takes advantage
of the pairing structure of sine and cosine atoms in the frequency dictionary.
A probabilistic spectrum screening is also developed for fast computation in
high dimensions. A data-resampling version of the high-dimensional Bayesian
Information Criterion is used to determine the regularization parameters.
Experiments show the efficacy and efficiency of the proposed algorithms in
challenging situations with small sample size, high frequency resolution, and
low signal-to-noise ratio.
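The pairing idea can be sketched as follows: estimate dictionary coefficients, then keep or drop each (cosine, sine) pair as a unit according to its joint magnitude. This toy uses an orthogonal integer-frequency grid and a hand-picked threshold, far easier than the coherent, super-resolved grids and data-driven tuning the paper addresses:

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy mixture of two tones sampled on a regular grid
n = 512
t = np.arange(n) / n
y = (np.sin(2 * np.pi * 13 * t) + 0.4 * np.cos(2 * np.pi * 47 * t)
     + 0.3 * rng.standard_normal(n))

# Dictionary with one (cosine, sine) pair per candidate frequency
freqs = np.arange(1, 101)
A = np.column_stack([f(2 * np.pi * k * t)
                     for k in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Group thresholding: each frequency's pair is kept or dropped as a unit,
# based on its joint magnitude, rather than shrinking atoms one by one.
mag = np.linalg.norm(coef.reshape(-1, 2), axis=1)
tau = 0.2                      # hand-picked threshold for this toy
kept = freqs[mag > tau]
print(kept)                    # the true frequencies 13 and 47 survive
```

Treating the pair jointly means a tone is detected regardless of its phase, which a per-atom threshold can miss when the energy splits between sine and cosine.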
The curvelet transform for image denoising
We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.
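The à trous filter-bank step can be illustrated in 1-D: repeatedly smooth with a dilated B3-spline kernel, take differences as wavelet planes, hard-threshold the planes, and sum back. This is a simple undecimated-wavelet denoising sketch, not the full ridgelet/curvelet pipeline, and the noise estimate and threshold are ad hoc:

```python
import numpy as np

rng = np.random.default_rng(5)

def atrous_smooth(c, j):
    """One a trous level: B3-spline kernel with holes (dilation 2**j)."""
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    out = np.zeros_like(c)
    for shift, hk in zip((-2, -1, 0, 1, 2), h):
        out += hk * np.roll(c, shift * 2**j)
    return out

def atrous_denoise(x, levels=4, k=3.0):
    """Undecimated wavelet denoising by hard-thresholding detail planes."""
    sigma = np.std(x - atrous_smooth(x, 0))   # crude noise-level estimate
    c, details = x, []
    for j in range(levels):
        c_next = atrous_smooth(c, j)
        details.append(c - c_next)            # wavelet plane at scale j
        c = c_next
    rec = c.copy()                            # coarsest approximation
    for w in details:
        rec += np.where(np.abs(w) > k * sigma, w, 0.0)
    return rec

# Noisy step: its edge survives thresholding far better than the noise does
clean = np.where(np.arange(1024) > 512, 1.0, 0.0)
x = clean + 0.1 * rng.standard_normal(1024)
err = np.std(atrous_denoise(x) - clean)
print(err)   # noticeably below the 0.1 noise level
```

Because the transform is overcomplete, no subsampling artifacts appear after thresholding, the property the abstract's "overcomplete, rather than critically sampled" philosophy is aimed at.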