Sampling and Super-resolution of Sparse Signals Beyond the Fourier Domain
Recovering a sparse signal from its low-pass projections in the Fourier
domain is a problem of broad interest in science and engineering and is
commonly referred to as super-resolution. In many cases, however, the Fourier
domain may not be the natural choice. For example, in holography, low-pass
projections of sparse signals are obtained in the Fresnel domain. Similarly,
time-varying system identification relies on low-pass projections on the space
of linear frequency modulated signals. In this paper, we study the recovery of
sparse signals from low-pass projections in the Special Affine Fourier
Transform (SAFT) domain. The SAFT parametrically generalizes a number of
well-known unitary transformations used in signal processing and optics. In
analogy with Shannon's sampling framework, we specify sampling theorems for the
recovery of sparse signals in three specific cases: (1) sampling with
arbitrary, bandlimited kernels, (2) sampling with smooth, time-limited kernels,
and (3) recovery from Gabor transform measurements linked with the SAFT
domain. Our work offers a unifying perspective on the sparse sampling problem
which is compatible with the Fourier, Fresnel and Fractional Fourier domain
based results. In deriving our results, we introduce the SAFT series (analogous
to the Fourier series) and the short-time SAFT, and study convolution theorems
that establish a convolution-multiplication property in the SAFT domain.
Comment: 42 pages, 3 figures, manuscript under review.
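The classical Fourier-domain special case of this recovery problem admits a compact illustration via the annihilating-filter (Prony) method; the SAFT setting generalizes the complex-exponential kernel used below. A minimal numpy sketch with illustrative spike locations and amplitudes (not taken from the paper):

```python
import numpy as np

def recover_spikes(X, K, T=1.0):
    """Prony / annihilating-filter recovery of K Diracs from the 2K+1
    low-pass Fourier samples X[m] = sum_k a_k * exp(-2j*pi*m*t_k/T)."""
    X = np.asarray(X, dtype=complex)
    # Annihilating filter h satisfies sum_l h[l] * X[m-l] = 0 for m = K..2K.
    A = np.array([X[m - np.arange(K + 1)] for m in range(K, 2 * K + 1)])
    h = np.linalg.svd(A)[2][-1].conj()          # null vector of A
    # Spike locations come from the roots of the annihilating polynomial.
    t = np.sort(np.mod(-T * np.angle(np.roots(h)) / (2 * np.pi), T))
    # Amplitudes follow by least squares on the Vandermonde system.
    V = np.exp(-2j * np.pi * np.outer(np.arange(len(X)), t) / T)
    a = np.linalg.lstsq(V, X, rcond=None)[0]
    return t, a.real

# Two spikes on [0, 1), recovered from only 5 low-pass Fourier samples.
t_true, a_true = np.array([0.2, 0.7]), np.array([1.0, 0.5])
m = np.arange(5)
X = np.exp(-2j * np.pi * np.outer(m, t_true)) @ a_true
t_est, a_est = recover_spikes(X, K=2)
```

With noiseless data, 2K+1 low-pass coefficients suffice for exact recovery of K spikes; this is the Fourier baseline that the paper's SAFT sampling theorems extend to other kernels.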
Functional deconvolution in a periodic setting: Uniform case
We extend deconvolution in a periodic setting to deal with functional data.
The resulting functional deconvolution model can be viewed as a generalization
of a multitude of inverse problems in mathematical physics where one needs to
recover initial or boundary conditions on the basis of observations from a
noisy solution of a partial differential equation. In the case when it is
observed at a finite number of distinct points, the proposed functional
deconvolution model can also be viewed as a multichannel deconvolution model.
We derive minimax lower bounds for the L²-risk in the proposed functional
deconvolution model when the unknown function is assumed to belong to a Besov ball and
the blurring function is assumed to possess some smoothness properties,
including both regular-smooth and super-smooth convolutions. Furthermore, we
propose an adaptive wavelet estimator of the unknown function that is asymptotically
optimal (in the minimax sense), or near-optimal within a logarithmic factor, in
a wide range of Besov balls. In addition, we consider a discretization of the
proposed functional deconvolution model and investigate when the availability
of continuous data gives advantages over observations at the asymptotically
large number of points. As an illustration, we discuss particular examples for
both continuous and discrete settings.
Comment: Published at http://dx.doi.org/10.1214/07-AOS552 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
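The Fourier-inversion idea behind deconvolution in a periodic setting can be sketched numerically. The spectral-cutoff estimator below is a deliberately crude stand-in for the paper's adaptive wavelet estimators, and the kernel, noise level, and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.arange(n) / n
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(6 * np.pi * x)   # unknown periodic signal
g = np.exp(-20.0 * np.minimum(x, 1 - x))                  # periodic blurring kernel
g /= g.sum()
y = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))   # circular convolution g * f
y += 0.01 * rng.standard_normal(n)                        # noisy observation

# Spectral-cutoff deconvolution: invert only the well-conditioned frequencies,
# since dividing by tiny Fourier coefficients of g would amplify the noise.
F_g, F_y = np.fft.fft(g), np.fft.fft(y)
keep = np.abs(F_g) > 0.05                                 # illustrative threshold
F_est = np.where(keep, F_y / np.where(keep, F_g, 1.0), 0.0)
f_est = np.real(np.fft.ifft(F_est))
```

The cutoff plays the role that adaptive wavelet thresholding plays in the paper: it trades bias at the discarded frequencies against noise amplification at the inverted ones.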
Nonperiodic sampling theorems and filter banks
Sampling theorems provide exact interpolation formulas for bandlimited
functions. They play a fundamental role in signal processing. A function is called
bandlimited if its Fourier transform vanishes outside a compact set. A generalized
sampling theorem in the framework of locally compact Abelian groups is presented.
Sampling sets are finite unions of cosets of closed discrete subgroups. Such sampling
sets are not necessarily periodic and therefore cannot be treated within the classical
periodic framework. An exact reconstruction formula is found for the case in which
the support of the Fourier transform of the function to be reconstructed satisfies
certain conditions.
The notion of a filter bank is generalized in the framework of locally compact Abelian
groups. Conditions for perfect reconstruction are derived. It is shown that this
theory includes some generalized sampling theorems and results on multisensor deconvolution
problems as special cases.
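On the simplest locally compact Abelian group, the circle R/Z, a sampling set of this union-of-cosets type can be tried out directly: a trigonometric polynomial (a "bandlimited" function on R/Z) is recovered exactly from samples on two cosets of the discrete subgroup (1/4)Z. A small numpy sketch with illustrative bandwidth and coset shift:

```python
import numpy as np

rng = np.random.default_rng(1)
# Bandlimited function on R/Z: trigonometric polynomial with |k| <= B.
B = 3
ks = np.arange(-B, B + 1)
coeffs = rng.standard_normal(2 * B + 1) + 1j * rng.standard_normal(2 * B + 1)
signal = lambda t: np.exp(2j * np.pi * np.outer(t, ks)) @ coeffs

# Sampling set: union of two cosets of the closed discrete subgroup (1/4)Z,
# i.e. {j/4} and {j/4 + 0.1} -- nonuniform, not a single lattice.
base = np.arange(4) / 4
t_samp = np.concatenate([base, base + 0.1])
y = signal(t_samp)

# Exact reconstruction: solve for the 2B+1 Fourier coefficients
# (8 samples >= 7 unknowns, and the sampling nodes are distinct).
E = np.exp(2j * np.pi * np.outer(t_samp, ks))
c_hat = np.linalg.lstsq(E, y, rcond=None)[0]
```

Here reconstruction reduces to a finite linear system because the spectrum is a finite set; the paper's theorems handle the general LCA-group setting with conditions on the spectral support instead.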
Wavelet-based digital image restoration
Digital image restoration is a fundamental image processing problem with underlying physical motivations. A digital imaging system is unable to generate a continuum of ideal pointwise measurements of the input scene. Instead, the acquired digital image is an array of measured values. Generally, algorithms can be developed to remove a significant part of the error associated with these measured image values, provided a proper model of the image acquisition system is used as the basis for the algorithm development. The continuous/discrete/continuous (C/D/C) model has proven to be a better alternative to the relatively incomplete image acquisition models commonly used in image restoration. Because it is more comprehensive, the C/D/C model offers a basis for developing significantly better restoration filters. The C/D/C model uses Fourier domain techniques to account for system blur at the image formation level, for the potentially important effects of aliasing, for additive noise, and for blur at the image reconstruction level.

This dissertation develops a wavelet-based representation for the C/D/C model, including a theoretical treatment of convolution and sampling. This wavelet-based C/D/C model representation is used to formulate the image restoration problem as a generalized least squares problem. The use of wavelets discretizes the image acquisition kernel, and in this way the image restoration problem also becomes discrete. The generalized least squares problem is solved using the singular value decomposition. Because image restoration is only meaningful in the presence of noise, restoration solutions must deal with the issue of noise amplification. In this dissertation the treatment of noise is addressed with a restoration parameter related to the singular values of the discrete image acquisition kernel. The restoration procedure is assessed using simulated scenes and real scenes with various degrees of smoothness, in the presence of noise. All these scenes are restoration-challenging because they have a considerable amount of spatial detail at small scale. An empirical procedure that provides a good initial guess of the restoration parameter is devised.
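The role of such a restoration parameter can be illustrated in one dimension with a truncated-SVD solution of a discretized blur model. The Gaussian kernel, noise level, and threshold below are assumed stand-ins, not the dissertation's C/D/C acquisition model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
i = np.arange(n)
# Stand-in for a discretized acquisition kernel: row-normalized Gaussian blur.
H = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
H /= H.sum(axis=1, keepdims=True)

# A one-dimensional "scene" and its blurred, noisy observation.
x = np.exp(-0.5 * ((i - 20) / 2.0) ** 2) + 2.0 * np.exp(-0.5 * ((i - 45) / 3.0) ** 2)
y = H @ x + 1e-3 * rng.standard_normal(n)

# Truncated-SVD restoration: invert only singular values above a restoration
# parameter tau, which caps the noise amplification factor at 1/tau.
U, s, Vt = np.linalg.svd(H)
keep = s > 0.02                     # illustrative restoration parameter
x_hat = Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])
```

Raising the threshold suppresses more noise but discards more of the scene's small-scale detail, which is exactly the trade-off the restoration parameter controls.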
Streaming Reconstruction from Non-uniform Samples
We present an online algorithm for reconstructing a signal from a set of
non-uniform samples. By representing the signal using compactly supported basis
functions, we show how estimating the expansion coefficients using
least-squares can be implemented in a streaming manner: as batches of samples
over subsequent time intervals are presented, the algorithm forms an initial
estimate of the signal over the current sampling interval and then updates its estimates
over previous intervals. We give conditions under which this reconstruction
procedure is stable and show that the least-squares estimates in each interval
converge exponentially, meaning that the updates can be performed with finite
memory with almost no loss in accuracy. We also discuss how our framework
extends to more general types of measurements, including time-varying
convolution with a compactly supported kernel.
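A stripped-down version of the least-squares step can be sketched with linear B-splines as the compactly supported basis. Here the streaming aspect is reduced to accumulating the normal equations batch by batch, without the exponentially convergent finite-memory updates of the paper; the knot spacing, batch layout, and test signal are illustrative:

```python
import numpy as np

def hat(t, k, h):
    """Linear B-spline centered at k*h, supported on [(k-1)*h, (k+1)*h]."""
    return np.maximum(0.0, 1.0 - np.abs(t / h - k))

h, K = 0.1, 11                         # knots 0, 0.1, ..., 1.0 cover [0, 1]
f = lambda t: np.sin(2 * np.pi * t) + 0.3 * t   # signal to reconstruct

rng = np.random.default_rng(3)
G = np.zeros((K, K))                   # accumulated normal equations G c = b
b = np.zeros(K)
for batch in range(5):                 # batches of non-uniform samples arrive in time order
    t = np.sort(rng.uniform(batch * 0.2, (batch + 1) * 0.2, size=30))
    A = np.stack([hat(t, k, h) for k in range(K)], axis=1)
    G += A.T @ A                       # each batch touches only a few coefficients,
    b += A.T @ f(t)                    # because the basis functions have compact support
c = np.linalg.solve(G, b)

t_grid = np.linspace(0, 1, 101)
recon = np.stack([hat(t_grid, k, h) for k in range(K)], axis=1) @ c
```

Compact support is what makes the streaming formulation work: G is banded, so a new batch of samples only perturbs the coefficients of nearby intervals, which is why the paper's estimates over past intervals converge and can be frozen with finite memory.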
A probabilistic compressive sensing framework with applications to ultrasound signal processing
The field of Compressive Sensing (CS) has provided algorithms to reconstruct signals from a much lower number of measurements than specified by the Nyquist-Shannon theorem. There are two fundamental concepts underpinning the field of CS. The first is the use of random transformations to project high-dimensional measurements onto a much lower-dimensional domain. The second is the use of sparse regression to reconstruct the original signal. This assumes that a sparse representation exists for this signal in some known domain, manifested by a dictionary. The original formulation for CS specifies the use of an ℓ1-penalised regression method, the Lasso. Whilst this has worked well in the literature, it suffers from two main drawbacks. First, the level of sparsity must be specified by the user, or tuned using sub-optimal approaches. Secondly, and most importantly, the Lasso is not probabilistic; it cannot quantify uncertainty in the signal reconstruction.

This paper aims to address these two issues; it presents a framework for performing compressive sensing based on sparse Bayesian learning. Specifically, the proposed framework introduces the use of the Relevance Vector Machine (RVM), an established sparse kernel regression method, as the signal reconstruction step within the standard CS methodology. The framework is developed with ultrasound signal processing in mind, and so examples and results of compression and reconstruction of ultrasound pulses are presented. The dictionary learning strategy is key to the successful application of any CS framework, and even more so in the probabilistic setting used here; therefore, a detailed discussion of this step is also included in the paper. The key contributions of this paper are a framework for a Bayesian approach to compressive sensing which is computationally efficient, alongside a discussion of uncertainty quantification in CS and different strategies for dictionary learning.
The methods are demonstrated on an example dataset collected from an aerospace composite panel. Quantifying uncertainty in the signal reconstruction reveals that it grows as the level of compression increases. This is key when deciding appropriate compression levels, or whether to trust a reconstructed signal in applications of engineering and scientific interest.
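The two ingredients of CS described above, a random projection followed by sparse regression, can be sketched end to end in a few lines. For compactness the sketch swaps the paper's RVM reconstruction step for the non-probabilistic baseline it discusses, the Lasso, solved by iterative soft thresholding (ISTA); the dictionary, problem sizes, and regularization weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 64, 5                     # signal length, measurements, sparsity
D = np.linalg.qr(rng.standard_normal((n, n)))[0]   # illustrative orthonormal dictionary
w = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
w[support] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
x = D @ w                                # signal is k-sparse in the dictionary

# Ingredient 1: random projection to m << n measurements.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Ingredient 2: sparse regression. ISTA for the Lasso
#   min_w 0.5 * ||y - A w||^2 + lam * ||w||_1,  with A = Phi @ D.
A = Phi @ D
lam = 0.05
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
w_hat = np.zeros(n)
for _ in range(3000):
    z = w_hat - A.T @ (A @ w_hat - y) / L          # gradient step
    w_hat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

x_hat = D @ w_hat                        # reconstructed signal
```

The Lasso returns only a point estimate; the paper's point is that replacing this step with sparse Bayesian learning yields posterior uncertainty on x_hat, which the point estimate cannot provide.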
On convergence rates equivalency and sampling strategies in functional deconvolution models
Using the asymptotical minimax framework, we examine convergence rates
equivalency between a continuous functional deconvolution model and its
real-life discrete counterpart over a wide range of Besov balls and for the
L²-risk. For this purpose, all possible models are divided into three
groups. For the models in the first group, which we call uniform, the
convergence rates in the discrete and the continuous models coincide no matter
which sampling scheme is chosen, and hence the replacement of the discrete
model by its continuous counterpart is legitimate. For the models in the second
group, to which we refer as regular, one can point out the best sampling
strategy in the discrete model, but not every sampling scheme leads to the same
convergence rates; there are at least two sampling schemes which deliver
different convergence rates in the discrete model (i.e., at least one of the
discrete models leads to convergence rates that are different from the
convergence rates in the continuous model). The third group consists of models
for which, in general, it is impossible to devise the best sampling strategy;
we call these models irregular. We formulate the conditions when each of these
situations takes place. In the regular case, we not only point out the number
and the selection of sampling points which deliver the fastest convergence
rates in the discrete model but also investigate when, in the case of an
arbitrary sampling scheme, the convergence rates in the continuous model
coincide or do not coincide with the convergence rates in the discrete model.
We also study what happens if one chooses a uniform, or a more general
pseudo-uniform, sampling scheme which can be viewed as an intuitive replacement
of the continuous model.
Comment: Published at http://dx.doi.org/10.1214/09-AOS767 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Applied Harmonic Analysis and Sparse Approximation
Efficiently analyzing functions, in particular multivariate functions, is a key problem in applied mathematics. The area of applied harmonic analysis has a significant impact on this problem by providing methodologies both for theoretical questions and for a wide range of applications in technology and science, such as image processing. Approximation theory, in particular the branch of the theory of sparse approximations, is closely intertwined with this area, with many recent exciting developments at the intersection of the two. Research topics typically also involve related areas such as convex optimization, probability theory, and Banach space geometry. The workshop was the continuation of a first event in 2012 and was intended to bring together world-leading experts in these areas, to report on recent developments, and to foster new developments and collaborations.