A Levinson-Galerkin algorithm for regularized trigonometric approximation
Trigonometric polynomials are widely used for the approximation of a smooth
function from a set of nonuniformly spaced samples. If the samples are
perturbed by noise, controlling the smoothness of the trigonometric
approximation becomes an essential issue to avoid overfitting and underfitting
of the data. Using the polynomial degree as the regularization parameter, we
derive a multi-level algorithm that iteratively adapts to the least squares
solution of optimal smoothness. The proposed algorithm computes the solution
efficiently, at a cost governed by the polynomial degree of the approximation,
by solving a family of nested Toeplitz systems. It is shown how the presented
method can be extended to
multivariate trigonometric approximation. We demonstrate the performance of the
algorithm by applying it in echocardiography to the recovery of the boundary of
the Left Ventricle
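The degree-as-regularizer idea can be illustrated with a plain least squares fit; the sketch below uses a held-out split to pick the degree a posteriori (a stand-in for the paper's multi-level rule, not its Levinson-type solver), and all names and parameter choices are illustrative:

```python
import numpy as np

def trig_design(t, degree):
    """Design matrix of real trigonometric polynomials of the given degree."""
    cols = [np.ones_like(t)]
    for k in range(1, degree + 1):
        cols.append(np.cos(2 * np.pi * k * t))
        cols.append(np.sin(2 * np.pi * k * t))
    return np.column_stack(cols)

def fit_trig(t, y, degree):
    """Least squares trigonometric approximation from nonuniform samples."""
    coef, *_ = np.linalg.lstsq(trig_design(t, degree), y, rcond=None)
    return coef

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 200))                        # nonuniform nodes
target = lambda t: np.sin(2 * np.pi * t) + 0.5 * np.cos(4 * np.pi * t)
y = target(t) + 0.1 * rng.standard_normal(t.size)          # noisy samples

# pick the polynomial degree by validation error on held-out samples
train, val = np.arange(0, 200, 2), np.arange(1, 200, 2)
errs = []
for d in range(1, 20):
    c = fit_trig(t[train], y[train], d)
    errs.append(np.linalg.norm(trig_design(t[val], d) @ c - y[val]))
best = 1 + int(np.argmin(errs))
```

Too small a degree underfits (the second harmonic is missed), while a large degree fits the noise; the validation error is minimized near the true degree of the target.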
Measure What Should be Measured: Progress and Challenges in Compressive Sensing
Is compressive sensing overrated? Or can it live up to our expectations? What
will come after compressive sensing and sparsity? And what has Galileo Galilei
got to do with it? Compressive sensing has taken the signal processing
community by storm. A large corpus of research devoted to the theory and
numerics of compressive sensing has been published in the last few years.
Moreover, compressive sensing has inspired and initiated intriguing new
research directions, such as matrix completion. Potential new applications
emerge at a dazzling rate. Yet some important theoretical questions remain
open, and seemingly obvious applications keep escaping the grip of compressive
sensing. In this paper I discuss some of the recent progress in compressive
sensing and point out key challenges and opportunities as the area of
compressive sensing and sparse representations keeps evolving. I also attempt
to assess the long-term impact of compressive sensing
Approximation of dual Gabor frames, window decay, and wireless communications
We consider three problems for Gabor frames that have recently received much
attention. The first problem concerns the approximation of dual Gabor frames
by finite-dimensional methods. Utilizing Wexler-Raz type duality relations we
derive a method to approximate the dual Gabor frame that is much simpler than
previously proposed techniques. Furthermore, it enables us to give estimates
for the approximation rate when the dimension of the finite model approaches
infinity. The second problem concerns the relation between the decay of the
window function and that of its dual window. Based on results on commutative
Banach algebras and Laurent operators we derive a general condition under
which the dual window inherits the decay properties of the original window. The third
problem concerns the design of pulse shapes for orthogonal frequency division
multiplexing (OFDM) systems for time- and frequency-dispersive channels. In
particular, we provide a theoretical foundation for a recently proposed
algorithm to construct orthogonal transmission functions that are well
localized in the time-frequency plane
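In a finite model the canonical dual window can be computed directly by inverting the frame operator, which is the baseline that finite-dimensional approximation methods are measured against. The lattice parameters and Gaussian window below are illustrative choices, not the paper's construction:

```python
import numpy as np

N, a, b = 12, 2, 3   # signal length, time step, frequency step (redundancy N/(a*b) = 2)

def tf_shift(g, k, l, N):
    """Time-frequency shift: translate by k samples, modulate by frequency l/N."""
    return np.exp(2j * np.pi * l * np.arange(N) / N) * np.roll(g, k)

g = np.exp(-np.pi * (np.arange(N) - N / 2) ** 2 / N)        # Gaussian window
lattice = [(k, l) for k in range(0, N, a) for l in range(0, N, b)]
atoms = np.array([tf_shift(g, k, l, N) for k, l in lattice])

# frame operator S f = sum_j <f, g_j> g_j, assembled as an N x N matrix
S = atoms.T @ atoms.conj()
gamma = np.linalg.solve(S, g)                                # canonical dual window
dual_atoms = np.array([tf_shift(gamma, k, l, N) for k, l in lattice])

# perfect reconstruction: f = sum_j <f, gamma_j> g_j
f = np.random.default_rng(1).standard_normal(N)
f_rec = atoms.T @ (dual_atoms.conj() @ f)
```

Because the frame operator commutes with the lattice time-frequency shifts, the dual frame is again a Gabor system, generated by the single window `gamma`.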
Rates of convergence for the approximation of dual shift-invariant systems
A shift-invariant system is a collection of translates of a given set of
generating functions. Such systems play an important role in time-frequency
analysis and digital signal processing. A principal problem is to find a dual
system such that each function in the underlying space can be expanded with
respect to the given system. The mathematical theory usually addresses this
problem in infinite dimensions (typically in an infinite-dimensional function
or sequence space), whereas numerical methods have to operate with a
finite-dimensional model. Exploiting the link between the frame operator and
Laurent operators with matrix-valued symbol, we apply the finite section
method to show that the dual functions obtained by solving a
finite-dimensional problem converge to the dual functions of the original
infinite-dimensional problem. For compactly supported generators (FIR filter banks) we
prove an exponential rate of convergence and derive explicit expressions for
the involved constants. Further we investigate under which conditions one can
replace the discrete model of the finite section method by the periodic
discrete model, which is used in many numerical procedures. Again we provide
explicit estimates for the speed of convergence. Some remarks on tight frames
complete the paper
Painless Breakups -- Efficient Demixing of Low Rank Matrices
Assume we are given a sum of linear measurements of several low-rank matrices.
Under which conditions is it possible to extract (demix) the individual
matrices from the single measurement vector, and can we do the demixing
numerically efficiently? We present two computationally efficient algorithms
based on hard thresholding to solve this low rank demixing problem. We prove
that under suitable conditions these algorithms are guaranteed to converge to
the correct solution at a linear rate. We discuss applications in connection
with quantum tomography and the Internet-of-Things. Numerical simulations
demonstrate empirically the performance of the proposed algorithms
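The hard-thresholding primitive behind such algorithms projects a matrix onto the set of rank-r matrices via a truncated SVD. The loop below is a generic iterative hard thresholding sketch for a single low-rank matrix (the paper treats sums of several); all names and parameters are illustrative assumptions:

```python
import numpy as np

def hard_threshold(M, r):
    """Project M onto rank-r matrices: keep the r largest singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(2)
n, r, m = 10, 2, 300
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r ground truth
A = rng.standard_normal((m, n * n)) / np.sqrt(m)               # measurement operator
y = A @ X.ravel()                                              # linear measurements

# iterative hard thresholding: gradient step on ||A vec(Z) - y||^2, then project
Z, mu = np.zeros((n, n)), 0.5
for _ in range(200):
    grad = (A.T @ (A @ Z.ravel() - y)).reshape(n, n)
    Z = hard_threshold(Z - mu * grad, r)
```

Each iterate stays exactly rank-r, and with enough random measurements relative to the degrees of freedom of a rank-r matrix the iteration contracts toward the ground truth at a linear rate.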
A multi-level algorithm for the solution of moment problems
We study numerical methods for the solution of general linear moment
problems, where the solution belongs to a family of nested subspaces of a
Hilbert space. Multi-level algorithms, based on the conjugate gradient method
and the Landweber--Richardson method, are proposed that determine the "optimal"
reconstruction level a posteriori from quantities that arise during the
numerical calculations. As an important example we discuss the reconstruction
of band-limited signals from irregularly spaced noisy samples, when the actual
bandwidth of the signal is not available. Numerical examples show the
usefulness of the proposed algorithms
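A minimal sketch of the a-posteriori idea, assuming a plain Landweber iteration stopped by the discrepancy principle (the paper's multi-level rule over nested subspaces is more refined); all parameter choices are illustrative:

```python
import numpy as np

def landweber(A, b, delta, tau=1.5, max_iter=10000):
    """Landweber iteration x <- x + omega * A^T (b - A x), stopped a posteriori
    once the residual drops below tau * delta (discrepancy principle)."""
    omega = 1.0 / np.linalg.norm(A, 2) ** 2    # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:   # stop when residual matches noise level
            break
        x = x + omega * (A.T @ r)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 20))
x_true = rng.standard_normal(20)
noise = 0.05 * rng.standard_normal(80)
b = A @ x_true + noise
x = landweber(A, b, delta=np.linalg.norm(noise))
```

Stopping once the residual reaches the noise level prevents the iteration from fitting the noise, which is the same overfitting/underfitting trade-off the multi-level algorithms resolve without knowing the signal's true reconstruction level in advance.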
Sparsity Enhanced Decision Feedback Equalization
For single-carrier systems with frequency domain equalization, decision
feedback equalization (DFE) performs better than linear equalization and has
much lower computational complexity than sequence maximum likelihood detection.
The main challenge in DFE is the feedback symbol selection rule. In this paper,
we give a theoretical framework for a simple, sparsity based thresholding
algorithm. We feed back multiple symbols in each iteration, so the algorithm
converges fast and has a low computational cost. We show how the initial
solution can be obtained via convex relaxation instead of linear equalization,
and illustrate the impact that the choice of the initial solution has on the
bit error rate performance of our algorithm. The algorithm is applicable in
several existing wireless communication systems (SC-FDMA, MC-CDMA, MIMO-OFDM).
Numerical results illustrate significant performance improvement in terms of
bit error rate compared to the MMSE solution
A randomized Kaczmarz algorithm with exponential convergence
The Kaczmarz method for solving linear systems of equations is an iterative
algorithm that has found many applications ranging from computer tomography to
digital signal processing. Despite the popularity of this method, useful
theoretical estimates for its rate of convergence are still scarce. We
introduce a randomized version of the Kaczmarz method for consistent,
overdetermined linear systems and we prove that it converges with expected
exponential rate. Furthermore, this is the first solver whose rate does not
depend on the number of equations in the system. The solver does not even need
to know the whole system, but only a small random part of it. It thus
outperforms all previously known methods on general extremely overdetermined
systems. Even for moderately overdetermined systems, numerical simulations as
well as theoretical analysis reveal that our algorithm can converge faster than
the celebrated conjugate gradient algorithm. Furthermore, our theory and
numerical simulations confirm a prediction of Feichtinger et al. in the context
of reconstructing bandlimited functions from nonuniform sampling
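The randomized update itself is only a few lines: sample row i with probability proportional to its squared norm, then project the iterate onto the hyperplane defined by that equation. A minimal sketch (variable names are illustrative):

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system A x = b.
    Rows are sampled with probability proportional to their squared norm."""
    rng = np.random.default_rng(seed)
    row_norms_sq = np.einsum("ij,ij->i", A, A)
    probs = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        i = rng.choice(A.shape[0], p=probs)
        # project x onto the hyperplane {z : <a_i, z> = b_i}
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))   # strongly overdetermined system
x_true = rng.standard_normal(20)
b = A @ x_true                        # consistent right-hand side
x = randomized_kaczmarz(A, b)
```

Note that each step touches a single row of A, so the solver never needs the whole system in memory, and the expected error contracts by a fixed factor per iteration that depends only on the scaled condition number, not on the number of equations.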
Fast multi-dimensional scattered data approximation with Neumann boundary conditions
An important problem in applications is the approximation of a function from a
finite set of randomly scattered data. A common and powerful approach is to
construct a trigonometric least squares approximation based on a set of
complex exponentials. This leads to fast numerical
algorithms, but suffers from disturbing boundary effects due to the underlying
periodicity assumption on the data, an assumption that is rarely satisfied in
practice. To overcome this drawback we impose Neumann boundary conditions on
the data. This implies the use of cosine polynomials as basis
functions. We show that scattered data approximation using cosine polynomials
leads to a least squares problem involving certain Toeplitz+Hankel matrices. We
derive estimates on the condition number of these matrices. Unlike other
Toeplitz+Hankel matrices, the Toeplitz+Hankel matrices arising in our context
cannot be diagonalized by the discrete cosine transform, but they still allow a
fast matrix-vector multiplication via DCT which gives rise to fast conjugate
gradient type algorithms. We show how the results can be generalized to higher
dimensions. Finally we demonstrate the performance of the proposed method by
applying it to a two-dimensional geophysical scattered data problem
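The cosine-basis fit itself is a small least squares problem; the sketch below forms the design matrix directly instead of exploiting the Toeplitz+Hankel structure and the fast DCT-based multiplication described above, and all names are illustrative:

```python
import numpy as np

def cosine_design(x, degree):
    """Design matrix of cosine polynomials cos(pi*k*x) at scattered x in [0, 1]."""
    k = np.arange(degree + 1)
    return np.cos(np.pi * np.outer(x, k))

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 300)                        # randomly scattered nodes
f = lambda x: 1.0 + 0.7 * np.cos(np.pi * x) - 0.3 * np.cos(3 * np.pi * x)
y = f(x)                                          # samples of a cosine polynomial

A = cosine_design(x, degree=5)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # least squares fit
```

In the normal-equations form of this problem, the Gram matrix A^T A is exactly the kind of Toeplitz+Hankel matrix whose conditioning and fast matrix-vector multiplication the paper analyzes.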