The Generalized Operator Based Prony Method
The generalized Prony method introduced by Peter & Plonka (2013) is a
reconstruction technique for a large variety of sparse signal models that can
be represented as sparse expansions into eigenfunctions of a linear operator
A. However, this procedure requires the evaluation of higher powers of the
linear operator, which are often expensive to provide.
In this paper we propose two important extensions of the generalized Prony
method that substantially simplify the acquisition of the needed samples and
at the same time can improve the numerical stability of the method. The first
extension concerns the change of operators from A to φ(A), where φ is an
analytic function, while A and φ(A) possess the same set of eigenfunctions.
The goal is then to choose φ such that the powers of φ(A) are much simpler to
evaluate than the powers of A. The second extension concerns the choice of
the sampling functionals. We show how new sets of different sampling
functionals can be applied with the goal of reducing the needed number of
powers of the operator A (resp. φ(A)) in the sampling scheme and simplifying
the acquisition process for the recovery method.
Comment: 31 pages, 2 figures
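The operator-based method generalizes the classical Prony method, which the following minimal sketch illustrates (the signal parameters and variable names below are illustrative choices, not taken from the paper): a sparse exponential sum f(k) = Σ_j c_j z_j^k is recovered from 2M equispaced samples by solving a Hankel system for the annihilating (Prony) polynomial, whose roots are the unknown z_j.

```python
import numpy as np

# Sketch of the classical Prony method: recover a sparse exponential sum
# f(k) = sum_j c_j * z_j^k from the 2M samples f(0), ..., f(2M-1).
# The signal below is a toy example, not from the paper.
M = 2                                   # sparsity (number of terms)
z_true = np.array([0.9, -0.5])          # unknown "eigenvalues"
c_true = np.array([2.0, 1.0])           # unknown coefficients

k = np.arange(2 * M)
samples = (c_true[None, :] * z_true[None, :] ** k[:, None]).sum(axis=1)

# Solve the Hankel system for the Prony polynomial
# p(z) = z^M + p_{M-1} z^{M-1} + ... + p_0, which annihilates the samples:
# sum_{l=0}^{M} p_l * f(i+l) = 0 for i = 0, ..., M-1 (with p_M = 1).
H = np.array([[samples[i + j] for j in range(M)] for i in range(M)])
rhs = -samples[M:2 * M]
p = np.linalg.solve(H, rhs)

# The roots of the Prony polynomial are the parameters z_j.
z_rec = np.sort(np.roots(np.concatenate(([1.0], p[::-1]))).real)

# The coefficients follow from the overdetermined Vandermonde system.
V = z_rec[None, :] ** k[:, None]
c_rec, *_ = np.linalg.lstsq(V, samples, rcond=None)
```

The paper's extensions keep this algebraic core but change how the sample sequence is generated, replacing powers of the operator A by powers of a cheaper-to-apply φ(A).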
Nonlinear approximation in bounded orthonormal product bases
We present a dimension-incremental algorithm for the nonlinear approximation
of high-dimensional functions in an arbitrary bounded orthonormal product
basis. Our goal is to detect a suitable truncation of the basis expansion of
the function, where the corresponding basis support is assumed to be unknown.
Our method is based on point evaluations of the considered function and
adaptively builds an index set of a suitable basis support such that the
approximately largest basis coefficients are still included. For this purpose,
the algorithm only needs a suitable search space that contains the desired
index set. Throughout the work, there are various minor modifications of the
algorithm discussed as well, which may yield additional benefits in several
situations. For the first time, we provide a proof of a detection guarantee for
such an index set in the function approximation case under certain assumptions
on the sub-methods used within our algorithm, which can be used as a foundation
for similar statements in various other situations as well. Some numerical
examples in different settings underline the effectiveness and accuracy of our
method.
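As a loose illustration of the underlying idea (not the paper's dimension-incremental algorithm; the basis, test function, search space and threshold below are invented for the example), the sparse support of a function in a bounded orthonormal product basis can be detected from point evaluations by computing candidate coefficients and keeping only the approximately largest ones:

```python
import numpy as np

# Toy support detection in the product Chebyshev basis on [-1, 1]^2.
# The candidate search space is all index pairs (k1, k2) with k1, k2 < n.
n = 8  # per-dimension truncation of the search space

def f(x, y):
    # Sparse in the product basis: 1.5*T_2(x)*T_0(y) + 0.5*T_1(x)*T_3(y).
    return (1.5 * np.cos(2 * np.arccos(x))
            + 0.5 * np.cos(np.arccos(x)) * np.cos(3 * np.arccos(y)))

# Chebyshev-Gauss nodes give exact discrete orthogonality up to degree n-1.
nodes = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
X, Y = np.meshgrid(nodes, nodes, indexing="ij")
F = f(X, Y)  # point evaluations of the considered function

# Discrete Chebyshev transform in each dimension: T[k, i] = T_k(nodes[i]).
T = np.cos(np.outer(np.arange(n), np.arccos(nodes)))
C = (2 / n) * T @ F @ ((2 / n) * T).T
C[0, :] /= 2   # correct the normalization of the degree-0 row/column
C[:, 0] /= 2

# Keep the approximately largest coefficients: the detected basis support.
support = {tuple(idx) for idx in np.argwhere(np.abs(C) > 1e-8)}
```

The actual algorithm avoids this full tensor-grid computation by building the index set dimension by dimension, which is what makes the high-dimensional case tractable.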
Parametric spectral analysis: scale and shift
We introduce the paradigm of dilation and translation for use in the spectral
analysis of complex-valued univariate or multivariate data. The new procedure
stems from a search for ways to resolve ambiguity problems in this analysis,
such as aliasing caused by too coarsely sampled data, or collisions in
projected data, both of which may be remedied by a translation of the
sampling locations.
In Section 2 both dilation and translation are first presented for the
classical one-dimensional exponential analysis. In the subsequent Sections 3--7
the paradigm is extended to more functions, among which the trigonometric
functions cosine, sine, the hyperbolic cosine and sine functions, the Chebyshev
and spread polynomials, the sinc, gamma and Gaussian function, and several
multivariate versions of all of the above.
Each of these function classes needs a tailored approach, making optimal use
of the properties of the base function used in the considered sparse
interpolation problem. With each of the extensions a structured linear matrix
pencil is associated, immediately leading to a computational scheme for the
spectral analysis, involving a generalized eigenvalue problem and several
structured linear systems.
In Section 8 we illustrate the new methods in several examples: fixed width
Gaussian distribution fitting, sparse cardinal sine or sinc interpolation, and
lacunary or supersparse Chebyshev polynomial interpolation.
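For the classical one-dimensional exponential case, the structured matrix pencil and generalized eigenvalue problem mentioned above can be sketched as follows (the signal and its parameters are illustrative choices for a well-conditioned toy case, not from the paper): two shifted Hankel matrices of samples form the pencil, and its generalized eigenvalues are the unknown exponentials.

```python
import numpy as np

# Samples of f(k) = sum_j c_j * exp(i * phi_j * k); |phi_j| < pi, so the
# sampling is fine enough that no aliasing ambiguity arises.
phi = np.array([1.0, 2.5])          # unknown frequencies (toy values)
c = np.array([1.0, 0.5])            # unknown coefficients
M = len(phi)

k = np.arange(2 * M)
f = (c[None, :] * np.exp(1j * phi[None, :] * k[:, None])).sum(axis=1)

# Structured (Hankel) matrix pencil H1 - lambda * H0 built from the samples.
H0 = np.array([[f[i + j] for j in range(M)] for i in range(M)])
H1 = np.array([[f[i + j + 1] for j in range(M)] for i in range(M)])

# Generalized eigenvalue problem H1 v = lambda H0 v, solved here via the
# eigenvalues of H0^{-1} H1 since H0 is invertible in this toy case.
lam = np.linalg.eigvals(np.linalg.solve(H0, H1))
phi_rec = np.sort(np.angle(lam))
```

Dilation corresponds to sampling with a different stride (replacing z_j by z_j^sigma), and translation to shifting all sampling locations (scaling c_j by z_j^tau); combining such sample sets is what lets the methods in the paper disambiguate aliased or colliding parameters.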
Learning Theory and Approximation
Learning theory studies data structures from samples and aims at understanding the unknown function relations behind them. This leads to interesting theoretical problems which can often be attacked with methods from Approximation Theory. This workshop - the second of this type at the MFO - concentrated on the following recent topics: learning of manifolds and the geometry of data; sparsity and dimension reduction; error analysis and algorithmic aspects, including kernel based methods for regression and classification; and the application of multiscale aspects and of refinement algorithms to learning.
Structured Function Systems and Applications
Quite a few independent investigations have recently been devoted to the analysis and construction of structured function systems, such as wavelet frames with compact support, Gabor frames, refinable functions in the context of subdivision, and so on. However, difficult open questions about the existence, properties and general efficient construction methods of such structured function systems have been left without satisfactory answers. The goal of the workshop was to bring together experts in approximation theory, real algebraic geometry, complex analysis, frame theory and optimization to address key open questions on the subject in a highly interdisciplinary exchange, unique of its kind.
Representation of sparse Legendre expansions
Abstract: We derive a new deterministic algorithm for the computation of a sparse Legendre expansion f of degree N with M ≪ N nonzero terms from only 2M function resp. derivative values f^(j)(1), j = 0, ..., 2M − 1, of this expansion. For this purpose we apply a special annihilating filter method that allows us to separate the computation of the indices of the active Legendre basis polynomials and the evaluation of the corresponding coefficients.
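The input data of the algorithm are the derivative values f^(j)(1); the sketch below (with an illustrative sparse expansion, not from the paper) generates them from the classical closed form P_n^(j)(1) = (n + j)! / (2^j · j! · (n − j)!) and cross-checks against numpy's Legendre arithmetic. The annihilating-filter step of the paper then operates on exactly this sequence.

```python
import numpy as np
from math import factorial
from numpy.polynomial import legendre as L

# A sparse Legendre expansion with M = 2 active terms (degrees and
# coefficients chosen here for illustration): f = 2*P_5 - P_9.
active = {5: 2.0, 9: -1.0}   # degree -> coefficient
M = len(active)

def deriv_at_one(j):
    # f^(j)(1) assembled from the closed-form values P_n^(j)(1).
    return sum(c * factorial(n + j) / (2**j * factorial(j) * factorial(n - j))
               for n, c in active.items())

# The 2M input values f^(j)(1), j = 0, ..., 2M - 1, used by the algorithm.
data = [deriv_at_one(j) for j in range(2 * M)]

# Cross-check against numpy's Legendre differentiation and evaluation.
coefs = np.zeros(10)
for n, c in active.items():
    coefs[n] = c
check = [L.legval(1.0, L.legder(coefs, m=j)) for j in range(2 * M)]
assert np.allclose(data, check)
```

Since P_n(1) = 1 for every degree n, the zeroth value f(1) is simply the sum of the coefficients, which the toy example confirms.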