3,268 research outputs found
Spectral tensor-train decomposition
The accurate approximation of high-dimensional functions is an essential task
in uncertainty quantification and many other fields. We propose a new function
approximation scheme based on a spectral extension of the tensor-train (TT)
decomposition. We first define a functional version of the TT decomposition and
analyze its properties. We obtain results on the convergence of the
decomposition, revealing links between the regularity of the function, the
dimension of the input space, and the TT ranks. We also show that the
regularity of the target function is preserved by the univariate functions
(i.e., the "cores") comprising the functional TT decomposition. This result
motivates an approximation scheme employing polynomial approximations of the
cores. For functions with appropriate regularity, the resulting
\textit{spectral tensor-train decomposition} combines the favorable
dimension-scaling of the TT decomposition with the spectral convergence rate of
polynomial approximations, yielding efficient and accurate surrogates for
high-dimensional functions. To construct these decompositions, we use the
sampling algorithm \texttt{TT-DMRG-cross} to obtain the TT decomposition of
tensors resulting from suitable discretizations of the target function. We
assess the performance of the method on a range of numerical examples: a
modified set of Genz functions of increasing dimension, and functions with
mixed Fourier modes or with local features. We observe significant improvements
in performance over an anisotropic adaptive Smolyak approach. The method is
also used to approximate the solution of an elliptic PDE with random input
data. The open source software and examples presented in this work are
available online.
Comment: 33 pages, 19 figures
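The paper constructs its decompositions with the sampling-based \texttt{TT-DMRG-cross} algorithm; as a minimal illustration of the TT format itself, the sketch below computes a TT decomposition of a small full tensor by sequential truncated SVDs (the classical TT-SVD, which forms the full tensor and is therefore not the paper's method). Function names and the truncation tolerance are illustrative assumptions.

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a full tensor into TT cores via sequential truncated SVDs.

    Illustrative TT-SVD only; the paper uses the sampling-based
    TT-DMRG-cross algorithm, which never forms the full tensor.
    """
    shape = tensor.shape
    cores, r, C = [], 1, tensor
    for k in range(len(shape) - 1):
        C = C.reshape(r * shape[k], -1)           # unfold: (r_{k-1} n_k) x rest
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        rk = max(1, int(np.sum(s > eps * s[0])))  # drop small singular values
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        C = s[:rk, None] * Vt[:rk]                # carry the remainder forward
        r = rk
    cores.append(C.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]                                # shape (1, n_1, r_1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape(out.shape[1:-1])           # strip boundary ranks of size 1
```

For a function such as sin(x + y + z), whose grid tensor has TT ranks at most 2, the truncated SVDs recover the low ranks automatically; this is the dimension-scaling advantage the abstract refers to.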
Bivariate Lagrange interpolation at the node points of Lissajous curves - the degenerate case
In this article, we study bivariate polynomial interpolation on the node
points of degenerate Lissajous figures. These node points form Chebyshev
lattices and are generalizations of the well-known Padua points. We
show that these node points allow unique interpolation in appropriately defined
spaces of polynomials and give explicit formulas for the Lagrange basis
polynomials. Further, we prove mean and uniform convergence of the
interpolating schemes. For the uniform convergence the growth of the Lebesgue
constant has to be taken into consideration. It turns out that this growth is
of logarithmic nature.
Comment: 26 pages, 6 figures, 1 table
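Since the node sets above generalize the Padua points, a small sketch may help fix ideas. In one common convention, the Padua points of degree n are the subset of a Chebyshev-style grid selected by a parity condition, and their number equals the dimension (n+1)(n+2)/2 of bivariate polynomials of total degree at most n, which is what makes unique interpolation possible. (Illustrative only; there are several equivalent Padua families, and the paper's degenerate Lissajous node points generalize this construction.)

```python
import math

def padua_points(n):
    """Padua points of degree n >= 1 (one standard convention):
    grid points (cos(j*pi/n), cos(k*pi/(n+1))) with even index sum."""
    pts = []
    for j in range(n + 1):
        for k in range(n + 2):
            if (j + k) % 2 == 0:
                pts.append((math.cos(j * math.pi / n),
                            math.cos(k * math.pi / (n + 1))))
    return pts
```

The count (n+1)(n+2)/2 matching the polynomial space dimension is the combinatorial fact behind unisolvence.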
Time and spectral domain relative entropy: A new approach to multivariate spectral estimation
The concept of spectral relative entropy rate is introduced for jointly
stationary Gaussian processes. Using classical information-theoretic results,
we establish a remarkable connection between time and spectral domain relative
entropy rates. This naturally leads to a new spectral estimation technique
where a multivariate version of the Itakura-Saito distance is employed. It may
be viewed as an extension of the approach, called THREE, introduced by Byrnes,
Georgiou and Lindquist in 2000, which, in turn, followed in the footsteps of the
Burg-Jaynes Maximum Entropy Method. Spectral estimation is here recast in the
form of a constrained spectrum approximation problem where the distance is
equal to the processes' relative entropy rate. The corresponding solution
entails a complexity upper bound which improves on the one so far available in
the multichannel framework. Indeed, it is equal to the one featured by THREE in
the scalar case. The solution is computed via a globally convergent matricial
Newton-type algorithm. Simulations suggest the effectiveness of the new
technique in tackling multivariate spectral estimation tasks, especially in the
case of short data records.
Comment: 32 pages, submitted for publication
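One common multivariate generalization of the Itakura-Saito distance between m x m spectral densities Phi and Psi is the integral of tr(Phi Psi^-1) - log det(Phi Psi^-1) - m over normalized frequency; each integrand term is nonnegative because the eigenvalues lambda of Psi^-1 Phi are positive and lambda - log(lambda) - 1 >= 0. A rough numerical sketch (a Riemann sum on a uniform frequency grid; normalization conventions vary, and this is not the paper's exact estimator):

```python
import numpy as np

def is_divergence(Phi, Psi):
    """Multivariate Itakura-Saito divergence between matrix-valued spectra.

    Phi, Psi: arrays of shape (N, m, m), Hermitian positive definite at
    each of N uniformly spaced frequencies. Returns the Riemann-sum
    approximation of the integral of tr(Phi Psi^-1) - log det(Phi Psi^-1) - m.
    """
    N, m, _ = Phi.shape
    total = 0.0
    for k in range(N):
        R = np.linalg.solve(Psi[k], Phi[k])       # Psi^{-1} Phi at frequency k
        _, logdet = np.linalg.slogdet(R)          # det(R) > 0 for SPD pair
        total += np.trace(R).real - logdet - m
    return total / N                              # uniform weights d(theta)/(2*pi)
```

The divergence vanishes exactly when the two spectra coincide at every frequency, which is why it can serve as the distance in the constrained spectrum approximation problem described above.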
A new family of high-resolution multivariate spectral estimators
In this paper, we extend the Beta divergence family to multivariate power
spectral densities. Similarly to the scalar case, we show that it smoothly
connects the multivariate Kullback-Leibler divergence with the multivariate
Itakura-Saito distance. We successively study a spectrum approximation problem,
based on the Beta divergence family, which is related to a multivariate
extension of the THREE spectral estimation technique. It is then possible to
characterize a family of solutions to the problem. An upper bound on the
complexity of these solutions will also be provided. Simulations suggest that
the most suitable solution of this family depends on the specific features
required of the estimation problem.
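In the scalar case the Beta divergence family mentioned above can be sketched in a few lines: for beta -> 1 it recovers the generalized Kullback-Leibler divergence and for beta -> 0 the Itakura-Saito distance. A hedged numerical check of those limits (scalar positive densities only; the paper works with matrix-valued spectral densities):

```python
import math

def beta_div(x, y, beta):
    """Scalar Beta divergence between positive x and y, beta not in {0, 1}."""
    return (x**beta + (beta - 1.0) * y**beta
            - beta * x * y**(beta - 1.0)) / (beta * (beta - 1.0))

def kl_div(x, y):
    """Generalized Kullback-Leibler divergence: the beta -> 1 limit."""
    return x * math.log(x / y) - x + y

def is_div(x, y):
    """Itakura-Saito distance: the beta -> 0 limit."""
    return x / y - math.log(x / y) - 1.0
```

Evaluating beta_div at beta close to 0 or 1 approaches the two classical divergences, which is the "smooth connection" the abstract describes.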
On the Continuity of Multivariate Lagrange Interpolation at Chung-Yao Lattices
We give a natural geometric condition that ensures that sequences of
Chung-Yao interpolation polynomials (of fixed degree) of sufficiently
differentiable functions converge to a Taylor polynomial.
Hermite matrix in Lagrange basis for scaling static output feedback polynomial matrix inequalities
Using Hermite's formulation of polynomial stability conditions, static output
feedback (SOF) controller design can be formulated as a polynomial matrix
inequality (PMI), a (generally nonconvex) nonlinear semidefinite programming
problem that can be solved (locally) with PENNON, an implementation of a
penalty method. Typically, Hermite SOF PMI problems are badly scaled and
experiments reveal that this has a negative impact on the overall performance
of the solver. In this note we recall the algebraic interpretation of Hermite's
quadratic form as a particular Bezoutian and we use results on polynomial
interpolation to express the Hermite PMI in a Lagrange polynomial basis, as an
alternative to the conventional power basis. Numerical experiments on benchmark
problem instances show the substantial improvement brought by the approach, in
terms of problem scaling, number of iterations and convergence behavior of
PENNON.
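The basis change behind this rescaling can be illustrated in a few lines: if pi(x) = (1, x, ..., x^n) is the power basis and l(x) the Lagrange basis at nodes x_0, ..., x_n, then pi(x) = V^T l(x), where V is the Vandermonde matrix of the nodes, so any quadratic form pi^T H pi is re-expressed as l^T (V H V^T) l. A small numerical sketch with Chebyshev nodes (illustrative only: H here is a random symmetric matrix standing in for a Hermite matrix, not one from an actual SOF problem):

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Values l_i(x) of the Lagrange basis polynomials at the point x."""
    L = np.ones(len(nodes))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                L[i] *= (x - xj) / (xi - xj)
    return L

n = 4
# Chebyshev nodes, a well-conditioned choice of interpolation points
nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
V = np.vander(nodes, increasing=True)   # V[i, j] = nodes[i]**j
rng = np.random.default_rng(1)
H = rng.standard_normal((n + 1, n + 1))
H = H + H.T                             # symmetric stand-in for a Hermite matrix
H_lagrange = V @ H @ V.T                # the same quadratic form, Lagrange basis
```

The congruence H -> V H V^T preserves the values of the quadratic form (and hence the PMI feasible set) while changing its numerical scaling, which is the mechanism the note exploits.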