On certain multivariate Vandermonde determinants whose variables separate
We prove that for almost square tensor product grids and certain sets of
bivariate polynomials the Vandermonde determinant can be factored into a
product of univariate Vandermonde determinants. This result generalizes the
conjecture [Lemma 1, L. Bos et al. (2009), Dolomites Research Notes on
Approximation, 2:1-15]. As a special case, we apply the result to Padua and
Padua-like points.
Comment: 10 pages, 1 figure
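For background, the univariate Vandermonde determinant that appears as a factor in such results has the classical closed form below; the separated bivariate formula itself is the content of the paper and is not reproduced here.

```latex
\[
\det
\begin{pmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & \vdots & \vdots & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{pmatrix}
= \prod_{0 \le i < j \le n} (x_j - x_i).
\]
```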
Improved initial approximation for errors-in-variables system identification
Errors-in-variables system identification can be posed and solved as a Hankel structured low-rank approximation problem. In this paper, different estimates based on suboptimal low-rank approximations are considered. The estimates are shown to have almost the same efficiency and to lead to the same minimum when supplied as an initial approximation to a local optimization solver for the structured low-rank approximation problem. It is also shown that increasing the Hankel matrix window length improves the suboptimal estimates for autonomous systems but does not improve them for systems with inputs.
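As an illustration of the kind of suboptimal estimate discussed above, the sketch below builds a Hankel matrix from a scalar time series and takes its truncated SVD as an unstructured low-rank approximation. It is a minimal sketch only, not the specific estimators compared in the paper; the window length and rank are illustrative.

```python
import numpy as np
from scipy.linalg import hankel

def suboptimal_hankel_estimate(y, window, rank):
    """Unstructured low-rank approximation of a Hankel matrix built from the
    data y -- the kind of cheap estimate often used as an initial approximation
    for a local structured low-rank approximation solver (illustrative only)."""
    H = hankel(y[:window], y[window - 1:])        # window x (len(y)-window+1) Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]   # best unstructured rank-r fit

# Toy usage: a noisy sum of two damped cosines (exactly rank-4 Hankel structure without noise).
t = np.arange(60)
y = np.cos(0.3 * t) * 0.95**t + 0.5 * np.cos(1.1 * t) * 0.9**t + 0.01 * np.random.randn(60)
H_lr = suboptimal_hankel_estimate(y, window=20, rank=4)
```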
Approximate matrix and tensor diagonalization by unitary transformations: convergence of Jacobi-type algorithms
We propose a gradient-based Jacobi algorithm for a class of maximization
problems on the unitary group, with a focus on approximate diagonalization of
complex matrices and tensors by unitary transformations. We provide weak
convergence results, and prove local linear convergence of this algorithm. The
convergence results also apply to the case of real-valued tensors.
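The algorithm itself is not reproduced here; as a minimal real-valued analogue, the classical cyclic Jacobi iteration below diagonalizes a symmetric matrix by plane rotations, each of which increases the diagonal energy that Jacobi-type methods maximize.

```python
import numpy as np

def jacobi_diagonalize(A, sweeps=30, tol=1e-12):
    """Classical cyclic Jacobi iteration for a real symmetric matrix:
    each plane rotation zeroes one off-diagonal entry, monotonically
    increasing the diagonal 'energy' sum(diag(A)**2)."""
    A = A.copy()
    n = A.shape[0]
    Q = np.eye(n)                        # accumulated orthogonal transformation
    for _ in range(sweeps):
        if np.linalg.norm(A - np.diag(np.diag(A))) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(n)
                G[p, p] = G[q, q] = c
                G[p, q], G[q, p] = -s, s
                A = G.T @ A @ G          # rotation chosen so that A[p, q] becomes 0
                Q = Q @ G
    return np.diag(A), Q

# Sanity check: the accumulated rotations diagonalize the original matrix.
M = np.random.randn(5, 5); M = M + M.T
d, Q = jacobi_diagonalize(M)
assert np.allclose(Q @ np.diag(d) @ Q.T, M, atol=1e-8)
```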
Hyperspectral Super-Resolution with Coupled Tucker Approximation: Recoverability and SVD-based algorithms
We propose a novel approach for hyperspectral super-resolution that is based
on low-rank tensor approximation for a coupled low-rank multilinear (Tucker)
model. We show that correct recovery holds for a wide range of multilinear
ranks. For coupled tensor approximation, we propose two SVD-based algorithms
that are simple and fast, but with performance comparable to state-of-the-art
methods. The approach is applicable to the case of unknown spatial degradation
and to the pansharpening problem.
Comment: IEEE Transactions on Signal Processing, Institute of Electrical and
Electronics Engineers, in press
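As background for the SVD-based algorithms mentioned above, the sketch below implements a plain truncated HOSVD, the basic SVD-based Tucker approximation such methods build on. It is not the paper's coupled algorithm, and the toy tensor sizes are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Truncated higher-order SVD: an SVD-based Tucker approximation with
    multilinear ranks `ranks` (one per mode)."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])        # leading mode-k singular vectors
    core = T
    for mode, U in enumerate(factors):  # core = T  x_1 U1^T  x_2 U2^T  x_3 U3^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Toy usage: a tensor with multilinear ranks (3, 4, 2) is recovered exactly.
G = np.random.randn(3, 4, 2)
A, B, C = np.random.randn(30, 3), np.random.randn(30, 4), np.random.randn(10, 2)
T = np.einsum('ijk,ai,bj,ck->abc', G, A, B, C)
core, (Ua, Ub, Uc) = truncated_hosvd(T, ranks=(3, 4, 2))
T_hat = np.einsum('ijk,ai,bj,ck->abc', core, Ua, Ub, Uc)
assert np.allclose(T, T_hat)
```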
Spectral properties of kernel matrices in the flat limit
Kernel matrices are of central importance to many applied fields. In this
manuscript, we focus on spectral properties of kernel matrices in the so-called
"flat limit", which occurs when points are close together relative to the scale
of the kernel. We establish asymptotic expressions for the determinants of the
kernel matrices, which we then leverage to obtain asymptotic expressions for
the main terms of the eigenvalues. Analyticity of the eigenprojectors yields
expressions for limiting eigenvectors, which are strongly tied to discrete
orthogonal polynomials. Both smooth and finitely smooth kernels are covered,
with stronger results available in the finite smoothness case.
Comment: 40 pages, 8 page
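A small numerical illustration of the flat-limit regime, assuming a Gaussian kernel for concreteness; the asymptotic expressions of the manuscript are not reproduced here.

```python
import numpy as np

# The "flat limit": scale the points by eps while keeping the kernel bandwidth
# fixed, so the points become close relative to the kernel's scale. The kernel
# matrix then approaches the (rank-one) all-ones matrix, and its eigenvalues
# separate into groups of very different orders of magnitude.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=8)                       # fixed node configuration

def gaussian_kernel_matrix(pts):
    d = pts[:, None] - pts[None, :]
    return np.exp(-d**2)                            # unit-bandwidth Gaussian kernel

for eps in [1.0, 0.3, 0.1, 0.03]:
    K = gaussian_kernel_matrix(eps * x)             # eps -> 0 is the flat limit
    eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]
    print(f"eps={eps:4.2f}  eigenvalues: {np.array2string(eigvals, precision=2)}")
```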
Hankel low-rank matrix completion: performance of the nuclear norm relaxation
The completion of matrices with missing values under a rank constraint is a non-convex optimization problem. A popular convex relaxation is based on minimization of the nuclear norm (the sum of the singular values) of the matrix. For this relaxation, an important question is whether the two optimization problems lead to the same solution. This question has been addressed in the literature mostly in the case of randomly positioned missing elements and random known elements. In this contribution, we analyze the case of structured matrices with a fixed pattern of missing values, namely, Hankel matrix completion. We extend existing results on completion of rank-one real Hankel matrices to completion of rank-r complex Hankel matrices.
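A minimal sketch of the nuclear norm relaxation for Hankel completion, written with CVXPY; the data, window sizes, and missing pattern are illustrative, and whether the relaxation recovers the true rank-constrained solution is precisely the question analyzed above.

```python
import numpy as np
import cvxpy as cp

# Toy data: a sequence generated by a rank-2 Hankel model (one damped cosine),
# with a fixed pattern of missing samples.
t = np.arange(15)
y = np.cos(0.4 * t) * 0.9**t
missing = [3, 7, 8, 12]
known = [k for k in range(len(y)) if k not in missing]

m, n = 8, len(y) - 8 + 1                 # Hankel window sizes (m + n - 1 = len(y))
h = cp.Variable(len(y))                  # the completed sequence
H = cp.Variable((m, n))                  # its Hankel matrix

constraints = [H[i, j] == h[i + j] for i in range(m) for j in range(n)]  # Hankel structure
constraints += [h[k] == y[k] for k in known]                             # match known samples

# Nuclear-norm relaxation of the rank-constrained completion problem.
prob = cp.Problem(cp.Minimize(cp.norm(H, "nuc")), constraints)
prob.solve(solver=cp.SCS)

# Whether these coincide is exactly what the recovery analysis addresses.
print("recovered missing samples:", np.round(h.value[missing], 4))
print("true missing samples:     ", np.round(y[missing], 4))
```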