896 research outputs found
Positive Definiteness and Semi-Definiteness of Even Order Symmetric Cauchy Tensors
Motivated by symmetric Cauchy matrices, we define symmetric Cauchy tensors
and their generating vectors in this paper. Hilbert tensors are symmetric
Cauchy tensors. An even order symmetric Cauchy tensor is positive semi-definite
if and only if its generating vector is positive. An even order symmetric
Cauchy tensor is positive definite if and only if its generating vector has
positive and mutually distinct entries. This extends Fiedler's result for
symmetric Cauchy matrices to symmetric Cauchy tensors. It is then proven that
the positive semi-definiteness of an even order symmetric Cauchy tensor can be
checked equivalently through the monotone increasing property of a homogeneous
polynomial associated with the Cauchy tensor. This homogeneous polynomial
is strictly monotone increasing in the nonnegative orthant of the Euclidean
space when the even order symmetric Cauchy tensor is positive definite.
Furthermore, we prove that the Hadamard product of two positive semi-definite
(positive definite respectively) symmetric Cauchy tensors is a positive
semi-definite (positive definite respectively) tensor, which can be generalized
to the Hadamard product of finitely many positive semi-definite (positive
definite respectively) symmetric Cauchy tensors. Finally, bounds on the largest
H-eigenvalue of a positive semi-definite symmetric Cauchy tensor are given, and
several spectral properties of Z-eigenvalues of odd order symmetric Cauchy
tensors are shown. Further questions on Cauchy tensors are raised.
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
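As a point of reference for the tensor train format highlighted above, here is a minimal NumPy sketch of the standard TT-SVD construction (sequential truncated SVDs of matrix unfoldings); the tolerance-based rank truncation and the random test tensor are illustrative assumptions, not the monograph's reference implementation.

```python
# Sketch: tensor-train (TT) decomposition of a dense tensor via TT-SVD.
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose an n1 x ... x nd tensor into TT cores of shape (r_{k-1}, n_k, r_k)."""
    dims, d = T.shape, T.ndim
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))          # truncate small singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into the full tensor (for verification only)."""
    full = cores[0]
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])

T = np.random.default_rng(1).standard_normal((4, 5, 6, 7))
cores = tt_svd(T)
print(np.allclose(tt_full(cores), T))                    # exact for a small tolerance
```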
Efficient resonance computations for Helmholtz problems based on a Dirichlet-to-Neumann map
We present an efficient procedure for computing resonances and resonant modes
of Helmholtz problems posed in exterior domains. The problem is formulated as a
nonlinear eigenvalue problem (NEP), where the nonlinearity arises from the use
of a Dirichlet-to-Neumann map, which accounts for modeling unbounded domains.
We consider a variational formulation and show that the spectrum consists of
isolated eigenvalues of finite multiplicity that can only accumulate at
infinity. The proposed method is based on a high order finite element
discretization combined with a specialization of the Tensor Infinite Arnoldi
(TIAR) method. Using Toeplitz matrices, we show how to adapt this method to our
specific structure. In particular we introduce a pole cancellation technique in
order to increase the radius of convergence for computation of eigenvalues that
lie close to the poles of the matrix-valued function. The solution scheme can
be applied to multiple resonators with a varying refractive index that is not
necessarily piecewise constant. We present two test cases to show stability,
performance, and numerical accuracy of the method. In particular, the use of a
high order finite element discretization together with TIAR results in an
efficient and reliable method for computing resonances.
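The paper's solver is a specialization of the Tensor Infinite Arnoldi method, which is not reproduced here. As a generic point of reference only, the sketch below shows the shape of a nonlinear eigenvalue problem T(lambda)v = 0 and a textbook Newton iteration on the bordered system for one eigenpair of a small toy problem; the matrices, the square-root nonlinearity (a crude stand-in for a DtN-type term), and the starting guess are all illustrative assumptions, not taken from the paper.

```python
# Sketch: one eigenpair of a toy nonlinear eigenvalue problem T(lam) v = 0
# via Newton's method on the bordered system [T(lam) v; c^H v - 1] = 0.
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
C = 0.1 * rng.standard_normal((n, n))          # small nonlinear coupling (illustrative)

def T(lam):                                    # matrix-valued function T(lambda)
    return A - lam * np.eye(n) + np.sqrt(lam + 0j) * C

def dT(lam):                                   # derivative T'(lambda)
    return -np.eye(n) + 0.5 / np.sqrt(lam + 0j) * C

def newton_nep(lam, v, c, tol=1e-12, maxit=100):
    """Newton iteration for the bordered system; returns (lambda, eigenvector)."""
    for _ in range(maxit):
        r = T(lam) @ v
        if np.linalg.norm(r) < tol:
            break
        J = np.block([[T(lam), (dT(lam) @ v)[:, None]],
                      [c.conj()[None, :], np.zeros((1, 1))]])
        step = np.linalg.solve(J, -np.concatenate([r, [c.conj() @ v - 1]]))
        v, lam = v + step[:n], lam + step[n]
    return lam, v

# Start from an eigenpair of the linear part A - lam*I; the small nonlinear
# term then perturbs it only slightly, which helps Newton converge.
lam0s, V0 = np.linalg.eig(A)
v0 = V0[:, 0].astype(complex)
lam, v = newton_nep(complex(lam0s[0]), v0, v0)
print(lam, np.linalg.norm(T(lam) @ v))         # a tiny residual indicates convergence
```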
Centrosymmetric, Skew Centrosymmetric and Centrosymmetric Cauchy Tensors
Recently, Zhao and Yang introduced centrosymmetric tensors. In this paper, we
further introduce skew centrosymmetric tensors and centrosymmetric Cauchy
tensors, and discuss properties of these three classes of structured tensors.
Some sufficient and necessary conditions for a tensor to be centrosymmetric or
skew centrosymmetric are given. We show that a general tensor can always be
expressed as the sum of a centrosymmetric tensor and a skew centrosymmetric
tensor. Some sufficient and necessary conditions for a Cauchy tensor to be
centrosymmetric or skew centrosymmetric are also given. Spectral properties on
H-eigenvalues and H-eigenvectors of centrosymmetric, skew centrosymmetric and
centrosymmetric Cauchy tensors are discussed. Some further questions on these
tensors are raised.
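To make the decomposition statement concrete, here is a minimal NumPy sketch of the index-reversal split of an arbitrary tensor into centrosymmetric and skew centrosymmetric parts; the standard definitions (invariance, respectively sign change, under reversing every index) are assumed, and the test tensor is an arbitrary illustrative choice.

```python
# Sketch: split a tensor into centrosymmetric and skew centrosymmetric parts.
import numpy as np

def centro_split(A):
    """Return (C, S) with A = C + S, C centrosymmetric, S skew centrosymmetric."""
    R = np.flip(A)                 # reverse every mode: R[i1,...,im] = A[..., n-1-ik, ...]
    return (A + R) / 2, (A - R) / 2

A = np.random.default_rng(3).standard_normal((4, 4, 4))
C, S = centro_split(A)
print(np.allclose(A, C + S),                 # the split is exact
      np.allclose(np.flip(C), C),            # C is centrosymmetric
      np.allclose(np.flip(S), -S))           # S is skew centrosymmetric
```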
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real-world case studies illuminating the benefits of the tensor
framework as an efficient and promising tool for modern signal processing, data
analysis, and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
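As a small companion to the Canonical Polyadic model discussed above, the following NumPy sketch fits a rank-3 CPD of a 3-way tensor by plain alternating least squares; the rank, iteration count, and synthetic test tensor are illustrative assumptions, and production code would add normalization and convergence checks.

```python
# Sketch: Canonical Polyadic decomposition of a 3-way tensor by ALS.
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: move that mode to the rows, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker product: column r is kron(X[:, r], Y[:, r])."""
    return (X[:, None, :] * Y[None, :, :]).reshape(X.shape[0] * Y.shape[0], -1)

def cp_als(T, rank, iters=500, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Synthetic rank-3 tensor, then recover a rank-3 CPD and check the fit.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((s, 3)) for s in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))   # typically a small relative error
```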