Multi-Branch Tensor Network Structure for Tensor-Train Discriminant Analysis
Higher-order data with high dimensionality arise in a diverse set of
application areas such as computer vision, video analytics and medical imaging.
Tensors provide a natural tool for representing these types of data. Although
there has been a lot of work in the area of tensor decomposition and low-rank
tensor approximation, extensions to supervised learning, feature extraction and
classification are still limited. Moreover, most of the existing supervised
tensor learning approaches are based on the orthogonal Tucker model. However,
this model has some limitations for large tensors, including high memory and
computational costs. In this paper, we introduce a supervised learning approach
for tensor classification based on the tensor-train model. In particular, we
introduce a multi-branch tensor network structure for efficient implementation
of tensor-train discriminant analysis (TTDA). The proposed approach takes
advantage of the flexibility of the tensor train structure to implement various
computationally efficient versions of TTDA. This approach is then evaluated on
image and video classification tasks with respect to computation time, storage
cost and classification accuracy, and is compared to both vector- and tensor-based
discriminant analysis methods.
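The tensor-train model underlying the approach above represents a d-way tensor as a chain of small three-way cores. As an illustrative sketch only (the paper's multi-branch TTDA structure is not reproduced here), the standard TT-SVD algorithm computes such cores by sequential truncated SVDs; the function names and the fixed `max_rank` truncation are our own choices for the example:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into tensor-train (TT) cores via sequential SVD.

    Illustrative sketch of the TT format; the paper's discriminant-analysis
    method builds on this representation but is not shown here.
    """
    dims = tensor.shape
    d = len(dims)
    cores = []
    rank_prev = 1
    mat = tensor.reshape(rank_prev * dims[0], -1)
    for k in range(d - 1):
        # Truncated SVD of the current unfolding gives the next core.
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(rank_prev, dims[k], r))
        # Carry the remainder forward, refolded for the next mode.
        mat = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        rank_prev = r
    cores.append(mat.reshape(rank_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

Because each core is only `r × n_k × r`, storage grows linearly in the number of modes, which is the efficiency the abstract contrasts with the orthogonal Tucker model.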
Graph Regularized Tensor Train Decomposition
With the advances in data acquisition technology, tensor objects are
collected in a variety of applications including multimedia, medical and
hyperspectral imaging. As the dimensionality of tensor objects is usually very
high, dimensionality reduction is an important problem. Most of the current
tensor dimensionality reduction methods rely on finding low-rank linear
representations using different generative models. However, it is well-known
that high-dimensional data often reside in a low-dimensional manifold.
Therefore, it is important to find a compact representation, which uncovers the
low-dimensional tensor structure while respecting the intrinsic geometry. In
this paper, we propose a graph regularized tensor train (GRTT) decomposition
that learns a low-rank tensor train model that preserves the local
relationships between tensor samples. The proposed method is formulated as a
nonconvex optimization problem on the Stiefel manifold and an efficient
algorithm is proposed to solve it. The proposed method is compared to existing
tensor-based dimensionality reduction methods as well as tensor manifold
embedding methods for unsupervised learning applications.
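Graph regularization of the kind described above typically penalizes embeddings through a graph Laplacian built from sample affinities. As a hedged sketch of that ingredient only (the GRTT objective and its Stiefel-manifold solver are not reproduced; the k-NN construction and heat-kernel bandwidth here are common choices we assume for illustration):

```python
import numpy as np

def knn_graph_laplacian(samples, k=5):
    """Build an unnormalized graph Laplacian L = D - W from a k-NN
    affinity graph over vectorized tensor samples.

    Illustrative sketch of the manifold-regularization ingredient;
    not the paper's exact construction.
    """
    X = samples.reshape(samples.shape[0], -1)             # vectorize each tensor
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    sigma2 = np.median(d2[d2 > 0])                        # heat-kernel bandwidth
    W = np.zeros_like(d2)
    for i in range(len(X)):
        nn = np.argsort(d2[i])[1:k + 1]                   # k nearest, skipping self
        W[i, nn] = np.exp(-d2[i, nn] / sigma2)
    W = np.maximum(W, W.T)                                # symmetrize the graph
    return np.diag(W.sum(1)) - W                          # L = D - W
```

A quadratic penalty `trace(Y.T @ L @ Y)` on low-dimensional representations `Y` then encourages samples that are neighbors in the graph to stay close after reduction, which is the local-geometry preservation the abstract refers to.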