Multilinear Subspace Clustering
In this paper we present a new model and an algorithm for unsupervised
clustering of 2-D data such as images. We assume that the data comes from a
union of multilinear subspaces (UOMS) model, which is a specific structured
case of the much studied union of subspaces (UOS) model. For segmentation under
this model, we develop Multilinear Subspace Clustering (MSC) algorithm and
evaluate its performance on the YaleB and Olivetti image data sets. We show
that MSC is highly competitive with existing algorithms employing the UOS model
in terms of clustering performance while enjoying improved computational
complexity.
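As a point of reference for the union-of-subspaces setting, the toy sketch below clusters points drawn from two one-dimensional subspaces by building a self-expressive affinity and splitting it with the Fiedler vector of the normalized graph Laplacian. It is a generic UOS illustration with made-up sizes, not the MSC algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D subspaces (lines) in R^5, 20 points on each.
u1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
u2 = np.array([1.0, 1.0, 0.0, 0.0, 0.0]) / np.sqrt(2.0)
X = np.hstack([np.outer(u1, rng.standard_normal(20)),
               np.outer(u2, rng.standard_normal(20))])
X /= np.linalg.norm(X, axis=0)              # unit-norm columns

# Self-expressive affinity: points on the same line correlate maximally.
A = np.abs(X.T @ X)
np.fill_diagonal(A, 0.0)

# Spectral split: sign of the Fiedler vector of the normalized Laplacian.
d = A.sum(axis=1)
L = np.eye(A.shape[0]) - A / np.sqrt(np.outer(d, d))
_, vecs = np.linalg.eigh(L)                 # eigenvalues in ascending order
labels = (vecs[:, 1] > 0).astype(int)       # 0/1 cluster assignment
```

Since within-subspace affinities are exactly 1 and cross-subspace affinities are strictly smaller, the second eigenvector is constant on each subspace with opposite signs, so thresholding it at zero recovers the two groups.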
Tensor Representation in High-Frequency Financial Data for Price Change Prediction
Nowadays, with the availability of massive amounts of trade data collected,
the dynamics of the financial markets pose both a challenge and an opportunity
for high frequency traders. In order to take advantage of the rapid, subtle
movement of assets in High Frequency Trading (HFT), an automatic algorithm to
analyze and detect patterns of price change based on transaction records must
be available. The multichannel, time-series representation of financial data
naturally suggests tensor-based learning algorithms. In this work, we
investigate the effectiveness of two multilinear methods for the mid-price
prediction problem against other existing methods. Experiments on a large-scale
dataset containing more than 4 million limit orders show that by utilizing
tensor representation, multilinear models outperform vector-based approaches
and other competing methods. Comment: accepted in SSCI 2017, typos fixed
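To make the representation concrete, the sketch below contrasts the flattened, vector-based view of multichannel financial time series with a multilinear (mode-wise) projection. The feature counts, projection sizes, and factor matrices are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mini-batch: 8 samples, each a (channels x time) matrix,
# e.g. 10 limit-order-book features over 50 time steps.
X = rng.standard_normal((8, 10, 50))

# Vector-based view: flatten each sample into a 500-dim vector, so any
# linear map on it needs 500 weights per output unit.
X_vec = X.reshape(8, -1)

# Multilinear view: separate channel and temporal projections, needing
# only 3*10 + 5*50 = 280 weights for a 3x5 output per sample.
U_ch = rng.standard_normal((3, 10))    # channel factor (assumed size)
U_t = rng.standard_normal((5, 50))     # temporal factor (assumed size)
Z = np.einsum('nct,ac,bt->nab', X, U_ch, U_t)
```

The parameter savings of the mode-wise factorization is one reason tensor-based models scale better than vectorized ones on such data.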
Efficient illumination independent appearance-based face tracking
One of the major challenges that visual tracking algorithms face nowadays is being
able to cope with changes in the appearance of the target during tracking. Linear
subspace models have been extensively studied and are possibly the most popular
way of modelling target appearance. We introduce a linear subspace representation
in which the appearance of a face is represented by the addition of two
approximately independent linear subspaces modelling facial expressions and illumination
respectively. This model is more compact than previous bilinear or multilinear
approaches. The independence assumption notably simplifies system training. We only
require two image sequences. One facial expression is subject to all possible
illuminations in one sequence and the face adopts all facial expressions under one particular
illumination in the other. This simple model enables us to train the system with
no manual intervention. We also revisit the problem of efficiently fitting a linear
subspace-based model to a target image and introduce an additive procedure for
solving this problem. We prove that Matthews and Baker’s Inverse Compositional
Approach makes a smoothness assumption on the subspace basis that is
equivalent to Hager and Belhumeur’s, which worsens convergence. Our approach differs
from Hager and Belhumeur’s additive and Matthews and Baker’s compositional
approaches in that we make no smoothness assumptions on the subspace basis. In the
experiments conducted we show that the model introduced accurately represents
the appearance variations caused by illumination changes and facial expressions.
We also verify experimentally that our fitting procedure is more accurate and has
a better convergence rate than the other related approaches, albeit at the expense of
a slight increase in computational cost. Our approach can be used for tracking a
human face at standard video frame rates on an average personal computer.
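The additive subspace-fitting step described above can be caricatured in a few lines: once a basis is available, recovering the coefficients of a target image is a linear least-squares problem with no smoothness assumption on the basis. The dimensions and bases below are synthetic stand-ins, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical appearance model (sizes are illustrative): a mean face plus
# two approximately independent bases, one for expression and one for
# illumination, all stored as flattened images.
d = 100
mean = rng.standard_normal(d)
B_expr, _ = np.linalg.qr(rng.standard_normal((d, 4)))   # expression basis
B_illum, _ = np.linalg.qr(rng.standard_normal((d, 3)))  # illumination basis
B = np.hstack([B_expr, B_illum])

# Synthesize a target from known coefficients, then recover them with one
# additive least-squares step.
c_true = rng.standard_normal(B.shape[1])
target = mean + B @ c_true
c_hat, *_ = np.linalg.lstsq(B, target - mean, rcond=None)
```

Because the combined basis generically has full column rank, the least-squares step recovers the generating coefficients exactly for a target lying in the model's span.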
N-Dimensional Principal Component Analysis
In this paper, we first briefly introduce multidimensional Principal Component Analysis (PCA) techniques, and then amend our previous N-dimensional PCA (ND-PCA) scheme by introducing multidirectional decomposition into the ND-PCA implementation. For the case of high dimensionality, the PCA technique is usually extended to an arbitrary n-dimensional space by the Higher-Order Singular Value Decomposition (HO-SVD) technique. Due to the size of the tensor, an HO-SVD implementation usually leads to a huge matrix along some direction of the tensor, which is always beyond the capacity of an ordinary PC. The novelty of this paper is to amend our previous ND-PCA scheme to deal with this challenge, and further to prove that the revised ND-PCA scheme can provide a near-optimal linear solution under the given error bound. To evaluate the numerical properties of the revised ND-PCA scheme, experiments are performed on a set of 3D volume datasets.
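As background for the HO-SVD machinery this scheme builds on, here is a minimal truncated HOSVD sketch using only mode-wise SVDs of the unfoldings. The shapes are arbitrary, and the code illustrates the decomposition itself, not the revised ND-PCA algorithm.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along `mode` (that mode becomes the rows)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-`mode` product of tensor T with matrix M."""
    moved = np.tensordot(M, np.moveaxis(T, mode, 0), axes=1)
    return np.moveaxis(moved, 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode SVD bases plus the projected core."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):
        core = mode_mult(core, Um.T, m)
    return core, U

rng = np.random.default_rng(3)
T = rng.standard_normal((6, 7, 8))

# With full ranks the HOSVD reconstructs T exactly; truncating the ranks
# instead yields a near-optimal low-multilinear-rank approximation.
core, U = hosvd(T, (6, 7, 8))
R = core
for m, Um in enumerate(U):
    R = mode_mult(R, Um, m)
```

The huge-unfolding problem the abstract mentions shows up in `unfold`: one dimension of the matricized tensor is the product of all remaining mode sizes, which is what a practical scheme must avoid materializing.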
Tensor-based Subspace Factorization for StyleGAN
In this paper, we propose a tensor-based method for modeling the
latent space of generative models. The objective is to identify semantic
directions in latent space. To this end, we propose to fit a multilinear tensor
model on a structured facial expression database, which is initially embedded
into latent space. We validate our approach on StyleGAN trained on FFHQ using
BU-3DFE as a structured facial expression database. We show how the parameters
of the multilinear tensor model can be approximated by Alternating Least
Squares. Further, we introduce a stacked style-separated tensor model, defined
as an ensemble of style-specific models to integrate our approach with the
extended latent space of StyleGAN. We show that taking the individual styles of
the extended latent space into account leads to higher model flexibility and
lower reconstruction error. Finally, we do several experiments comparing our
approach to former work on both GANs and multilinear models. Concretely, we
analyze the expression subspace and find that the expression trajectories meet
at an apathetic face that is consistent with earlier work. We also show that by
changing the pose of a person, the generated image from our approach is closer
to the ground truth than results from two competing approaches. Comment: Accepted for FG202
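The Alternating Least Squares fitting mentioned above can be sketched for a generic Tucker-2-style multilinear model with identity and expression factors. All sizes, variable names, and the synthetic data are assumptions for illustration, not the paper's StyleGAN setup.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical embedded data: 20 identities x 5 expressions, each mapped
# to a 64-dim latent code; model ranks (3, 2). All sizes are invented.
I, E, D, R, S = 20, 5, 64, 3, 2
T = np.einsum('ir,es,rsd->ied',
              rng.standard_normal((I, R)),
              rng.standard_normal((E, S)),
              rng.standard_normal((R, S, D)))

# Multilinear model T[i,e,:] ~ sum_{r,s} A[i,r] * B[e,s] * core[r,s,:],
# fitted by ALS: each factor update is a linear least-squares solve,
# so the fit error never increases.
A = rng.standard_normal((I, R))
B = rng.standard_normal((E, S))
core = rng.standard_normal((R, S, D))

def err(A, B, core):
    return np.linalg.norm(T - np.einsum('ir,es,rsd->ied', A, B, core))

errors = [err(A, B, core)]
for _ in range(10):
    M = np.einsum('es,rsd->red', B, core).reshape(R, -1)
    A = np.linalg.lstsq(M.T, T.reshape(I, -1).T, rcond=None)[0].T
    N = np.einsum('ir,rsd->sid', A, core).reshape(S, -1)
    B = np.linalg.lstsq(N.T, T.transpose(1, 0, 2).reshape(E, -1).T,
                        rcond=None)[0].T
    K = np.kron(A, B)                   # (I*E, R*S) design for the core
    core = np.linalg.lstsq(K, T.reshape(I * E, D),
                           rcond=None)[0].reshape(R, S, D)
    errors.append(err(A, B, core))
```

Each update holds two factors fixed and solves exactly for the third, which is why monotone error decrease is guaranteed even though the joint problem is non-convex.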
Tumor Classification Using High-Order Gene Expression Profiles Based on Multilinear ICA
Motivation. Independent Component Analysis (ICA) maximizes the statistical
independence of the representational components of a training gene expression
profile (GEP) ensemble, but it cannot distinguish relations between different
factors, or different modes, and it is not applicable to high-order GEP data
mining. To generalize ICA, we introduce Multilinear-ICA and apply it to tumor
classification using high-order GEP. First, we introduce the basic concepts
and operations of tensors and describe the Support Vector Machine (SVM)
classifier and Multilinear-ICA. Second, the highest-scoring genes of the
original high-order GEP are selected using t-statistics and arranged into
tensors. Third, Multilinear-ICA is performed on the tensors. Finally, the SVM
is used to classify the tumor subtypes. Results. To show the validity of the
proposed method, we apply it to tumor classification using high-order GEP.
Though we use only three datasets, the experimental results show that the
method is effective and feasible. Through this study, we hope to gain some
insight into the problem of high-order GEP tumor classification, in aid of
further developing more effective tumor classification algorithms.
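The gene-selection step of the pipeline can be illustrated with a two-sample t-statistic on synthetic expression data. The sample counts, gene counts, and effect sizes below are invented for the example; no actual GEP data is used.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic GEP matrix: 30 tumor samples x 200 genes, two subtypes.
X = rng.standard_normal((30, 200))
y = np.array([0] * 15 + [1] * 15)
X[y == 1, :5] += 3.0                # make the first 5 genes informative

# Rank genes by the two-sample (Welch) t-statistic and keep the top 5;
# the selected genes would then be arranged into tensors for
# Multilinear-ICA in the described pipeline.
m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
v0, v1 = X[y == 0].var(axis=0, ddof=1), X[y == 1].var(axis=0, ddof=1)
t = (m1 - m0) / np.sqrt(v0 / 15 + v1 / 15)
top = np.argsort(-np.abs(t))[:5]
```

Selecting genes by t-statistic before the multilinear step keeps the tensors small while retaining the most class-discriminative measurements.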
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike the matrix methods, is guaranteed under very mild and natural
conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker
decomposition, HOSVD, tensor networks, Tensor Train
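As a tiny companion to the Canonical Polyadic material, the snippet below builds a rank-2 CP tensor and checks the standard fact that every mode-n unfolding of a rank-R CP tensor has matrix rank at most R. The tensor sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)

# Rank-2 CP tensor: a sum of two rank-1 outer products a_r ∘ b_r ∘ c_r.
a = rng.standard_normal((2, 4))
b = rng.standard_normal((2, 5))
c = rng.standard_normal((2, 6))
T = np.einsum('ri,rj,rk->ijk', a, b, c)

# Each mode-n unfolding of a rank-R CP tensor has matrix rank <= R
# (with generic factors, exactly R).
ranks = [int(np.linalg.matrix_rank(
             np.moveaxis(T, m, 0).reshape(T.shape[m], -1)))
         for m in range(3)]
```

This low multilinear rank of the unfoldings is what the uniqueness and compression arguments in the tensor literature exploit.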