Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods
Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and small-sample problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
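As a concrete illustration of the kernel extensions discussed above, the following sketch implements kernel PCA with an RBF kernel in plain NumPy. The kernel choice, the `gamma` value, and the number of components are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances via ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Double-center the kernel matrix (centering in feature space)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    eigvals, eigvecs = np.linalg.eigh(Kc)
    # eigh returns eigenvalues in ascending order; take the largest ones
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize coefficients so projected features have unit-norm directions
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas  # projections of the training points

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Z = kernel_pca(X, n_components=2, gamma=0.1)
```

With a linear kernel this reduces to ordinary PCA, which is the relationship the tutorial's uniform treatment exploits.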
Sparse principal component analysis via axis-aligned random projections
We introduce a new method for sparse principal component analysis, based on
the aggregation of eigenvector information from carefully-selected axis-aligned
random projections of the sample covariance matrix. Unlike most alternative
approaches, our algorithm is non-iterative and therefore not vulnerable to a bad
choice of initialisation. We provide theoretical guarantees under which our
principal subspace estimator can attain the minimax optimal rate of convergence
in polynomial time. In addition, our theory provides a more refined
understanding of the statistical and computational trade-off in the problem of
sparse principal component estimation, revealing a subtle interplay between the
effective sample size and the number of random projections that are required to
achieve the minimax optimal rate. Numerical studies provide further insight
into the procedure and confirm its highly competitive finite-sample
performance.

The research of the first and third authors was supported by Engineering and Physical Sciences Research Council (EPSRC) grant EP/N014588/1 for the Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging. The second and third authors were supported by EPSRC Fellowships EP/J017213/1 and EP/P031447/1, and by grant RG81761 from the Leverhulme Trust.
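The core idea of examining axis-aligned random projections of the sample covariance can be sketched as follows. This simplified version selects the single best k-by-k principal submatrix by leading eigenvalue rather than performing the paper's aggregation step, and `k` and `n_proj` are hypothetical choices.

```python
import numpy as np

def sparse_pc_random_projections(S, k=5, n_proj=200, seed=0):
    # Examine axis-aligned k x k submatrices of the covariance S, keep the
    # one with the largest leading eigenvalue, and embed its leading
    # eigenvector back into p dimensions (zeros off the chosen coordinates).
    rng = np.random.default_rng(seed)
    p = S.shape[0]
    best_val, best_vec = -np.inf, None
    for _ in range(n_proj):
        idx = rng.choice(p, size=k, replace=False)
        sub = S[np.ix_(idx, idx)]
        vals, vecs = np.linalg.eigh(sub)
        if vals[-1] > best_val:
            best_val = vals[-1]
            v = np.zeros(p)
            v[idx] = vecs[:, -1]   # unit-norm eigenvector on the subset
            best_vec = v
    return best_vec

# Synthetic example: identity covariance plus a 5-sparse spike
p = 30
u = np.zeros(p); u[:5] = 1 / np.sqrt(5)
S = np.eye(p) + 4 * np.outer(u, u)
v = sparse_pc_random_projections(S, k=5)
```

The resulting estimate is k-sparse by construction; the trade-off the paper quantifies is how many such projections are needed, relative to the effective sample size, to recover the true support.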
Robust Orthogonal Complement Principal Component Analysis
Recently, the robustification of principal component analysis has attracted
lots of attention from statisticians, engineers and computer scientists. In
this work we study the type of outliers that are not necessarily apparent in
the original observation space but can seriously affect the principal subspace
estimation. Based on a mathematical formulation of such transformed outliers, a
novel robust orthogonal complement principal component analysis (ROC-PCA) is
proposed. The framework combines the popular sparsity-enforcing and low-rank
regularization techniques to deal with row-wise outliers as well as
element-wise outliers. A non-asymptotic oracle inequality guarantees the
accuracy and high breakdown performance of ROC-PCA in finite samples. To tackle
the computational challenges, an efficient algorithm is developed on the basis
of Stiefel manifold optimization and iterative thresholding. Furthermore, a
batch variant is proposed to significantly reduce the cost in ultra high
dimensions. The paper also points out a pitfall of a common practice of SVD
reduction in robust PCA. Experiments show the effectiveness and efficiency of
ROC-PCA on both synthetic and real data.
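The two computational building blocks named above, Stiefel manifold optimization and iterative thresholding, rest on simple primitives. A minimal sketch of both, with no claim to reproduce the paper's exact update rules:

```python
import numpy as np

def stiefel_project(M):
    # Nearest matrix with orthonormal columns (the polar factor), via SVD:
    # argmin_Q ||Q - M||_F subject to Q^T Q = I.
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def soft_threshold(X, lam):
    # Element-wise shrinkage operator used to enforce sparsity on the
    # outlier component in sparsity-regularized formulations.
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 3))
Q = stiefel_project(M)          # Q has orthonormal columns: Q.T @ Q = I
T = soft_threshold(np.array([2.0, -0.5, 1.0]), 1.0)
```

An algorithm of the ROC-PCA type would alternate steps of this kind: re-orthonormalize the subspace estimate on the Stiefel manifold, then threshold the residuals to isolate outliers.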
Multi-Rank Sparse and Functional PCA: Manifold Optimization and Iterative Deflation Techniques
We consider the problem of estimating multiple principal components using the
recently-proposed Sparse and Functional Principal Components Analysis (SFPCA)
estimator. We first propose an extension of SFPCA which estimates several
principal components simultaneously using manifold optimization techniques to
enforce orthogonality constraints. While effective, this approach is
computationally burdensome so we also consider iterative deflation approaches
which take advantage of existing fast algorithms for rank-one SFPCA. We show
that alternative deflation schemes can more efficiently extract signal from the
data, in turn improving estimation of subsequent components. Finally, we
compare the performance of our manifold optimization and deflation techniques
in a scenario where orthogonality does not hold and find that they still lead
to significantly improved performance.

Comment: To appear in IEEE CAMSAP 201
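To illustrate why the choice of deflation scheme matters, the following sketch shows projection deflation, a standard scheme for removing an estimated component from a covariance matrix before extracting the next one. It is a generic illustration, not the SFPCA-specific schemes compared in the paper.

```python
import numpy as np

def projection_deflation(S, v):
    # Remove the direction v (assumed unit-norm) from the covariance S by
    # projecting onto its orthogonal complement. Unlike naive Hotelling
    # deflation, this keeps S positive semidefinite even when v is only an
    # approximate (e.g. sparse or smoothed) eigenvector.
    P = np.eye(S.shape[0]) - np.outer(v, v)
    return P @ S @ P

def leading_eigvec(S):
    vals, vecs = np.linalg.eigh(S)   # ascending order; take the last column
    return vecs[:, -1]

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 8))
S = A.T @ A / 40                     # sample covariance
v1 = leading_eigvec(S)
S1 = projection_deflation(S, v1)
v2 = leading_eigvec(S1)              # next component, orthogonal to v1
```

After deflation, v1 lies in the null space of S1, so the second extracted direction carries no signal already attributed to the first, which is the property the paper's improved schemes exploit.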
Advances in Spectral Learning with Applications to Text Analysis and Brain Imaging
Spectral learning algorithms are becoming increasingly popular in data-rich domains, driven in part by recent advances in large scale randomized SVD, and in spectral estimation of Hidden Markov Models. Extensions of these methods lead to statistical estimation algorithms which are not only fast, scalable, and useful on real data sets, but are also provably correct.
Following this line of research, we make two contributions. First, we
propose a set of spectral algorithms for text analysis and natural
language processing. In particular, we propose fast and scalable
spectral algorithms for learning word embeddings -- low dimensional
real vectors (called Eigenwords) that capture the “meaning” of words from their context. Second, we show how similar spectral methods can be applied to analyzing brain images.
State-of-the-art approaches to learning word embeddings are slow to
train or lack theoretical grounding. We propose three spectral
algorithms that overcome these limitations. All three algorithms
harness the multi-view nature of text data, i.e. the left and right
context of each word, and share three characteristics:
1) They are fast to train and scalable.
2) They have strong theoretical properties.
3) They can induce context-specific embeddings, i.e. different embeddings for “river bank” and “Bank of America”.
They also have lower sample complexity and hence higher statistical
power for rare words. We provide theory which establishes
relationships between these algorithms and optimality criteria for the
estimates they provide. We also perform thorough qualitative and
quantitative evaluation of Eigenwords and demonstrate their superior performance over state-of-the-art approaches.
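A toy sketch of the general spectral recipe behind Eigenword-style embeddings, namely an SVD of a scaled word-context co-occurrence matrix, is given below. The corpus, the window size of one, and the frequency scaling are illustrative assumptions, not the dissertation's exact algorithms.

```python
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
w2i = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Count left/right context co-occurrences (window of 1 on each side),
# keeping the two views in separate column blocks.
C = np.zeros((V, 2 * V))
for t, w in enumerate(corpus):
    if t > 0:
        C[w2i[w], w2i[corpus[t - 1]]] += 1       # left-context view
    if t < len(corpus) - 1:
        C[w2i[w], V + w2i[corpus[t + 1]]] += 1   # right-context view

# Scale rows by inverse square-root frequency (a crude stand-in for the
# CCA-style whitening used by spectral embedding methods).
freq = C.sum(axis=1, keepdims=True)
Cs = C / np.sqrt(np.maximum(freq, 1))

U, s, Vt = np.linalg.svd(Cs, full_matrices=False)
embeddings = U[:, :3]   # low-dimensional Eigenword-style vectors
```

Because the heavy lifting is a single (randomizable) SVD, this recipe scales to large corpora, which is the source of the speed advantage claimed above.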
Next, we turn to the task of using spectral learning methods for brain imaging data.
Methods like Sparse Principal Component Analysis (SPCA), Non-negative Matrix Factorization (NMF) and Independent Component Analysis (ICA) have been used to obtain state-of-the-art accuracies in a variety of problems in machine learning. However, their usage in brain imaging, though increasing, is limited by the fact that they are used as out-of-the-box techniques and are seldom tailored to the domain specific constraints and knowledge pertaining to medical imaging, which leads to difficulties in interpretation of results.
In order to address the above shortcomings, we propose
Eigenanatomy (EANAT), a general framework for sparse matrix factorization. Its goal is to statistically learn the boundaries of
and connections between brain regions by weighing both the data and prior neuroanatomical knowledge.
Although EANAT incorporates some neuroanatomical prior knowledge in the form of connectedness and smoothness constraints, it can still be difficult for clinicians to interpret the results in specific domains where network-specific hypotheses exist. We thus extend EANAT and present a novel framework for prior-constrained sparse decomposition of matrices derived from brain imaging data, called Prior Based Eigenanatomy (p-Eigen). We formulate our solution as a prior-constrained l1-penalized (sparse) principal component analysis. Experimental evaluation confirms that p-Eigen extracts biologically relevant, patient-specific functional parcels and that it significantly aids classification of Mild Cognitive Impairment when compared to state-of-the-art competing approaches.
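As a rough illustration of the sparse-PCA machinery that p-Eigen builds on (without the neuroanatomical priors), a truncated power iteration for a single sparse principal component might look like this; the sparsity level `k` and the synthetic covariance are hypothetical choices.

```python
import numpy as np

def truncated_power(S, k=4, n_iter=50):
    # Power iteration with hard thresholding: after each multiplication by
    # the covariance S, keep only the k largest-magnitude entries and
    # renormalize. A common heuristic for sparse PCA; priors such as
    # p-Eigen's anatomical constraints would enter as additional penalties.
    p = S.shape[0]
    v = np.ones(p) / np.sqrt(p)          # uninformative starting vector
    for _ in range(n_iter):
        w = S @ v
        idx = np.argsort(np.abs(w))[:-k] # indices of the p-k smallest entries
        w[idx] = 0.0
        v = w / np.linalg.norm(w)
    return v

# Synthetic covariance with a 4-sparse spike on the first coordinates
p = 20
u = np.zeros(p); u[:4] = 0.5
S = 0.1 * np.eye(p) + 3 * np.outer(u, u)
v = truncated_power(S, k=4)
```

On this toy problem the iteration recovers the planted sparse direction exactly; in imaging applications the appeal is that each nonzero loading maps onto an interpretable set of voxels.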