Kernel learning over the manifold of symmetric positive definite matrices for dimensionality reduction in a BCI application
In this paper, we propose a kernel for nonlinear dimensionality reduction over the manifold of Symmetric Positive Definite (SPD) matrices in a Motor Imagery (MI)-based Brain Computer Interface (BCI) application. The proposed kernel, based on Riemannian geometry, aims to preserve the topology of data points in the feature space; topology preservation is the main challenge in nonlinear dimensionality reduction (NLDR). Our main idea is to reduce the non-Euclidean character of the manifold by modifying its volume elements: we apply a conformal transform over a data-dependent isometric mapping, reducing the negative eigenfraction, to learn a data-dependent kernel over the Riemannian manifold. Multiple experiments were carried out using the proposed kernel for dimensionality reduction of the SPD matrices that describe the EEG signals of dataset IIa from BCI competition IV.
The experiments show that this kernel adapts to the input data and yields promising results in comparison with the most popular manifold learning methods and with the Common Spatial Pattern (CSP) technique, a reference algorithm in BCI competitions. The proposed kernel is particularly strong where data points have a complex, nonlinearly separable distribution.
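As a rough illustration of the geometry this abstract relies on (not the authors' actual kernel), the sketch below computes the standard affine-invariant Riemannian distance between SPD matrices and the negative eigenfraction of the MDS-style centered Gram matrix built from those distances; the function names and the MDS construction are our own assumptions.

```python
import numpy as np

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F."""
    w, V = np.linalg.eigh(A)
    A_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    M = A_inv_sqrt @ B @ A_inv_sqrt
    ew = np.linalg.eigvalsh(M)
    return np.sqrt(np.sum(np.log(ew) ** 2))

def negative_eigenfraction(D):
    """Fraction of the spectrum of the double-centered Gram matrix
    (classical MDS construction from squared distances D) that is
    negative -- a measure of how non-Euclidean the distances are."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J
    ev = np.linalg.eigvalsh(G)
    return np.sum(np.abs(ev[ev < 0])) / np.sum(np.abs(ev))
```

A value near zero means the distance matrix embeds almost isometrically into Euclidean space; the abstract's conformal transform is aimed at driving this quantity down.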
Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
This paper introduces sparse coding and dictionary learning for Symmetric
Positive Definite (SPD) matrices, which are often used in machine learning,
computer vision and related areas. Unlike traditional sparse coding schemes
that work in vector spaces, in this paper we discuss how SPD matrices can be
described by a sparse combination of dictionary atoms, where the atoms are also
SPD matrices. We propose to seek sparse coding by embedding the space of SPD
matrices into Hilbert spaces through two types of Bregman matrix divergences.
This not only leads to an efficient way of performing sparse coding, but also
an online and iterative scheme for dictionary learning. We apply the proposed
methods to several computer vision tasks where images are represented by region
covariance matrices. Our proposed algorithms outperform state-of-the-art
methods on a wide range of classification tasks, including face recognition,
action recognition, material classification, and texture categorization.
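Two Bregman matrix divergences commonly used on SPD matrices are the log-det (Burg) divergence and the symmetric Stein divergence; the latter induces a kernel that can embed SPD matrices into a Hilbert space. The sketch below is a minimal illustration of these two quantities, assuming these particular divergences; the function names are ours and this is not the paper's full sparse-coding or dictionary-learning algorithm.

```python
import numpy as np

def burg_divergence(X, Y):
    """Log-det (Burg) Bregman matrix divergence between SPD matrices:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n."""
    n = X.shape[0]
    M = X @ np.linalg.inv(Y)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

def stein_kernel(X, Y, beta=1.0):
    """Kernel from the symmetric Stein divergence
    S(X, Y) = log det((X+Y)/2) - 0.5 * (log det X + log det Y),
    via k(X, Y) = exp(-beta * S(X, Y))."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return np.exp(-beta * (ld_mid - 0.5 * (ld_x + ld_y)))
```

With such a kernel, sparse codes over SPD dictionary atoms can be sought in the induced feature space instead of the curved manifold itself, which is the embedding idea the abstract describes.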
Log-Euclidean Bag of Words for Human Action Recognition
Representing videos by densely extracted local space-time features has
recently become a popular approach for analysing actions. In this paper, we
tackle the problem of categorising human actions by devising Bag of Words (BoW)
models based on covariance matrices of spatio-temporal features, with the
features formed from histograms of optical flow. Since covariance matrices form
a special type of Riemannian manifold, the space of Symmetric Positive Definite
(SPD) matrices, non-Euclidean geometry should be taken into account while
discriminating between covariance matrices. To this end, we propose to embed
SPD manifolds into Euclidean spaces via a diffeomorphism and extend the BoW
approach to its Riemannian version. The proposed BoW approach takes into
account the manifold geometry of SPD matrices during the generation of the
codebook and histograms. Experiments on challenging human action datasets show
that the proposed method obtains notable improvements in discrimination
accuracy, in comparison to several state-of-the-art methods.
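The diffeomorphism in question is the matrix logarithm, which maps the SPD manifold to the vector space of symmetric matrices (the log-Euclidean framework). A minimal sketch of that embedding and a hard-assignment BoW histogram built on top of it follows; the vectorization convention and function names are our own assumptions, not the paper's exact pipeline.

```python
import numpy as np

def log_euclidean_vec(C):
    """Embed an SPD matrix via the matrix logarithm (a diffeomorphism
    onto the symmetric matrices), then vectorize: diagonal entries as-is,
    off-diagonal entries scaled by sqrt(2) so the Euclidean norm of the
    vector equals the Frobenius norm of log(C)."""
    w, V = np.linalg.eigh(C)
    L = V @ np.diag(np.log(w)) @ V.T
    iu = np.triu_indices(C.shape[0], k=1)
    return np.concatenate([np.diag(L), np.sqrt(2) * L[iu]])

def bow_histogram(spd_mats, codebook):
    """Hard-assignment Bag-of-Words histogram: map each SPD matrix to
    log-Euclidean space, assign it to the nearest codeword (codebook rows
    are log-Euclidean vectors), and normalize the counts."""
    vecs = np.stack([log_euclidean_vec(C) for C in spd_mats])
    d = np.linalg.norm(vecs[:, None, :] - codebook[None, :, :], axis=2)
    counts = np.bincount(np.argmin(d, axis=1),
                         minlength=codebook.shape[0]).astype(float)
    return counts / counts.sum()
```

In practice the codebook would come from clustering (e.g. k-means) over the log-Euclidean vectors of training covariance matrices, so both codebook generation and histogram assignment respect the manifold geometry, as the abstract states.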