365 research outputs found

    Representation and Characterization of Non-Stationary Processes by Dilation Operators and Induced Shape Space Manifolds

    We introduce a new vision of stochastic processes through the geometry induced by dilation. The dilation matrices of a given process, obtained as a composition of rotation matrices, contain the measure information in condensed form. Particularly interesting is the fact that the dilation matrices can be obtained regardless of the stationarity of the underlying process. When the process is stationary, the construction coincides with the Naimark dilation and only one rotation matrix is computed; when the process is non-stationary, a set of rotation matrices is computed. In particular, any periodicity of the correlation function that appears in some classes of signals is transmitted to the set of dilation matrices. These rotation matrices, which can be arbitrarily close to each other depending on the sampling or rescaling of the signal, are seen as a distinctive feature of the signal. To study this sequence of matrices, and guided by the possibility of rescaling the signal, the appropriate geometric framework for the dilation-theoretic results is the space of curves on manifolds, that is, the set of all curves lying on a base manifold. To give a complete picture of this space, a metric and the derived geodesic equation are provided. The general results are then specialized to the case where the base manifold is the Lie group of rotation matrices. The notion of the shape of a curve is formalized as the set of equivalence classes of curves given by the quotient of the space of curves by the increasing diffeomorphisms. The metric on the space of curves naturally extends to the space of shapes and enables comparison between shapes. Comment: 19 pages, draft paper
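
    A minimal numerical sketch of the kind of object this abstract describes, assuming (as in the classical Schur/Levinson picture) that the elementary rotations can be parametrized by the partial-autocorrelation (reflection) coefficients of the process; the function names and this reduction are illustrative, not the paper's actual dilation-theoretic construction:

```python
import numpy as np

def reflection_coefficients(r):
    """Levinson-Durbin recursion: autocorrelation r[0..p] -> reflection coefficients k_1..k_p."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    e = r[0]
    ks = []
    for m in range(1, p + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / e
        a_new = a.copy()
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        e *= 1.0 - k ** 2
        ks.append(k)
    return ks

def rotation_sequence(r):
    """Turn each reflection coefficient into an elementary 2x2 rotation;
    their composition plays the role of a (Naimark-like) dilation matrix."""
    rots = []
    for k in reflection_coefficients(r):
        theta = np.arccos(np.clip(-k, -1.0, 1.0))  # sign convention here is illustrative
        c, s = np.cos(theta), np.sin(theta)
        rots.append(np.array([[c, -s], [s, c]]))
    return rots
```

    For a stationary process a single correlation sequence would yield one such family; for a non-stationary process the construction would be repeated per time index, giving the curve of rotation matrices that the abstract studies.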

    Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals

    When dealing with electro- or magnetoencephalography (M/EEG) records, many supervised prediction tasks are solved by working with covariance matrices that summarize the signals. Learning from these matrices requires Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees. Then, we take advantage of its properties and kernel methods to apply this distance to brain-age prediction from MEG data and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain-Computer Interface applications.
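
    The abstract does not spell out the construction, but a common way to realize a sliced-Wasserstein distance on SPD matrices is to embed them with the matrix logarithm (a log-Euclidean surrogate for the intrinsic definition the paper develops), project onto random directions, and average 1D Wasserstein distances. A minimal sketch under those assumptions, all names illustrative:

```python
import numpy as np

def spd_log_vec(X):
    """Embed an SPD matrix into Euclidean space: matrix log via eigendecomposition,
    then vectorize with sqrt(2) off-diagonal weights so the embedding is isometric."""
    w, U = np.linalg.eigh(X)
    L = (U * np.log(w)) @ U.T
    iu = np.triu_indices(X.shape[0], k=1)
    return np.concatenate([np.diag(L), np.sqrt(2.0) * L[iu]])

def sliced_wasserstein_spd(As, Bs, n_proj=200, seed=0):
    """SW-2 distance between two samples of SPD matrices.
    Assumes len(As) == len(Bs); unequal sizes need quantile interpolation."""
    rng = np.random.default_rng(seed)
    XA = np.stack([spd_log_vec(A) for A in As])
    XB = np.stack([spd_log_vec(B) for B in Bs])
    thetas = rng.normal(size=(n_proj, XA.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    sw2 = 0.0
    for th in thetas:
        # 1D Wasserstein-2 between projected samples = L2 between sorted projections
        sw2 += np.mean((np.sort(XA @ th) - np.sort(XB @ th)) ** 2)
    return np.sqrt(sw2 / n_proj)
```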

    Graph Neural Networks on SPD Manifolds for Motor Imagery Classification: A Perspective from the Time-Frequency Analysis

    Motor imagery (MI) classification is one of the most widely studied research topics in electroencephalography (EEG)-based brain-computer interfaces (BCIs), with substantial industrial value. The design of MI-EEG classifiers has changed fundamentally over the past twenty years, and their performance has gradually improved. In particular, owing to the need to characterize the non-Euclidean nature of the signals, the first geometric deep learning (GDL) framework, Tensor-CSPNet, has recently emerged in BCI research. In essence, Tensor-CSPNet is a deep learning classifier built on the second-order statistics of EEG signals. In contrast to first-order statistics, these second-order statistics are the classical treatment of EEG signals, and the discriminative information they contain is adequate for MI-EEG classification. In this study, we present another GDL classifier for MI-EEG classification, called Graph-CSPNet, which uses graph-based techniques to characterize EEG signals in the time and frequency domains simultaneously. It is designed from the perspective of time-frequency analysis, which profoundly influences signal processing and BCI studies. Compared with Tensor-CSPNet, the architecture of Graph-CSPNet is further simplified and offers more flexibility in handling variable time-frequency resolution for signal segmentation, allowing it to capture localized fluctuations. In the experiments, Graph-CSPNet is evaluated on subject-specific scenarios from two widely used MI-EEG datasets and achieves near-optimal classification accuracies. Comment: 16 pages, 5 figures, 9 tables; this work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
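
    As a rough illustration of the second-order, time-frequency representation the abstract refers to, one can band-pass filter the EEG, split each band into time windows, and take a regularized covariance per segment; these SPD matrices are what graph-based layers would then operate on. A sketch under those assumptions (the segmentation scheme and names are hypothetical, not Graph-CSPNet's actual pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def tf_covariances(eeg, fs, bands, n_windows):
    """eeg: (channels, samples) -> (len(bands) * n_windows, channels, channels) SPD tensor."""
    C, T = eeg.shape
    win = T // n_windows
    covs = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        xf = filtfilt(b, a, eeg, axis=1)
        for w in range(n_windows):
            x = xf[:, w * win:(w + 1) * win]
            cov = (x @ x.T) / x.shape[1]
            covs.append(cov + 1e-6 * np.eye(C))  # shrinkage keeps the matrix SPD
    return np.stack(covs)

# e.g. tf_covariances(trial, fs=250, bands=[(4, 8), (8, 12), (12, 30)], n_windows=4)
```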

    Model Transport: Towards Scalable Transfer Learning on Manifolds

    We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply apply ordinary R^n transfer-learning ideas, the manifold structure prevents this. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other problems of statistics on manifolds and in computer vision. At first, this straightforward idea seems to suffer from an obvious shortcoming: transporting large datasets is prohibitively expensive, hindering scalability. Fortunately, with our approach, we never transport data. Rather, we show how the statistical models themselves can be transported, and prove that for the tangent-space models above, the transport “commutes” with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test set. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image descriptors.
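
    For one concrete manifold, the SPD matrices under the affine-invariant metric, parallel transport along the geodesic from A to B has the closed form V -> E V E^T with E = (B A^{-1})^{1/2}. A hedged sketch of "transporting the model instead of the data": moving a PCA basis of tangent vectors from a source mean to a target mean (the paper's framework covers general manifolds; the names here are illustrative):

```python
import numpy as np
from scipy.linalg import sqrtm

def transport_spd(A, B, V):
    """Parallel transport of tangent vector V from T_A to T_B (affine-invariant metric)."""
    E = np.real(sqrtm(B @ np.linalg.inv(A)))  # discard negligible imaginary round-off
    return E @ V @ E.T

def transport_pca_model(A, B, components):
    """Transport a tangent-space PCA model: move each principal direction,
    rather than transporting and re-fitting the raw data."""
    return [transport_spd(A, B, V) for V in components]
```

    The "commutes with learning" property means that fitting PCA on transported data agrees with transporting the fitted PCA, which is what makes this model-only transport sufficient.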

    A Survey of Geometric Optimization for Deep Learning: From Euclidean Space to Riemannian Manifold

    Although Deep Learning (DL) has achieved success in complex Artificial Intelligence (AI) tasks, it suffers from several notorious problems (e.g., feature redundancy and vanishing or exploding gradients), since updating parameters in Euclidean space cannot fully exploit the geometric structure of the solution space. As a promising alternative, Riemannian-based DL uses geometric optimization to update parameters on Riemannian manifolds and can leverage the underlying geometric information. Accordingly, this article presents a comprehensive survey of the application of geometric optimization in DL. First, it introduces the basic procedure of geometric optimization, including various geometric optimizers and some concepts of Riemannian manifolds. It then investigates the application of geometric optimization to different DL networks across various AI tasks, e.g., convolutional neural networks, recurrent neural networks, transfer learning, and optimal transport. Additionally, typical public toolboxes that implement optimization on manifolds are discussed. Finally, the article compares the performance of different deep geometric optimization methods under image recognition scenarios. Comment: 41 pages
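
    A minimal sketch of the basic loop such a survey covers, here Riemannian gradient descent on the unit sphere: project the Euclidean gradient onto the tangent space, take a step, and retract back onto the manifold (the manifold and the example problem are chosen for illustration only):

```python
import numpy as np

def riemannian_gd_sphere(euclid_grad, x0, lr=0.1, steps=500):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        g = euclid_grad(x)
        rg = g - (g @ x) * x          # project gradient onto the tangent space at x
        x = x - lr * rg               # step in the tangent direction
        x /= np.linalg.norm(x)        # retraction: map back onto the sphere
    return x

# Example: leading eigenvector of a symmetric M by minimizing f(x) = -x^T M x
M = np.array([[2.0, 0.5], [0.5, 1.0]])
x_star = riemannian_gd_sphere(lambda x: -2.0 * M @ x, np.array([1.0, 0.3]))
```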