
    Canonical Polyadic Decomposition With Auxiliary Information for Brain–Computer Interface

    Physiological signals are often organized along multiple dimensions (e.g., channel, time, task, and 3-D voxel), so it is preferable to preserve this original organization during processing. Unlike vector-based methods, which destroy the data structure, canonical polyadic decomposition (CPD) processes physiological signals as multiway arrays, thereby accounting for the relationships between dimensions and preserving the structural information contained in the signal. At present, CPD is typically utilized as an unsupervised feature-extraction method in classification problems; a classifier, such as a support vector machine, is then required to classify the extracted features, so the classification task is accomplished in two isolated steps. We propose a supervised CPD that directly incorporates auxiliary label information during the decomposition, by which a classification task can be achieved without the extra step of classifier training. The proposed method merges decomposition and classifier learning, so it simplifies the classification procedure compared with performing decomposition and classification separately. To evaluate the performance of the proposed method, three kinds of signals were used: synthetic, EEG, and MEG. The results of evaluations on both synthetic and real signals demonstrate that the proposed method is effective and efficient.
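
    For context, here is a minimal sketch of the conventional two-step baseline the abstract contrasts with: unsupervised CPD feature extraction followed by a separate SVM. The array shapes, the rank, and the use of tensorly/scikit-learn are illustrative assumptions, not details from the paper.

```python
import numpy as np
from tensorly.decomposition import parafac
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 32, 128))   # hypothetical trials x channels x time EEG block
y = rng.integers(0, 2, size=40)          # binary task labels

# Step 1 (unsupervised): CP-decompose the multiway array; the trial-mode
# factor matrix serves as a structure-preserving feature representation.
weights, factors = parafac(X, rank=5)
trial_features = factors[0]              # shape (40, 5): one loading vector per trial

# Step 2 (separate classifier): labels enter only here, not in the decomposition,
# which is exactly the decoupling the proposed supervised CPD removes.
clf = SVC(kernel="linear").fit(trial_features, y)
print(clf.score(trial_features, y))
```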

    Bayesian Robust Tensor Factorization for Incomplete Multiway Data

    We propose a generative model for robust tensor factorization in the presence of both missing data and outliers. The objective is to explicitly infer the underlying low-CP-rank tensor capturing the global information and a sparse tensor capturing the local information (also regarded as outliers), thus providing a robust predictive distribution over the missing entries. The low-CP-rank tensor is modeled by multilinear interactions between multiple latent factors on which column sparsity is enforced by a hierarchical prior, while the sparse tensor is modeled by a hierarchical view of the Student-t distribution that associates an individual hyperparameter with each element independently. For model learning, we develop an efficient closed-form variational inference under a fully Bayesian treatment, which effectively prevents overfitting and scales linearly with the data size. In contrast to existing related work, our method performs model selection automatically and implicitly, without the need for parameter tuning. More specifically, it can discover the ground truth of the CP rank and automatically adapt the sparsity-inducing priors to various types of outliers. In addition, the tradeoff between the low-rank approximation and the sparse representation can be optimized in the sense of maximum model evidence. Extensive experiments and comparisons with many state-of-the-art algorithms on both synthetic and real-world datasets demonstrate the superiority of our method from several perspectives.
    Comment: in IEEE Transactions on Neural Networks and Learning Systems, 201
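
    The full Bayesian machinery (hierarchical priors, closed-form variational updates) is beyond a short snippet, but a crude point-estimate sketch of the underlying low-rank-plus-sparse idea looks like this: a masked CP fit alternated with soft-thresholding of the residuals to form the sparse outlier tensor. The rank, the threshold rule, and all names here are assumptions for illustration; the paper itself infers these quantities automatically.

```python
import numpy as np
from tensorly.decomposition import parafac
from tensorly.cp_tensor import cp_to_tensor

rng = np.random.default_rng(1)
Y = rng.standard_normal((20, 20, 20))
mask = (rng.random((20, 20, 20)) > 0.3).astype(float)   # 1 = observed, 0 = missing

S = np.zeros_like(Y)                                    # sparse outlier tensor
for _ in range(5):
    cp = parafac((Y - S) * mask, rank=3, mask=mask)     # masked low-CP-rank fit
    L = cp_to_tensor(cp)                                # low-rank reconstruction
    R = (Y - L) * mask                                  # residuals on observed entries
    tau = 2.5 * R[mask > 0].std()                       # ad-hoc threshold, a stand-in
                                                        # for the Student-t outlier prior
    S = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)   # soft-threshold -> sparse part

Y_hat = L + S   # robust reconstruction; missing entries are predicted by L
```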

    Thoughts on Neurophysiological Signal Analysis and Classification

    The neurophysiological signal is a crucial intermediary through which brain activity can be quantitatively measured and brain mechanisms can be revealed. In particular, non-invasive neurophysiological signals, such as the electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI), are frequently utilised in a variety of studies because they can be recorded without harm to the human brain while conveying abundant information pertaining to brain activity. The recorded neurophysiological signals are analysed to mine meaningful information for the understanding of brain mechanisms, or are classified to distinguish different patterns (e.g., different cognitive states, or brain diseases versus healthy controls). To date, remarkable progress has been made in both the analysis and the classification of neurophysiological signals, but there is no room for complacency: consistent effort ought to be devoted to advancing this research. In this paper, I set out my thoughts on promising future directions in neurophysiological signal analysis and classification, based on current developments and achievements. I elucidate these thoughts after brief summaries of the relevant background, achievements, and tendencies. Reflecting my personal selection and preferences, I mainly focus on brain connectivity, multidimensional arrays (tensors), multi-modality, multiple-task classification, deep learning, big data, and naturalistic experiments. I hope these thoughts help to inspire new ideas and contribute in some way to research on the analysis and classification of neurophysiological signals.

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
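
    As a concrete illustration of the TT format the monograph emphasizes, here is a minimal TT-SVD sketch in plain NumPy: the tensor is swept mode by mode, each unfolding is truncated by SVD, and the result is a chain of small 3-way cores. The function names and the fixed max_rank truncation are illustrative choices, not the monograph's notation.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose `tensor` into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
    shape = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))                          # rank truncation
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (s[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_to_tensor(cores):
    """Contract the core chain back into a full tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out.reshape([G.shape[1] for G in cores])

X = np.random.default_rng(2).standard_normal((4, 5, 6, 7))
cores = tt_svd(X, max_rank=3)
print([G.shape for G in cores])                            # small 3-way cores
print(np.linalg.norm(tt_to_tensor(cores) - X))             # approximation error
```

    The storage cost of the cores grows linearly in the number of modes (versus exponentially for the full tensor), which is the "super-compression" the abstract refers to.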


    Multi-Kernel Capsule Network for Schizophrenia Identification

    Schizophrenia seriously affects patients' quality of life. To date, both simple (e.g., linear discriminant analysis) and complex (e.g., deep neural network) machine learning methods have been utilized to identify schizophrenia from functional connectivity features. The existing simple methods need two separate steps (i.e., feature extraction and classification), which precludes jointly tuning feature extraction and classifier training for the best performance. The complex methods integrate the two steps and can be tuned simultaneously for optimal performance, but they require a much larger amount of training data. To overcome these drawbacks, we proposed a multi-kernel capsule network (MKCapsnet) designed with the brain's anatomical structure in mind. Kernel sizes were set to match the partition sizes of the brain's anatomical parcellation in order to capture interregional connectivity at varying scales. Inspired by the dropout strategy widely used in deep learning, we developed capsule dropout in the capsule layer to prevent overfitting. The comparison results showed that the proposed method outperformed state-of-the-art methods. In addition, we compared performance under different parameter settings and illustrated the routing process to reveal characteristics of the proposed method. MKCapsnet is promising for schizophrenia identification. Our study is the first to utilize a capsule network for analyzing functional connectivity derived from magnetic resonance imaging (MRI), and it proposes a novel multi-kernel capsule structure informed by brain anatomical parcellation, which could offer a new way to reveal brain mechanisms. We also provide useful information on parameter settings, which is informative for further studies applying capsule networks to the classification of other neurophysiological signals.
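
    A hypothetical PyTorch sketch of the multi-kernel idea: parallel convolution branches, whose kernel sizes would in the paper be matched to anatomical partition sizes, applied to a functional-connectivity matrix, plus a crude channel-wise stand-in for capsule dropout. The kernel sizes, channel counts, and dropout rate below are illustrative assumptions, not the paper's settings, and the capsule routing step is omitted.

```python
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    def __init__(self, kernel_sizes=(3, 5, 9), out_ch=8, drop_p=0.2):
        super().__init__()
        # One branch per kernel size; padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(1, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.drop_p = drop_p

    def forward(self, x):                       # x: (batch, 1, regions, regions)
        # Concatenate the multi-scale feature maps from all branches.
        caps = torch.cat([b(x) for b in self.branches], dim=1)
        if self.training:                       # crude "capsule dropout": zero whole channels
            keep = (torch.rand(caps.shape[:2], device=caps.device) > self.drop_p).float()
            caps = caps * keep[:, :, None, None]
        return caps

fc = torch.randn(4, 1, 90, 90)                  # e.g., 90-region AAL-style connectivity maps
print(MultiKernelBlock()(fc).shape)             # -> torch.Size([4, 24, 90, 90])
```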