    Comparison of blind source separation methods in fast somatosensory-evoked potential detection

    Blind source separation (BSS) is a promising method for extracting somatosensory-evoked potentials (SEPs). Although various BSS algorithms are available for SEP extraction, few studies have addressed the performance differences between them. In this study, we compared the performance of several typical BSS algorithms on SEP extraction in both computer simulations and a clinical experiment. The algorithms compared were second-order blind identification (SOBI), estimation of signal parameters via rotational invariance techniques (ESPRIT), the algorithm for multiple unknown signals extraction (AMUSE), joint approximate diagonalization of eigenmatrices (JADE), extended Infomax, and fast independent component analysis (FastICA). The performance of each BSS algorithm was measured by the correlation coefficient between the true and the extracted SEP signals. The simulation study showed significant performance differences among the algorithms. In summary, second-order blind identification using six covariance matrices (SOBI6) was recommended as the most appropriate BSS method for fast SEP extraction from noisy backgrounds. Copyright © 2011 by the American Clinical Neurophysiology Society.
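    The performance metric described in the abstract is concrete enough to sketch. Below is a minimal Python sketch, not the paper's code, that scores a single BSS algorithm the way the simulation study does: mix a known simulated SEP with noise sources, unmix, and take the correlation coefficient between the true SEP and the best-matching extracted component. FastICA (from scikit-learn) stands in for the full algorithm set; SOBI, AMUSE, JADE, and extended Infomax would need other implementations, and all signal parameters here are hypothetical.

        # Minimal sketch (not the paper's code): score one BSS algorithm by the
        # correlation between a known simulated SEP and the best-matching
        # extracted component, following the abstract's performance metric.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_samples, fs = 2000, 1000.0
        t = np.arange(n_samples) / fs

        # Hypothetical simulated sources: an SEP-like transient plus two noise sources.
        sep = np.exp(-((t - 0.02) ** 2) / (2 * 0.005 ** 2)) * np.sin(2 * np.pi * 300 * t)
        noise1 = rng.standard_normal(n_samples)
        noise2 = np.sin(2 * np.pi * 50 * t)          # power-line interference
        S = np.column_stack([sep, noise1, noise2])

        A = rng.standard_normal((3, 3))              # random mixing matrix
        X = S @ A.T                                  # observed multichannel recording

        ica = FastICA(n_components=3, random_state=0)
        components = ica.fit_transform(X)            # shape (n_samples, n_components)

        # Correlation between the true SEP and each extracted component;
        # take the best absolute match as the algorithm's score.
        corrs = [abs(np.corrcoef(sep, components[:, k])[0, 1]) for k in range(3)]
        print(f"best |correlation| with true SEP: {max(corrs):.3f}")

    In the study, a score of this kind would be computed per algorithm and compared across algorithms; the single FastICA run here only illustrates the scoring pipeline.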

    Tensor decompositions of higher-order correlations by nonlinear Hebbian learning

    Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
    https://proceedings.neurips.cc/paper/2021/hash/5e34a2b4c23f4de585fb09a7f546f527-Abstract.htm
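    As a concrete illustration of the rule family the abstract describes, here is a minimal Python sketch under stated assumptions, not the paper's implementation: a single neuron whose Hebbian update is nonlinear in its output, f(y) = y^p, with a hard norm constraint on the weights. For p = 1 this reduces to an Oja-style rule that learns the top eigenvector of the input covariance; per the abstract, p > 1 instead drives the weights toward eigenvectors of higher-order input correlation tensors. The power p, learning rate, and synthetic data are hypothetical choices.

        # Minimal sketch (assumptions, not the paper's code): a generalized
        # nonlinear Hebbian rule for a single neuron with feedforward weights w.
        import numpy as np

        def nonlinear_hebbian(X, p=3, eta=1e-3, n_epochs=50, seed=0):
            """X: (n_samples, n_inputs) centered inputs; returns learned weights w."""
            rng = np.random.default_rng(seed)
            w = rng.standard_normal(X.shape[1])
            w /= np.linalg.norm(w)
            for _ in range(n_epochs):
                for x in X:
                    y = w @ x                     # neuron output (linear readout)
                    w += eta * (y ** p) * x       # Hebbian update, nonlinear in output
                    w /= np.linalg.norm(w)        # hard norm constraint
            return w

        # Usage sketch on synthetic inputs with one dominant direction.
        rng = np.random.default_rng(1)
        u = np.array([1.0, 0.0, 0.0])
        X = rng.standard_normal((500, 3)) + 2.0 * rng.standard_normal((500, 1)) * u
        w = nonlinear_hebbian(X - X.mean(axis=0), p=3)
        print("learned weight vector:", np.round(w, 3))

    Moving the nonlinearity to the input side (or to both sides) would change which higher-order correlation tensor is decomposed, consistent with the abstract's point that the decomposition depends on where the nonlinearity sits in the plasticity rule.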