
    Tensor decompositions of higher-order correlations by nonlinear Hebbian learning

    Biological synaptic plasticity exhibits nonlinearities that are not accounted for by classic Hebbian learning rules. Here, we introduce a simple family of generalized nonlinear Hebbian learning rules. We study the computations implemented by their dynamics in the simple setting of a neuron receiving feedforward inputs. These nonlinear Hebbian rules allow a neuron to learn tensor decompositions of its higher-order input correlations. The particular input correlation decomposed and the form of the decomposition depend on the location of nonlinearities in the plasticity rule. For simple, biologically motivated parameters, the neuron learns eigenvectors of higher-order input correlation tensors. We prove that tensor eigenvectors are attractors and determine their basins of attraction. We calculate the volume of those basins, showing that the dominant eigenvector has the largest basin of attraction. We then study arbitrary learning rules and find that any learning rule that admits a finite Taylor expansion into the neural input and output also has stable equilibria at generalized eigenvectors of higher-order input correlation tensors. Nonlinearities in synaptic plasticity thus allow a neuron to encode higher-order input correlations in a simple fashion.
    https://proceedings.neurips.cc/paper/2021/hash/5e34a2b4c23f4de585fb09a7f546f527-Abstract.htm
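    A minimal numerical sketch of the kind of rule described above: an Oja-style normalized Hebbian update with a power nonlinearity on the postsynaptic output. The exponent a, the learning rate, and the Gaussian toy inputs are illustrative assumptions rather than parameters from the paper; with a = 1 the update reduces to the classic Oja rule, whose fixed points are eigenvectors of the input covariance, while larger exponents make the fixed points depend on higher-order input moments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feedforward setting: T input samples of dimension n (illustrative).
n, T = 20, 5000
X = rng.standard_normal((T, n))

a = 2        # exponent on the postsynaptic output y; a = 1 gives Oja's rule
eta = 1e-3   # learning rate (illustrative)

w = rng.standard_normal(n)
w /= np.linalg.norm(w)

for x in X:
    y = w @ x
    # Nonlinear Hebbian update with multiplicative normalization
    # (an Oja-style sketch; the paper's exact rule family may differ).
    w += eta * (y**a * x - y**(a + 1) * w)
    w /= np.linalg.norm(w)
```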

    Second-order Temporal Pooling for Action Recognition

    Deep learning models for video-based action recognition usually generate features for short clips (consisting of a few frames); such clip-level features are aggregated to video-level representations by computing statistics on these features. Typically, zeroth-order (max) or first-order (average) statistics are used. In this paper, we explore the benefits of using second-order statistics. Specifically, we propose a novel end-to-end learnable feature aggregation scheme, dubbed temporal correlation pooling, that generates an action descriptor for a video sequence by capturing the similarities between the temporal evolution of clip-level CNN features computed across the video. Such a descriptor, while being computationally cheap, also naturally encodes the co-activations of multiple CNN features, thereby providing a richer characterization of actions than their first-order counterparts. We also propose higher-order extensions of this scheme by computing correlations after embedding the CNN features in a reproducing kernel Hilbert space. We provide experiments on benchmark datasets such as HMDB-51 and UCF-101, fine-grained datasets such as MPII Cooking Activities and JHMDB, as well as the recent Kinetics-600. Our results demonstrate the advantages of higher-order pooling schemes, which, when combined with hand-crafted features (as is standard practice), achieve state-of-the-art accuracy.
    Comment: Accepted in the International Journal of Computer Vision (IJCV).
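    As a rough illustration of the second-order idea, the sketch below pools clip-level features for one video into a vectorized channel-correlation matrix. The function name, the per-channel standardization, and the upper-triangle vectorization are assumptions made for clarity; the paper's temporal correlation pooling is learned end to end inside the network, which this standalone NumPy version does not attempt to reproduce.

```python
import numpy as np

def temporal_correlation_pool(features: np.ndarray) -> np.ndarray:
    """Second-order pooling sketch.

    features: array of shape (T, d) holding clip-level CNN features for
    the T clips of one video.  Returns the vectorized d x d correlation
    matrix, which encodes co-activations of feature channels across the
    temporal evolution of the video.
    """
    # Standardize each feature channel over time.
    z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    corr = z.T @ z / features.shape[0]       # second-order statistics
    iu = np.triu_indices(corr.shape[0])      # keep the upper triangle (symmetric)
    return corr[iu]

# Example: 12 clips with 128-dimensional features -> one video descriptor.
descriptor = temporal_correlation_pool(np.random.randn(12, 128))
```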

    Multiresolution Tensor Learning for Efficient and Interpretable Spatial Analysis

    Efficient and interpretable spatial analysis is crucial in many fields such as geology, sports, and climate science. Large-scale spatial data often contains complex higher-order correlations across features and locations. While tensor latent factor models can describe higher-order correlations, they are inherently computationally expensive to train. Furthermore, for spatial analysis, these models should not only be predictive but also spatially coherent. However, latent factor models are sensitive to initialization and can yield inexplicable results. We develop a novel Multiresolution Tensor Learning (MRTL) algorithm for efficiently learning interpretable spatial patterns. MRTL initializes the latent factors from an approximate full-rank tensor model for improved interpretability and progressively learns from coarse to fine resolutions for a substantial computational speedup. We also prove the theoretical convergence and computational complexity of MRTL. When applied to two real-world datasets, MRTL demonstrates a 4-5x speedup compared to a fixed-resolution approach while yielding accurate and interpretable models.
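    The sketch below shows the coarse-to-fine training loop at the heart of the multiresolution idea, assuming a single spatial latent factor laid out on a 1-D grid and a caller-supplied train_step; the nearest-neighbor upsampling via np.repeat, the fixed epoch count, and all names are illustrative stand-ins for MRTL's actual initialization and fine-graining procedure.

```python
import numpy as np

def coarse_to_fine(train_step, init_factor, resolutions, epochs_per_level=50):
    """Coarse-to-fine training sketch.

    train_step(factor, resolution) -> factor performs one optimization
    pass at the given spatial resolution; between levels the spatial
    factor is upsampled so training resumes on the finer grid.
    """
    factor = init_factor
    for i, res in enumerate(resolutions):
        for _ in range(epochs_per_level):
            factor = train_step(factor, res)
        if i + 1 < len(resolutions):
            scale = resolutions[i + 1] // res           # e.g. 8 -> 16 doubles the grid
            factor = np.repeat(factor, scale, axis=0)   # nearest-neighbor upsampling
    return factor

# Example: a no-op training step, rank-3 factor on grids of 8, 16, then 32 cells.
final = coarse_to_fine(lambda f, r: f, np.ones((8, 3)), [8, 16, 32])
```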

    Examination of The Big Five And Narrow Traits In Relation To Learner Self-Direction

    Self-direction in learning is a major topic in the field of adult learning, and it has received extensive coverage from theorists, researchers, and practitioners. However, few studies have examined learner self-direction specifically as a personality trait. The present study addresses the relationship between learner self-direction and other personality traits of college students when the traits represented by the five-factor model of personality (Digman, 1990) are differentiated from narrow personality traits. Archival data were used from an undergraduate sample at a large Southeastern U.S. university (N = 2,102). Correlation and multiple regression analyses were used to examine the unique relationships between the Big Five and narrow personality traits and learner self-direction. Analysis of the data revealed five significant part correlations between specific traits and learner self-direction. The part correlations for Work Drive (.310) and Openness (.207) were significantly higher than all other part correlations. Neither Conscientiousness nor Agreeableness had a significant part correlation despite having significant zero-order correlations with learner self-direction. Extraversion did not have a significant zero-order correlation with learner self-direction, but its part correlation was significant. Results were discussed in terms of the predictive relationship between personality variables and learner self-direction. Study implications, some limitations, and possible directions for future research were noted.
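    For readers unfamiliar with the statistic reported above, the sketch below computes a semipartial (part) correlation by residualizing one predictor on the remaining predictors and correlating that residual with the outcome. The synthetic traits, coefficients, and sample size are invented for illustration and bear no relation to the study's archival data.

```python
import numpy as np

def part_correlation(y, x, covariates):
    """Semipartial (part) correlation of predictor x with outcome y,
    controlling the covariates on x only (not on y)."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    beta, *_ = np.linalg.lstsq(Z, x, rcond=None)  # regress x on the covariates
    x_resid = x - Z @ beta                        # the part of x unique to x
    return np.corrcoef(y, x_resid)[0, 1]

# Illustrative data: an outcome (e.g., learner self-direction) and two traits.
rng = np.random.default_rng(1)
trait_a, trait_b = rng.standard_normal((2, 200))
outcome = 0.4 * trait_a + 0.2 * trait_b + rng.standard_normal(200)
print(part_correlation(outcome, trait_a, trait_b.reshape(-1, 1)))
```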