Tensor Analysis and the Dynamics of Motor Cortex
Neural data often span multiple indices, such as neuron, experimental condition, trial, and time, resulting in a tensor or multidimensional array. Standard approaches to neural data analysis often rely on matrix factorization techniques, such as principal component analysis or nonnegative matrix factorization, and any inherent tensor structure in the data is lost when it is flattened into a matrix. Here, we analyze datasets from primary motor cortex from the perspective of tensor analysis and develop a theory for how tensor structure relates to certain computational properties of the underlying system. Applying this theory to the motor cortex datasets, we find that neural activity is best described by condition-independent dynamics rather than by condition-dependent relations to external movement variables. Motivated by this result, we pursue one further tensor-related analysis and two further dynamical-systems analyses. First, we show how tensor decompositions can be used to denoise neural signals. Second, we apply system identification to the cortex-to-muscle transformation to reveal the intermediate spinal dynamics. Third, we fit recurrent neural networks to muscle activations and show that the geometric properties observed in motor cortex are naturally recapitulated in the network model. Taken together, these results emphasize (on the data analysis side) the role of tensor structure in data and (on the theoretical side) the role of motor cortex as a dynamical system.
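The denoising step described above can be illustrated with a generic low-rank CP reconstruction. The following is a minimal sketch using TensorLy (an assumed dependency, not the authors' code); the array shapes, rank, and noise level are illustrative assumptions:

```python
# Minimal sketch: denoising a (neuron x condition x time) data tensor by
# low-rank CP reconstruction with TensorLy (NumPy backend assumed).
# Shapes, rank, and noise level are illustrative, not values from the paper.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-averaged data: 50 neurons, 20 conditions, 100 time bins,
# generated as an exactly rank-3 tensor plus additive noise.
factors = [rng.standard_normal((d, 3)) for d in (50, 20, 100)]
signal = tl.cp_to_tensor((np.ones(3), factors))
noisy = signal + 0.5 * rng.standard_normal(signal.shape)

# Fit a rank-3 CP model; the low-rank reconstruction discards off-model (noise) variance.
cp = parafac(tl.tensor(noisy), rank=3, n_iter_max=200)
denoised = tl.cp_to_tensor(cp)

print("noisy    rel. error:", np.linalg.norm(noisy - signal) / np.linalg.norm(signal))
print("denoised rel. error:", np.linalg.norm(denoised - signal) / np.linalg.norm(signal))
```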
Multi-dimensional data analytics and deep learning via tensor networks
With the boom in big data and multi-sensor technology, multi-dimensional data, known as tensors, have demonstrated a promising capability to capture multi-dimensional correlations by efficiently extracting latent structures, and have drawn considerable attention in disciplines such as image processing, recommender systems, and data analytics. In addition to the multi-dimensional nature of real data, artificially designed tensors, referred to as layers in deep neural networks, have also been intensively investigated and have achieved state-of-the-art performance in image processing, speech processing, and natural language understanding.
However, algorithms operating on multi-dimensional data are unfortunately expensive in computation and storage, which limits their use when computational resources are scarce. Although tensor factorization has been proposed to reduce the dimensionality and alleviate the computational cost, the trade-off among computation, storage, and performance has not been well studied.
To this end, we first investigate an efficient dimensionality reduction method based on a novel Tensor Train (TT) factorization. In particular, we propose Tensor Train Principal Component Analysis (TT-PCA) and Tensor Train Neighborhood Preserving Embedding (TT-NPE) to project data onto a Tensor Train Subspace (TTS) and effectively extract discriminative features from the data. Mathematical analysis and simulation demonstrate that TT-PCA and TT-NPE achieve a better trade-off among computation, storage, and performance than the benchmark tensor-based dimensionality reduction approaches. We then extend the TT factorization to the more general Tensor Ring (TR) factorization and propose a tensor ring completion algorithm, which can recover a gunshot video from 10% randomly observed pixels at an error rate of only 6.25%. Inspired by this trade-off between model complexity and data representation, we introduce Tensor Ring Nets (TRN) to compress deep neural networks significantly. On the benchmark 28-layer WideResNet architecture, TRN compresses the network by 243× with only 2.3% accuracy degradation on CIFAR-10 image classification.
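As a generic illustration of the storage saving that TT factorization offers (this is not the authors' TT-PCA/TT-NPE or TRN implementation), the sketch below factorizes a random 4-way tensor with TensorLy; the tensor shape and TT-ranks are illustrative assumptions:

```python
# Minimal sketch of Tensor Train factorization with TensorLy (NumPy backend
# assumed). Shape (8,8,8,8) and TT-ranks (1,4,4,4,1) are illustrative choices.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

X = tl.tensor(np.random.default_rng(1).standard_normal((8, 8, 8, 8)))

# TT-ranks (1, r, r, r, 1): smaller r means more compression, larger error.
tt = tensor_train(X, rank=[1, 4, 4, 4, 1])
X_hat = tl.tt_to_tensor(tt)

full_params = X.size                              # 8**4 = 4096 entries
tt_params = sum(core.size for core in tt.factors) # sum of TT-core sizes
print("compression ratio:", full_params / tt_params)
print("relative error   :", float(tl.norm(X - X_hat) / tl.norm(X)))
```

A random tensor has no low-rank structure, so the error here is large; on structured data (images, video frames, network weights) the same ranks typically give a far better trade-off, which is the effect the thesis exploits.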
Neural Networks Compression for Language Modeling
In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g., LSTM-based networks used in language modeling, are characterized by either high space complexity or substantial inference time. This problem is especially crucial for mobile applications, in which constant interaction with a remote server is impractical. Using the Penn Treebank (PTB) dataset, we compare pruning, quantization, low-rank factorization, and tensor train decomposition for LSTM networks in terms of model size and suitability for fast inference.
Comment: Keywords: LSTM, RNN, language modeling, low-rank factorization, pruning, quantization. Published by Springer in the LNCS series, 7th International Conference on Pattern Recognition and Machine Intelligence, 2017.
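Of the compared techniques, low-rank factorization admits a particularly compact illustration: a trained weight matrix is replaced by the product of two thin factors obtained from a truncated SVD. The sketch below is a generic version of that idea, not the paper's code; the matrix shape (sized like a PTB-scale stacked LSTM gate matrix) and the retained rank are illustrative assumptions:

```python
# Minimal sketch of low-rank compression: approximate a weight matrix W by
# A @ B with a truncated SVD. Shape (4*650, 650) mimics the four stacked gate
# matrices of a hidden-size-650 LSTM; both shape and rank are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4 * 650, 650))

r = 64                                    # retained rank
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                      # (2600, 64), columns scaled by singular values
B = Vt[:r, :]                             # (64, 650)

orig_params = W.size
lowrank_params = A.size + B.size
print("parameter reduction:", orig_params / lowrank_params)
print("relative error     :", np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```

In practice the factorized layer is usually fine-tuned after the SVD step to recover accuracy, since the raw truncation error on trained weights is rarely acceptable on its own.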
Predicting Sparse Clients' Actions with CPOPT-Net in the Banking Environment
The digital revolution of the banking system, together with evolving European regulations, has pushed the major banking actors to innovate through new uses of their clients' digital information. Given highly sparse client activities, we propose CPOPT-Net, an algorithm that combines the canonical polyadic (CP) tensor decomposition, a multidimensional factorization that expresses a tensor as a sum of rank-one tensors, with neural networks. CPOPT-Net handles the sparse information efficiently through gradient-based resolution while relying on neural networks for time series predictions. Our experiments show that CPOPT-Net is capable of accurately predicting clients' actions in the context of personalized recommendation. CPOPT-Net is the first algorithm to use non-linear conjugate gradient tensor resolution with neural networks to predict financial activities on a public data set.
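The CP model the abstract refers to expresses an entry as X[i, j, k] = Σ_r A[i, r] B[j, r] C[k, r], i.e., a sum of R rank-one outer products. A minimal NumPy sketch of this definition follows (this illustrates the CP model only, not the CPOPT-Net resolution itself; dimensions and rank are illustrative assumptions):

```python
# Minimal sketch of the CP model: a rank-R tensor built as a sum of R
# rank-one (outer-product) terms. Dimensions and rank are illustrative.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 30, 20, 10, 5                # e.g., clients x actions x time, rank 5
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Equivalent explicit sum of rank-one tensors:
X_sum = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
            for r in range(R))
print(np.allclose(X, X_sum))  # True
```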