Analysis of the Hardware Imprecisions for Scalable and Compact Photonic Tensorized Neural Networks
We simulated tensor-train (TT) decomposed neural networks realized by scalable photonic neuromorphic devices based on Mach-Zehnder interferometers (MZIs). The simulation results demonstrate that, under practical hardware imprecisions, the TT-decomposed neural networks can achieve >90% test accuracy with 33.6× fewer MZIs than conventional photonic neural network implementations.
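As a rough illustration of where such parameter savings come from (not the paper's photonic setup, which maps matrix products onto MZI meshes), the following NumPy sketch builds a tensor with low TT rank, then recovers it with a truncated TT-SVD and compares parameter counts. The (16, 16, 16, 16) shape and TT rank 8 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequentially truncated SVDs turn a full tensor into TT cores."""
    shape, cores, r_prev = tensor.shape, [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, shape[k], r))
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# Ground-truth tensor with TT rank 8; think of it as a folded 256x256
# weight matrix of a fully connected layer.
rng = np.random.default_rng(0)
shape, rank = (16, 16, 16, 16), 8
true_cores = [rng.standard_normal((1 if k == 0 else rank, n,
                                   1 if k == len(shape) - 1 else rank))
              for k, n in enumerate(shape)]
W = tt_reconstruct(true_cores)

cores = tt_svd(W, max_rank=rank)
n_tt = sum(c.size for c in cores)
err = np.linalg.norm(tt_reconstruct(cores) - W) / np.linalg.norm(W)
print(f"params: {W.size} -> {n_tt} ({W.size / n_tt:.1f}x fewer), rel. error {err:.1e}")
```

Here the TT format stores 2,304 numbers instead of 65,536 (about 28× fewer) at essentially zero reconstruction error; the hardware-level 33.6× MZI reduction reported above depends on the paper's specific network and mesh layout.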
Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework
The burgeoning growth of public-domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. Motivated by the work of (Helal, 2023), this paper presents a comprehensive overview of tensorization, a transformative approach that bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. The paper explores the steps involved in tensorization, multidimensional data sources, the various multiway analysis methods employed, and the benefits of these approaches. A small Blind Source Separation (BSS) example is presented, comparing 2-dimensional algorithms with a multiway algorithm in Python; the results indicate that multiway analysis is more expressive. Contrary to the intuition suggested by the curse of dimensionality, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveals a profound capacity to capture intricate interrelationships among the various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of multiway analysis methods and their integration with various deep neural network models is presented through case studies in different application domains.
Comment: 34 pages, 8 figures, 4 tables
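As a hedged illustration of the 2-D-versus-multiway contrast described in this abstract (not a reproduction of the paper's actual Python experiment), the sketch below fits a CP (PARAFAC) model by alternating least squares to a synthetic three-way array of trilinear mixtures. All shapes, the rank, and the iteration budget are assumptions chosen for the demo.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I, R) x (J, R) -> (I*J, R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def unfold(T, mode):
    """Mode-n unfolding consistent with C-order reshape."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit a CP (PARAFAC) model to a full tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((n, rank)) for n in T.shape]
    for _ in range(n_iter):
        for n in range(T.ndim):
            others = [f for m, f in enumerate(factors) if m != n]
            kr = others[0]
            for f in others[1:]:
                kr = khatri_rao(kr, f)
            gram = np.ones((rank, rank))
            for f in others:
                gram *= f.T @ f
            factors[n] = unfold(T, n) @ kr @ np.linalg.pinv(gram)
    return factors

# Synthetic trilinear data: 3 latent sources observed over
# 20 channels x 50 time points x 30 trials (shapes are illustrative).
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((n, 3)) for n in (20, 50, 30))
T = np.einsum('ir,jr,kr->ijk', A, B, C) + 0.05 * rng.standard_normal((20, 50, 30))

factors = cp_als(T, rank=3)
fit = np.einsum('ir,jr,kr->ijk', *factors)
print(f"CP relative error: {np.linalg.norm(fit - T) / np.linalg.norm(T):.3f}")
# A plain SVD sees only a matricization such as unfold(T, 0): it can match
# the reconstruction error, but its factors are not unique, whereas CP
# factors are identifiable under mild conditions -- the crux of multiway BSS.
```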
TT-NF: Tensor Train Neural Fields
Learning neural fields has been an active topic in deep learning research, focusing, among other issues, on finding more compact and easy-to-fit representations. In this paper, we introduce a novel low-rank representation termed Tensor Train Neural Fields (TT-NF) for learning neural fields on dense regular grids, along with efficient methods for sampling from them. Our representation is a TT parameterization of the neural field, trained with backpropagation to minimize a non-convex objective. We analyze the effect of low-rank compression on downstream task quality metrics in two settings. First, we demonstrate the efficiency of our method in a sandbox task of tensor denoising, which admits comparison with SVD-based schemes designed to minimize reconstruction error. Furthermore, we apply the proposed approach to Neural Radiance Fields, where the low-rank structure of the field corresponding to the best quality can be discovered only through learning.
Comment: Preprint, under review
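To make the idea of a TT parameterization trained by backpropagation concrete, here is a minimal PyTorch sketch in the spirit of the tensor-denoising sandbox. The grid shape, TT rank, optimizer settings, and synthetic separable signal are all assumptions for the demo, not the paper's configuration or code.

```python
import torch

class TTGrid(torch.nn.Module):
    """Scalar field on a dense regular grid, stored as trainable TT cores."""
    def __init__(self, shape, rank):
        super().__init__()
        ranks = [1] + [rank] * (len(shape) - 1) + [1]
        self.cores = torch.nn.ParameterList(
            torch.nn.Parameter(0.3 * torch.randn(ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(shape))

    def forward(self, idx):
        """Evaluate the field at integer multi-indices idx of shape (B, d)."""
        v = self.cores[0][0, idx[:, 0], :]            # (B, r)
        for k in range(1, len(self.cores)):
            slc = self.cores[k][:, idx[:, k], :]      # (r_prev, B, r_next)
            v = torch.einsum('bi,ibj->bj', v, slc)
        return v.squeeze(-1)                          # (B,)

torch.manual_seed(0)
shape, rank = (32, 32, 32), 4

# Denoising sandbox target: a clean separable signal plus Gaussian noise.
xs = [torch.linspace(0, 1, n) for n in shape]
clean = torch.einsum('i,j,k->ijk', torch.sin(6 * xs[0]),
                     torch.cos(4 * xs[1]), torch.sin(2 * xs[2]))
noisy = clean + 0.1 * torch.randn(shape)

# Fit the TT field to the noisy tensor with SGD-style minibatches of indices.
model = TTGrid(shape, rank)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(500):
    idx = torch.stack([torch.randint(n, (4096,)) for n in shape], dim=1)
    loss = ((model(idx) - noisy[idx[:, 0], idx[:, 1], idx[:, 2]]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Evaluate on the full grid against the clean signal.
grid = torch.stack(torch.meshgrid(
    *[torch.arange(n) for n in shape], indexing='ij'), dim=-1).reshape(-1, 3)
with torch.no_grad():
    fit = model(grid).reshape(shape)
print(f"relative error vs clean: {(fit - clean).norm() / clean.norm():.3f}")
```

Because the rank-4 TT field has only 768 parameters against 32,768 grid values, fitting it to the noisy tensor acts as a low-rank regularizer, which is what makes the comparison with SVD-based truncation schemes meaningful.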