
    Enhancing Deep Learning Models through Tensorization: A Comprehensive Survey and Framework

    The burgeoning growth of public domain data and the increasing complexity of deep learning model architectures have underscored the need for more efficient data representation and analysis techniques. This paper is motivated by the work of Helal (2023) and aims to present a comprehensive overview of tensorization, a transformative approach that bridges the gap between the inherently multidimensional nature of data and the simplified 2-dimensional matrices commonly used in linear algebra-based machine learning algorithms. The paper explores the steps involved in tensorization, multidimensional data sources, the various multiway analysis methods employed, and the benefits of these approaches. A small example of Blind Source Separation (BSS) is presented, comparing 2-dimensional algorithms with a multiway algorithm in Python. Results indicate that multiway analysis is more expressive. Contrary to the intuition suggested by the curse of dimensionality, utilising multidimensional datasets in their native form and applying multiway analysis methods grounded in multilinear algebra reveal a profound capacity to capture intricate interrelationships among the various dimensions while, surprisingly, reducing the number of model parameters and accelerating processing. A survey of multiway analysis methods and their integration with various deep neural network models is presented using case studies in different application domains. Comment: 34 pages, 8 figures, 4 tables
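
    The abstract does not reproduce the Python comparison itself. The following is a minimal sketch of that kind of BSS experiment, assuming synthetic sources, scikit-learn's FastICA as the 2-dimensional baseline, and a CP (PARAFAC) decomposition from TensorLy as the multiway method; none of these specific library choices or signal shapes are stated in the paper.

        import numpy as np
        from sklearn.decomposition import FastICA
        import tensorly as tl
        from tensorly.decomposition import parafac

        # Synthetic sources: three simple waveforms, 2000 samples each (hypothetical setup)
        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 2000)
        S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), (t % 1.0) - 0.5]   # (samples, sources)

        # 2-D route: one mixing matrix, unmixed with FastICA on the (samples x sensors) matrix
        A = rng.normal(size=(3, 3))
        X2d = S @ A.T
        S_ica = FastICA(n_components=3, random_state=0).fit_transform(X2d)

        # Multiway route: the same mixture observed under 5 conditions that rescale each
        # source, stacked into a (samples x sensors x conditions) tensor
        gains = rng.uniform(0.5, 1.5, size=(5, 3))
        X3d = np.stack([(S * gains[k]) @ A.T for k in range(5)], axis=2)

        # A rank-3 CP (PARAFAC) decomposition factors each mode separately; the
        # sample-mode factors play the role of the recovered sources
        weights, factors = parafac(tl.tensor(X3d), rank=3, random_state=0)
        S_cp = factors[0]

        print(S_ica.shape, S_cp.shape)   # both (2000, 3), up to scaling and permutation

    The multiway route exploits the extra "conditions" mode, which is where CP uniqueness results give it more expressive power than a single unmixing of a flattened matrix.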

    High Dimensional Factor Models with an Application to Mutual Fund Characteristics

    This paper considers extensions of 2-dimensional factor models to higher-dimensional data that can be represented as tensors. I describe decompositions of tensors that generalize the standard matrix singular value decomposition and principal component analysis to higher dimensions. I estimate the model using a 3-dimensional data set consisting of 25 characteristics of 1,342 mutual funds observed over 34 quarters. The tensor factor model reduces the data dimensionality by 97% while capturing 93% of the variation in the data. I relate higher-dimensional tensor models to standard 2-dimensional models and show that the components of the model have clear economic interpretations.
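
    As an illustration of how such a tensor factor model trades parameters for explained variation, here is a minimal sketch using a truncated higher-order SVD (one generalization of PCA to tensors) on a synthetic low-rank tensor with the same 25 x 1,342 x 34 shape. The data and ranks are hypothetical; the 97%/93% figures quoted above come from the paper's mutual fund data, not from this sketch.

        import numpy as np

        def unfold(T, mode):
            """Mode-n matricisation: put `mode` first and flatten the remaining modes."""
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def mode_dot(T, M, mode):
            """Multiply tensor T by matrix M along the given mode."""
            return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

        def truncated_hosvd(T, ranks):
            """Higher-order SVD: one PCA-like factor per mode plus a small core tensor."""
            factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
                       for m, r in enumerate(ranks)]
            core = T
            for m, U in enumerate(factors):
                core = mode_dot(core, U.T, m)
            return core, factors

        # Synthetic low-rank panel plus noise, shaped like the paper's
        # 25 characteristics x 1,342 funds x 34 quarters data set (hypothetical data)
        rng = np.random.default_rng(0)
        ranks = (5, 10, 5)
        G = rng.normal(size=ranks)
        A, B, C = (rng.normal(size=(d, r)) for d, r in zip((25, 1342, 34), ranks))
        T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
        T += 0.5 * rng.normal(size=T.shape)

        core, factors = truncated_hosvd(T, ranks)
        approx = core
        for m, U in enumerate(factors):
            approx = mode_dot(approx, U, m)

        n_params = core.size + sum(U.size for U in factors)
        print(f"dimensionality reduction: {1 - n_params / T.size:.1%}")
        print(f"variation captured: {1 - np.linalg.norm(T - approx)**2 / np.linalg.norm(T)**2:.1%}")

    The per-mode factor matrices here are the tensor analogue of PCA loadings, and the small core tensor plays the role of the factor scores.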

    Tensor decompositions in deep learning

    The paper surveys the topic of tensor decompositions in modern machine learning applications. It focuses on three active research topics of significant relevance to the community. After a brief review of consolidated work on multi-way data analysis, we consider the use of tensor decompositions for compressing the parameter space of deep learning models. Lastly, we discuss how tensor methods can be leveraged to yield richer adaptive representations of complex data, including structured information. The paper concludes with a discussion of interesting open research challenges.
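
    A minimal sketch of the parameter-space compression idea the survey discusses, assuming TensorLy and a randomly initialised convolution kernel as a stand-in for a pretrained layer; the rank and shapes are illustrative only, not taken from the paper.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac

        # Hypothetical 3x3 convolution kernel: (out_channels, in_channels, kh, kw)
        rng = np.random.default_rng(0)
        kernel = rng.normal(size=(64, 64, 3, 3))

        # A rank-8 CP decomposition replaces the 4-way kernel with four thin factor matrices
        rank = 8
        weights, factors = parafac(tl.tensor(kernel), rank=rank, random_state=0)

        dense_params = kernel.size                           # 36,864
        cp_params = sum(f.shape[0] * rank for f in factors)  # (64 + 64 + 3 + 3) * 8 = 1,072
        print(f"compression ratio: {dense_params / cp_params:.1f}x")

        # The reconstruction error indicates how much capacity is lost at this rank;
        # for a purely random kernel it is large, pretrained layers compress far better
        approx = tl.cp_to_tensor((weights, factors))
        rel_err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
        print(f"relative reconstruction error: {rel_err:.3f}")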

    Tensor Decompositions in Deep Learning

    Tensor decompositions form a subdomain of multilinear algebra concerned with dimensionality reduction and analysis of multi-dimensional arrays (tensors). The field has numerous applications in physics, chemistry, the life sciences and, more recently, machine learning, computer vision, and graphics. Despite the maturity of the field, much progress has been made in recent years, driven by affordable parallel compute and empirical research. Deep learning is a young subdomain of machine learning concerned with fitting deep, non-linear parametric models in a non-convex optimization setting with abundant data. The tipping point of interest in deep learning came when a neural network (AlexNet) set a record-high score on a popular image classification benchmark (ImageNet), thus promising to solve long-standing computer vision problems. Over the past years, most breakthroughs in deep learning have come from finding smarter ways to increase model size and complexity. However, the need to deploy deep models on edge devices, such as for computational photography on mobile phones, has set a new direction towards finding lean models. On the other hand, many high-potential deep learning techniques, such as Neural Radiance Fields (NeRF) or vision transformers, leave a huge margin for improvement upon inception. In this thesis, we investigate the use of tensor decompositions in the context of modern deep learning techniques. We aim to improve two types of efficiency: memory footprint and runtime performance, measured in parameters and floating-point operations (FLOPs), respectively. We begin by exploring neural network layer compression schemes and propose a tensorized representation with a basis tensor shared among layers and per-layer coefficients. Subsequently, we study the manifold of Tensor Train (TT) decompositions of fixed rank in the context of parameterizing the layers of Generative Adversarial Networks (GANs) and demonstrate the ability to compress networks while maintaining training stability. Finally, we utilize a TT parameterization to learn compressed NeRFs and devise sampling schemes with support for automatic differentiation to facilitate training. Unlike most previous work on tensor decompositions, we treat decompositions as models in the deep learning sense and update their parameters through backpropagation and optimization. Like prior art, tensorized formats admit certain algebraic operations, making them an appealing entity at the intersection of two prominent research directions.
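
    The thesis abstract mentions Tensor Train (TT) parameterizations of network layers. The sketch below, assuming TensorLy's TT-SVD routine and an arbitrary reshaping of a dense weight matrix, only illustrates the parameter counting behind such a compression; it is not the thesis' own layer format, which additionally shares a basis tensor across layers and is trained by backpropagation.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import tensor_train

        # Hypothetical dense layer weight, reshaped into a 4-way tensor for TT-SVD
        rng = np.random.default_rng(0)
        W = rng.normal(size=(256, 256))
        W4 = W.reshape(16, 16, 16, 16)

        # Boundary TT ranks are fixed to 1; the inner rank r sets the memory/accuracy trade-off
        r = 8
        tt = tensor_train(tl.tensor(W4), rank=[1, r, r, r, 1])

        # Core shapes are (1,16,r), (r,16,r), (r,16,r), (r,16,1)
        tt_params = 1 * 16 * r + r * 16 * r + r * 16 * r + r * 16 * 1
        print(f"dense: {W.size} params, TT: {tt_params} params "
              f"({W.size / tt_params:.1f}x smaller)")

        # For a random matrix the error at this rank is large; trained layers
        # are typically far more compressible
        rel_err = np.linalg.norm(W4 - tl.tt_to_tensor(tt)) / np.linalg.norm(W4)
        print(f"relative reconstruction error: {rel_err:.3f}")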