Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training
Deep neural networks have achieved great success in many data processing
applications. However, their high computational complexity and storage cost make
deep learning hard to deploy on resource-constrained devices, and the associated
power consumption is not environmentally friendly. In this paper, we focus on
low-rank optimization for efficient deep learning techniques. In the space
domain, deep neural networks are compressed by low rank approximation of the
network parameters, which directly reduces the storage requirement with a
smaller number of network parameters. In the time domain, the network
parameters can be trained in a few subspaces, which enables efficient training
for fast convergence. Model compression in the spatial domain is summarized
into three categories: pre-train, pre-set, and compression-aware methods.
Combined with complementary techniques such as sparse pruning, quantization,
and entropy coding, these methods can be assembled into an integrated framework
with lower computational complexity and storage cost. Beyond summarizing recent
technical advances, we report two findings to motivate future work: first, the
effective rank outperforms other sparsity measures for network compression;
second, a balance between spatial and temporal efficiency can be struck for
tensorized neural networks.
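The spatial-domain compression described in this abstract replaces weight matrices by low-rank factors. As a minimal sketch of the general idea (not this paper's specific method; the layer shape and rank below are illustrative assumptions), a dense weight can be compressed with a truncated SVD:

```python
import numpy as np

def low_rank_compress(W, r):
    """Return factors (A, B) with W ~= A @ B, keeping the top-r singular values."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :r] * S[:r]   # (m, r), singular values folded into the left factor
    B = Vt[:r, :]          # (r, n)
    return A, B

# Illustrative shapes: a 1024x512 dense layer compressed to rank 64.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))
A, B = low_rank_compress(W, r=64)

# Storage drops from m*n to r*(m+n) parameters, and the forward pass x @ W.T
# becomes (x @ B.T) @ A.T, which is also cheaper when r << min(m, n).
print(W.size, A.size + B.size)                        # 524288 vs. 98304
print(np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # relative approximation error
```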
Robust low-rank training via approximate orthonormal constraints
With the growth of model and data sizes, a broad effort has been made to
design pruning techniques that reduce the resource demand of deep learning
pipelines, while retaining model performance. In order to reduce both inference
and training costs, a prominent line of work uses low-rank matrix
factorizations to represent the network weights. Although they retain
accuracy, we observe that low-rank methods tend to compromise model robustness
against adversarial perturbations. By modeling robustness in terms of the
condition number of the neural network, we argue that this loss of robustness
is due to the exploding singular values of the low-rank weight matrices. Thus,
we introduce a robust low-rank training algorithm that maintains the network's
weights on the low-rank matrix manifold while simultaneously enforcing
approximate orthonormal constraints. The resulting model reduces both training
and inference costs while ensuring well-conditioning and thus better
adversarial robustness, without compromising model accuracy. This is demonstrated
by extensive numerical evidence and by our main approximation theorem, which shows
that the computed robust low-rank network well-approximates the ideal full model,
provided a highly performing low-rank sub-network exists.
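As a rough sketch of the idea (not the authors' manifold-based algorithm), one can parametrize a layer's weight in factored form and add a soft penalty that keeps the factors approximately orthonormal; all names, shapes, and the penalty weight below are illustrative assumptions:

```python
import torch

def orthonormality_penalty(M):
    """||M^T M - I||_F^2: zero exactly when the columns of M are orthonormal."""
    r = M.shape[1]
    gram = M.T @ M
    return torch.linalg.norm(gram - torch.eye(r, device=M.device)) ** 2

class LowRankLinear(torch.nn.Module):
    """Weight parametrized as U @ diag(s) @ V^T with fixed rank r."""
    def __init__(self, n_in, n_out, r):
        super().__init__()
        self.U = torch.nn.Parameter(torch.randn(n_out, r) / n_out ** 0.5)
        self.s = torch.nn.Parameter(torch.ones(r))
        self.V = torch.nn.Parameter(torch.randn(n_in, r) / n_in ** 0.5)

    def forward(self, x):
        return ((x @ self.V) * self.s) @ self.U.T

    def regularizer(self):
        # With orthonormal U and V, the singular values of W = U diag(s) V^T are |s|,
        # so the condition number of the layer is controlled by the spectrum s.
        return orthonormality_penalty(self.U) + orthonormality_penalty(self.V)

# Usage: add lambda_reg * layer.regularizer() to the task loss during training.
layer = LowRankLinear(784, 256, r=32)
x = torch.randn(8, 784)
loss = layer(x).pow(2).mean() + 1e-2 * layer.regularizer()
loss.backward()
```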
Dynamical low-rank training of neural networks
Neural networks have achieved tremendous success in a large variety of applications. However, their space and time computational demands can limit their usage on resource-limited devices.
At the same time, overparametrization seems to be necessary in order to overcome the highly non-convex nature of the training optimization problem. An optimal trade-off must therefore be found that reduces the networks' dimension while maintaining high performance. Popular approaches in the current literature are based on pruning techniques that look for subnetworks able to approximately maintain the initial performance.
Nevertheless, these techniques are often unable to reduce the memory footprint of the training phase.
In this thesis we will present DLRT, a training algorithm that looks for "low-rank subnetworks" using dynamical low-rank approximation (DLRA) theory and techniques.
These subnetworks and their ranks are determined and adapted already during the training phase, allowing the overall
time and memory resources required by both training and evaluation phases to be reduced significantly.
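DLRT itself follows the operator-splitting integrators of dynamical low-rank approximation; the snippet below is only a loose sketch of the rank-adaptation ingredient it builds on: the weight is kept as U S V^T and the small core S is periodically re-truncated so the rank can shrink during training. Shapes, the tolerance, and the spectrum used here are illustrative assumptions.

```python
import torch

def truncate_core(U, S, V, tau=1e-2):
    """Re-compress W = U @ S @ V^T by dropping singular values of the small core S
    below tau (relative to the largest one). Returns new factors and the new rank."""
    P, sigma, Qt = torch.linalg.svd(S)
    r_new = max(1, int((sigma > tau * sigma[0]).sum()))
    U_new = U @ P[:, :r_new]        # absorb the rotations into the bases
    V_new = V @ Qt.T[:, :r_new]
    S_new = torch.diag(sigma[:r_new])
    return U_new, S_new, V_new, r_new

# Illustrative factors for a 512x512 weight, starting at rank 64.
U = torch.linalg.qr(torch.randn(512, 64)).Q
V = torch.linalg.qr(torch.randn(512, 64)).Q
S = torch.diag(torch.logspace(0, -4, 64))   # rapidly decaying core spectrum

U, S, V, r = truncate_core(U, S, V, tau=1e-2)
print("adapted rank:", r)   # only singular values above the tolerance survive
```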
Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Despite the dominance and effectiveness of scaling, resulting in large
networks with hundreds of billions of parameters, the necessity to train
overparametrized models remains poorly understood, and alternative approaches
do not necessarily make it cheaper to train high-performance models. In this
paper, we explore low-rank training techniques as an alternative approach to
training large neural networks. We introduce a novel method called ReLoRA,
which utilizes low-rank updates to train high-rank networks. We apply ReLoRA to
pre-training transformer language models with up to 350M parameters and
demonstrate comparable performance to regular neural network training.
Furthermore, we observe that the efficiency of ReLoRA increases with model
size, making it a promising approach for training multi-billion-parameter
networks efficiently. Our findings shed light on the potential of low-rank
training techniques and their implications for scaling laws.
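A minimal sketch of the core mechanic, rather than ReLoRA's exact recipe (which also resets optimizer state and uses a jagged learning-rate schedule): a low-rank update is trained on top of a frozen weight and periodically merged back into it, so repeated rank-r phases can accumulate into a higher-rank total change. Shapes, the merge interval, and the reinitialization below are illustrative assumptions.

```python
import torch

class LowRankUpdateLinear(torch.nn.Module):
    """Frozen base weight plus a trainable low-rank update B @ A (LoRA-style)."""
    def __init__(self, n_in, n_out, r):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(n_out, n_in) / n_in ** 0.5,
                                    requires_grad=False)    # frozen base weight
        self.A = torch.nn.Parameter(torch.randn(r, n_in) / n_in ** 0.5)
        self.B = torch.nn.Parameter(torch.zeros(n_out, r))  # update starts at zero

    def forward(self, x):
        return x @ (self.W + self.B @ self.A).T

    @torch.no_grad()
    def merge_and_restart(self):
        # Fold the accumulated low-rank update into the base weight, then restart
        # from a fresh low-rank pair; successive merges let the total change
        # exceed rank r even though each phase is rank-r.
        self.W += self.B @ self.A
        self.A.normal_(std=1.0 / self.A.shape[1] ** 0.5)
        self.B.zero_()

# Usage sketch: train only A and B, merging every 100 steps.
layer = LowRankUpdateLinear(512, 512, r=8)
opt = torch.optim.Adam([layer.A, layer.B], lr=1e-3)
for step in range(1, 301):
    loss = layer(torch.randn(16, 512)).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 100 == 0:
        layer.merge_and_restart()   # ReLoRA also resets optimizer state here
```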
Rank-adaptive spectral pruning of convolutional layers during training
The computing cost and memory demand of deep learning pipelines have grown
fast in recent years and thus a variety of pruning techniques have been
developed to reduce model parameters. The majority of these techniques focus on
reducing inference costs by pruning the network after a pass of full training.
A smaller number of methods address the reduction of training costs, mostly
based on compressing the network via low-rank layer factorizations. Despite
their efficiency for linear layers, these methods fail to effectively handle
convolutional filters. In this work, we propose a low-parametric training
method that factorizes the convolutions into tensor Tucker format and
adaptively prunes the Tucker ranks of the convolutional kernel during training.
Leveraging fundamental results from geometric integration theory of
differential equations on tensor manifolds, we obtain a robust training
algorithm that provably approximates the full baseline performance and
guarantees loss descent. A variety of experiments against the full model and
alternative low-rank baselines are implemented, showing that the proposed
method drastically reduces the training costs, while achieving high
performance, comparable to or better than the full baseline, and consistently
outperforms competing low-rank approaches.
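As an illustrative sketch of a Tucker-factorized convolution (not the paper's rank-adaptive training scheme, which additionally prunes the Tucker ranks on the fly), the channel modes of a kernel can be compressed so the layer becomes a 1x1 projection, a small core convolution, and a 1x1 expansion; the ranks and shapes below are assumptions chosen for illustration:

```python
import torch
import torch.nn as nn

class TuckerConv2d(nn.Module):
    """Convolution with the kernel in Tucker-2 factorized form:
    project channels down (1x1), convolve a small core, project back up (1x1)."""
    def __init__(self, c_in, c_out, kernel_size, r_in, r_out, padding=0):
        super().__init__()
        self.down = nn.Conv2d(c_in, r_in, kernel_size=1, bias=False)
        self.core = nn.Conv2d(r_in, r_out, kernel_size=kernel_size,
                              padding=padding, bias=False)
        self.up = nn.Conv2d(r_out, c_out, kernel_size=1, bias=True)

    def forward(self, x):
        return self.up(self.core(self.down(x)))

# Parameter count comparison against a dense 3x3 convolution.
dense = nn.Conv2d(256, 256, 3, padding=1)
tucker = TuckerConv2d(256, 256, 3, r_in=32, r_out=32, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(tucker))   # ~590k vs. ~26k parameters in this sketch
```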
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
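As a small, self-contained illustration of the tensor-train (TT) format emphasized in this monograph (the shapes, rank cap, and test tensor below are illustrative assumptions), a higher-order tensor can be decomposed into a chain of three-way cores by repeated truncated SVDs and reconstructed by contracting the chain:

```python
import numpy as np

def tt_decompose(T, max_rank):
    """TT-SVD sketch: split a d-way tensor into d three-way cores G_k of shape
    (r_{k-1}, n_k, r_k) using repeated truncated SVDs."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k, n_k in enumerate(dims[:-1]):
        U, S, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        M = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for G in cores[1:]:
        out = np.einsum('...a,abc->...bc', out, G)   # contract the shared bond index
    return out.squeeze(axis=(0, -1))

# A sum of two rank-1 tensors is recovered exactly with TT ranks of 2.
rng = np.random.default_rng(0)
T = np.einsum('i,j,k,l->ijkl', *[rng.standard_normal(6) for _ in range(4)]) \
  + np.einsum('i,j,k,l->ijkl', *[rng.standard_normal(6) for _ in range(4)])
cores = tt_decompose(T, max_rank=2)
print(np.allclose(tt_reconstruct(cores), T))   # True
```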