Fast Kronecker Matrix-Matrix Multiplication on GPUs
Kronecker Matrix-Matrix Multiplication (Kron-Matmul) is the multiplication of
a matrix with the Kronecker Product of several smaller matrices. Kron-Matmul is
a core operation for many scientific and machine learning computations.
State-of-the-art Kron-Matmul implementations utilize existing tensor algebra
operations, such as matrix multiplication, transpose, and tensor matrix
multiplication. However, this design choice prevents several Kron-Matmul-specific
optimizations, thus leaving significant performance on the table. To address this
issue, we present FastKron, an efficient technique for Kron-Matmul on single and
multiple GPUs. FastKron is independent of existing linear algebra operations,
enabling several new optimizations for Kron-Matmul. Thus, it performs up to 40.7x
and 7.85x faster than existing implementations on 1 and 16 GPUs, respectively.
Comment: Accepted at PPoPP 202
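
To make the operation concrete, below is a minimal NumPy sketch of Kron-Matmul itself and of a baseline built from the existing tensor algebra operations (matrix multiplication, transpose, reshape) that the abstract contrasts FastKron against. This is not FastKron's GPU implementation; the factor sizes and function names are illustrative assumptions.

    import numpy as np

    # Kron-Matmul: Y = X @ (A_1 kron A_2 kron ... kron A_n) for small factors A_i.
    # Shapes below are illustrative assumptions, not taken from the paper.

    def kron_matmul_naive(X, factors):
        """Materialize the full Kronecker product, then multiply (memory-heavy)."""
        K = factors[0]
        for A in factors[1:]:
            K = np.kron(K, A)
        return X @ K

    def kron_matmul_baseline(X, factors):
        """Baseline composed of existing tensor-algebra ops (reshape, matmul,
        transpose), the style of implementation the abstract says leaves
        Kron-Matmul-specific optimizations on the table."""
        m = X.shape[0]
        Y = X
        for A in reversed(factors):
            p, q = A.shape                 # contract one small factor at a time
            Y = Y.reshape(-1, p) @ A       # multiply the trailing mode by A
            Y = Y.reshape(m, -1, q).transpose(0, 2, 1).reshape(m, -1)
        return Y

    rng = np.random.default_rng(0)
    factors = [rng.standard_normal((4, 4)) for _ in range(3)]  # three 4x4 factors
    X = rng.standard_normal((8, 4 ** 3))                       # 8 x 64 input matrix
    assert np.allclose(kron_matmul_naive(X, factors),
                       kron_matmul_baseline(X, factors))
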
Exclusive Supermask Subnetwork Training for Continual Learning
Continual Learning (CL) methods focus on accumulating knowledge over time
while avoiding catastrophic forgetting. Recently, Wortsman et al. (2020)
proposed a CL method, SupSup, which uses a randomly initialized, fixed base
network (model) and finds a supermask for each new task that selectively keeps
or removes each weight to produce a subnetwork. They prevent forgetting as the
network weights are not being updated. Although there is no forgetting, the
performance of SupSup is sub-optimal because fixed weights restrict its
representational power. Furthermore, there is no accumulation or transfer of
knowledge inside the model when new tasks are learned. Hence, we propose
ExSSNeT (Exclusive Supermask SubNEtwork Training), which performs exclusive and
non-overlapping subnetwork weight training. This avoids conflicting updates to
shared weights from subsequent tasks, improving performance while still
preventing forgetting.
preventing forgetting. Furthermore, we propose a novel KNN-based Knowledge
Transfer (KKT) module that utilizes previously acquired knowledge to learn new
tasks better and faster. We demonstrate that ExSSNeT outperforms strong
previous methods in both the NLP and Vision domains while preventing forgetting.
Moreover, ExSSNeT is particularly advantageous for sparse masks that activate
2-10% of the model parameters, resulting in an average improvement of 8.3% over
SupSup. Furthermore, ExSSNeT scales to a large number of tasks (100). Our code
is available at https://github.com/prateeky2806/exessnet.
Comment: ACL Findings 2023 (17 pages, 7 figures)
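
As a toy illustration of the exclusive-update idea (ignoring the KKT module), here is a minimal NumPy sketch for a single weight matrix of a fixed base network. The supermask selection, gradient function, and sizes are stand-in assumptions, not the paper's training procedure.

    import numpy as np

    rng = np.random.default_rng(0)

    # One layer of a fixed, randomly initialized base network (toy scale).
    weights = rng.standard_normal((16, 16))
    claimed = np.zeros_like(weights, dtype=bool)  # weights trained by earlier tasks
    task_masks = {}

    def learn_task(task_id, grad_fn, lr=0.1, steps=5, sparsity=0.1):
        """Pick a supermask for the task, then update only the weights that the
        mask selects AND that no previous task has claimed (exclusive training)."""
        global claimed
        # Stand-in for supermask search: keep the top-scoring fraction of weights.
        scores = rng.standard_normal(weights.shape)
        k = int(sparsity * weights.size)
        mask = np.zeros(weights.shape, dtype=bool)
        mask.ravel()[np.argsort(scores.ravel())[-k:]] = True
        task_masks[task_id] = mask

        exclusive = mask & ~claimed           # weights this task may update
        for _ in range(steps):
            grad = grad_fn(weights * mask)    # gradient w.r.t. the masked subnetwork
            weights[exclusive] -= lr * grad[exclusive]
        claimed |= mask                       # freeze this subnetwork afterwards

    def forward(task_id, x):
        """Inference uses only the weights selected by that task's supermask."""
        return x @ (weights * task_masks[task_id])

    # Dummy gradients; task_0's predictions are untouched by training task_1,
    # because task_1 may only update weights no earlier task has claimed.
    x = rng.standard_normal((1, 16))
    learn_task("task_0", grad_fn=lambda w: 2 * w)
    y0 = forward("task_0", x)
    learn_task("task_1", grad_fn=lambda w: -w)
    assert np.allclose(forward("task_0", x), y0)  # no forgetting
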
ComPEFT: Compression for Communicating Parameter Efficient Updates via Sparsification and Quantization
Parameter-efficient fine-tuning (PEFT) techniques make it possible to
efficiently adapt a language model to create "expert" models that specialize to
new tasks or domains. Recent techniques in model merging and compositional
generalization leverage these expert models by dynamically composing modules to
improve zero/few-shot generalization. Despite the efficiency of PEFT methods,
the size of expert models can make it onerous to retrieve them per query over
high-latency networks like the Internet or to serve multiple experts on a
single GPU. To address these issues, we present ComPEFT, a novel method for
compressing the fine-tuning residuals (task vectors) of PEFT-based models. ComPEFT
employs sparsification and ternary quantization to reduce the size of the PEFT
module without performing any additional retraining while preserving or
enhancing model performance. In extensive evaluation across T5, T0, and
LLaMA-based models with 200M - 65B parameters, ComPEFT achieves compression
ratios of 8x - 50x. In particular, we show that ComPEFT improves with scale:
stronger models exhibit higher compressibility and better performance. For
example, we show that ComPEFT applied to LLaMA outperforms QLoRA by 4.16% on
MMLU with a storage size reduction of up to 26x. In addition, we show that the
compressed experts produced by ComPEFT maintain few-shot compositional
generalization capabilities, facilitate efficient communication and
computation, and exhibit enhanced performance when merged. Lastly, we provide
an analysis of the different method components, compare ComPEFT with other PEFT
methods, and test its efficacy for compressing the residual of full fine-tuning.
Our code is available at https://github.com/prateeky2806/compeft.
Comment: 25 pages, 6 figures, 16 tables
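
The core recipe the abstract describes (sparsify the task vector, then ternarize it with a single rescaling factor, with no retraining) can be sketched in a few lines of NumPy. The density value and the mean-magnitude scale below are illustrative assumptions rather than the paper's exact choices.

    import numpy as np

    def compeft_style_compress(base, finetuned, density=0.05):
        """Sparsify and ternarize a fine-tuning residual (task vector).
        A rough sketch of the recipe described in the abstract."""
        tau = finetuned - base                      # task vector (fine-tuning residual)
        k = max(1, int(density * tau.size))
        thresh = np.sort(np.abs(tau).ravel())[-k]   # keep only the top-k magnitudes
        keep = np.abs(tau) >= thresh
        ternary = np.sign(tau) * keep               # values in {-1, 0, +1}
        scale = np.abs(tau[keep]).mean()            # single scalar rescaling factor
        return ternary.astype(np.int8), float(scale)

    def decompress(base, ternary, scale):
        """Reconstruct an 'expert' by adding back the rescaled ternary residual."""
        return base + scale * ternary

    rng = np.random.default_rng(0)
    base = rng.standard_normal((256, 256))
    finetuned = base + 0.01 * rng.standard_normal((256, 256))   # stand-in expert
    t, s = compeft_style_compress(base, finetuned, density=0.05)
    expert = decompress(base, t, s)
    # Storage drops from dense float residuals to a sparse ternary mask plus one scalar.
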