EELBERT: Tiny Models through Dynamic Embeddings
We introduce EELBERT, an approach for compressing transformer-based models
(e.g., BERT) with minimal impact on the accuracy of downstream tasks. This is
achieved by replacing the input embedding layer of the model with dynamic, i.e.
on-the-fly, embedding computations. Since the input embedding layer accounts
for a significant fraction of the model size, especially for the smaller BERT
variants, replacing this layer with an embedding computation function helps us
reduce the model size significantly. Empirical evaluation on the GLUE benchmark
shows that our BERT variants (EELBERT) suffer minimal regression compared to
the traditional BERT models. Through this approach, we are able to develop our
smallest model UNO-EELBERT, which achieves a GLUE score within 4% of fully
trained BERT-tiny, while being 15x smaller (1.2 MB) in size.

Comment: EMNLP 2023 Industry Track. 9 pages, 2 figures, 5 tables.
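The abstract does not spell out how the on-the-fly embeddings are computed. Below is a minimal sketch of one plausible scheme in this spirit: hashing a token's character n-grams into a fixed-width vector, so that no V x d embedding table needs to be stored. The function name `dynamic_embedding` and all parameters are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib
import numpy as np

def dynamic_embedding(token: str, dim: int = 128, n: int = 3) -> np.ndarray:
    """Compute a token embedding on the fly by hashing its character
    n-grams into a fixed-size vector, instead of looking the token up
    in a stored embedding matrix. (Hypothetical sketch, not EELBERT's
    published method.)"""
    vec = np.zeros(dim, dtype=np.float32)
    padded = f"#{token}#"  # boundary markers distinguish prefixes/suffixes
    ngrams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    for gram in ngrams:
        # Hash each n-gram to a deterministic index and sign.
        h = int(hashlib.md5(gram.encode("utf-8")).hexdigest(), 16)
        idx = h % dim
        sign = 1.0 if (h >> 1) % 2 == 0 else -1.0
        vec[idx] += sign
    # Normalize so magnitude is independent of token length.
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Embeddings are computed per call; nothing is stored per vocabulary entry.
e1 = dynamic_embedding("transformers")
e2 = dynamic_embedding("transformer")
print(e1.shape, float(e1 @ e2))  # similar tokens share n-grams
```

The storage saving comes from replacing the learned lookup table (vocabulary size times hidden dimension parameters) with a deterministic function of the token string, which is why the smallest variants shrink the most.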