Large language models (LLMs) are increasingly pivotal in a wide range of
natural language processing tasks. Access to pre-trained models, courtesy of
the open-source community, has made it possible to adapt these models to
specific applications for enhanced performance. However, the substantial
resources required for training these models necessitate efficient solutions.
This paper introduces CoLLiE, an efficient library that facilitates
collaborative training of large language models using 3D parallelism,
parameter-efficient fine-tuning (PEFT) methods, and optimizers such as Lion,
Adan, Sophia, LOMO, and AdaLomo. With its modular design and comprehensive
functionality, CoLLiE offers a balanced blend of efficiency, ease of use, and
customization. CoLLiE has demonstrated superior training efficiency compared
with prevalent solutions in both pre-training and fine-tuning scenarios.
Furthermore, we provide an empirical evaluation of the correlation between
model size and GPU memory consumption under different optimization methods, as
well as an analysis of throughput. Lastly, we carry out a comprehensive
comparison of various optimizers and PEFT methods in the instruction-tuning
context. CoLLiE is available at https://github.com/OpenLMLab/collie.
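
To make the parameter-efficient fine-tuning (PEFT) idea mentioned above concrete, the sketch below shows a generic LoRA setup using the Hugging Face `peft` and `transformers` packages. This is an illustrative assumption, not CoLLiE's own API; the model name and hyperparameters are placeholders chosen for the example.

```python
# Minimal LoRA sketch with Hugging Face `peft`/`transformers`.
# Generic PEFT illustration only -- NOT CoLLiE's API; model name and
# hyperparameters below are assumptions for demonstration purposes.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")

# Inject low-rank adapters into the attention projections; only these
# small adapter matrices are trained while the base weights stay frozen,
# which is what keeps the GPU memory footprint low.
lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # modules to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # typically well under 1% trainable
```

After wrapping, the model can be passed to an ordinary training loop or trainer; only the adapter parameters receive gradients, which is why PEFT methods trade a small amount of expressiveness for a large reduction in optimizer-state and gradient memory.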