This paper addresses the pressing need for Parameter-Efficient Fine-Tuning
(PEFT) of Large Language Models (LLMs). While LLMs possess remarkable
capabilities, their enormous parameter counts and associated computational
demands hinder their practicality and scalability for real-world applications.
Our position paper reviews the current state of PEFT, argues for further
research on the topic, and identifies significant challenges and open issues
that must be addressed to fully harness the powerful capabilities of LLMs.
These challenges include novel efficient PEFT architectures, PEFT for
different learning settings, PEFT combined with model compression techniques,
and PEFT for multi-modal LLMs. By presenting this position paper, we aim to
stimulate further research and foster discussion around more efficient and
accessible PEFT for LLMs.