In this work, we leverage pre-trained Large Language Models (LLMs) to enhance
time-series forecasting. Mirroring the growing interest in unifying models for
Natural Language Processing and Computer Vision, we envision creating an
analogous model for long-term time-series forecasting. Because large-scale time-series data for building robust foundation models remain scarce, our approach, LLM4TS, focuses on leveraging the strengths of pre-trained LLMs. By combining time-series patching with temporal encoding, we enhance the ability of LLMs to handle time-series data effectively; a minimal sketch of this input pipeline follows.
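To make the patching-plus-temporal-encoding idea concrete, here is a minimal PyTorch sketch, not the paper's implementation: it splits a univariate series into fixed-length patches, projects each patch into the backbone's embedding space, and adds a learned projection of per-timestep temporal features (hour, weekday, etc.). All module names and hyperparameters (patch_len, stride, d_model, n_time_feats) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchAndTemporalEncode(nn.Module):
    """Illustrative sketch: patch a series and add temporal encodings.

    Names and sizes are hypothetical; the actual LLM4TS design may differ.
    """

    def __init__(self, patch_len=16, stride=8, d_model=768, n_time_feats=4):
        super().__init__()
        self.patch_len, self.stride = patch_len, stride
        self.value_proj = nn.Linear(patch_len, d_model)    # patch -> token embedding
        self.time_proj = nn.Linear(n_time_feats, d_model)  # timestamp features -> embedding

    def forward(self, x, time_feats):
        # x: (batch, seq_len) raw values
        # time_feats: (batch, seq_len, n_time_feats), e.g. hour/weekday/month encodings
        patches = x.unfold(-1, self.patch_len, self.stride)  # (B, n_patches, patch_len)
        # Stamp each patch with the temporal features of its first timestep.
        starts = torch.arange(0, x.size(1) - self.patch_len + 1, self.stride)
        stamp = time_feats[:, starts, :]                     # (B, n_patches, n_time_feats)
        return self.value_proj(patches) + self.time_proj(stamp)  # (B, n_patches, d_model)
```

The resulting patch tokens can then be fed to the pre-trained LLM backbone in place of word embeddings.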
Inspired by supervised fine-tuning in the chatbot domain, we adopt a two-stage fine-tuning process: we first conduct supervised fine-tuning to orient the LLM towards time-series data, then perform task-specific downstream fine-tuning for forecasting; a schematic of this schedule follows.
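The following is a schematic of the two-stage schedule under stated assumptions: `autoregressive_loss` and `forecast_loss` are hypothetical model methods standing in for the stage-1 and stage-2 objectives, and the loaders, epoch counts, and learning rates are placeholders rather than the paper's hyperparameters.

```python
import torch

def two_stage_finetune(model, ts_loader, task_loader,
                       stage1_epochs=10, stage2_epochs=20):
    # Only PEFT-enabled parameters (see the PEFT sketch below) receive gradients.
    trainable = lambda: (p for p in model.parameters() if p.requires_grad)

    # Stage 1: supervised fine-tuning to orient the LLM towards time-series data.
    opt = torch.optim.AdamW(trainable(), lr=1e-4)
    for _ in range(stage1_epochs):
        for batch in ts_loader:
            loss = model.autoregressive_loss(batch)  # hypothetical stage-1 objective
            opt.zero_grad()
            loss.backward()
            opt.step()

    # Stage 2: task-specific downstream fine-tuning for long-term forecasting.
    opt = torch.optim.AdamW(trainable(), lr=1e-5)
    for _ in range(stage2_epochs):
        for history, future in task_loader:
            loss = model.forecast_loss(history, future)  # hypothetical stage-2 objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```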
Furthermore, to unlock the flexibility of pre-trained LLMs without extensive parameter adjustments, we adopt several Parameter-Efficient Fine-Tuning (PEFT) techniques, illustrated by the sketch below.
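As one illustrative PEFT recipe, the sketch below freezes the backbone and leaves only name-matched parameters (here, layer norms) trainable. The name heuristic and the choice of trainable parameters are assumptions; the actual configuration may combine other techniques such as LoRA.

```python
def apply_peft(model, trainable_keys=("ln", "norm")):
    """Freeze the backbone; keep only parameters whose names contain one of
    trainable_keys (here, layer norms) trainable. Illustrative recipe only."""
    for name, param in model.named_parameters():
        param.requires_grad = any(key in name.lower() for key in trainable_keys)
    n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
    n_total = sum(p.numel() for p in model.parameters())
    print(f"trainable: {n_train:,} / {n_total:,} ({100 * n_train / n_total:.2f}%)")
    return model

# Example with a GPT-2 backbone (a common choice for LLM-based forecasters):
# from transformers import GPT2Model
# backbone = apply_peft(GPT2Model.from_pretrained("gpt2"))
```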
Drawing on these innovations, LLM4TS achieves state-of-the-art results in long-term forecasting. Our model has also proven to be both a robust representation learner and an effective few-shot learner, thanks to the knowledge transferred from the pre-trained LLM.