This work summarizes two strategies for accomplishing time-series (TS) tasks
with today's large language models (LLMs): LLM-for-TS, which designs and trains
a foundation model for TS data from scratch, and TS-for-LLM, which enables a
pre-trained LLM to handle TS data. Considering the insufficient accumulation of
TS data, limited computing resources, and the requirement for semantic context,
this work focuses on TS-for-LLM methods, where we aim to activate the LLM's
ability to work with TS data by designing a TS embedding method
suitable for the LLM. The proposed method is named TEST. It first tokenizes the
TS, then builds an encoder that embeds the tokens via instance-wise,
feature-wise, and text-prototype-aligned contrastive learning, then creates
prompts to make the LLM more receptive to the embeddings, and finally carries
out the TS tasks.
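The paper does not include code in this abstract; the following is a minimal sketch of what the text-prototype-aligned contrastive step could look like, assuming a simple 1D-convolutional token encoder and a frozen LLM whose input-embedding matrix supplies the text prototypes. All names (TSEncoder, instance_contrast, prototype_align) are illustrative, not the authors' API, and the self-labelling alignment loss is a crude stand-in for the paper's objective.

```python
# Sketch (not the authors' code): embed TS tokens and pull them toward
# text prototypes so a frozen LLM sees familiar regions of its space.
import torch
import torch.nn.functional as F

class TSEncoder(torch.nn.Module):
    """Maps a window of raw TS values into the LLM embedding space."""
    def __init__(self, in_ch: int, d_model: int):
        super().__init__()
        self.conv = torch.nn.Conv1d(in_ch, d_model, kernel_size=3, padding=1)
        self.proj = torch.nn.Linear(d_model, d_model)

    def forward(self, x):                      # x: (batch, in_ch, length)
        h = self.conv(x).mean(dim=-1)          # pool over the time axis
        return F.normalize(self.proj(h), dim=-1)

def instance_contrast(z1, z2, tau=0.1):
    """InfoNCE between two augmented views of the same TS instances."""
    logits = z1 @ z2.t() / tau                 # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def prototype_align(z, prototypes, tau=0.1):
    """Sharpen each TS embedding toward its nearest text prototype
    (a simplified self-labelling proxy for the alignment objective)."""
    sims = z @ prototypes.t() / tau            # (batch, n_protos)
    targets = sims.argmax(dim=-1)
    return F.cross_entropy(sims, targets)

# Usage: prototypes could be rows of the frozen LLM's input-embedding
# matrix, e.g. llm.get_input_embeddings().weight[:512].detach();
# here random vectors stand in so the sketch is self-contained.
enc = TSEncoder(in_ch=1, d_model=768)
x1 = torch.randn(8, 1, 96)                    # stand-in for view 1
x2 = torch.randn(8, 1, 96)                    # stand-in for view 2
protos = F.normalize(torch.randn(512, 768), dim=-1)
loss = instance_contrast(enc(x1), enc(x2)) + prototype_align(enc(x1), protos)
loss.backward()
```

In this reading, the alignment term is what lets a frozen LLM consume TS embeddings: the encoder is trained so its outputs land near token embeddings the LLM already understands.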
Experiments are carried out on TS classification and forecasting tasks using 8
LLMs with different architectures and sizes. Although its results do not
significantly outperform current SOTA models customized for TS tasks, by
treating the LLM as a pattern machine, it can endow the LLM with the ability to
process TS data without compromising its language ability. This paper is
intended to serve as a foundational work that will
inspire further research.