Sequential recommendation aims to predict the next item of interest for a user,
based on their interaction history with previous items. In conventional
sequential recommenders, a common approach is to model item sequences using
discrete IDs, learning representations that encode sequential behaviors and
reflect user preferences. Inspired by recent success in empowering large
language models (LLMs) to understand and reason over data of diverse modalities
(e.g., image, audio, 3D points), a compelling research question arises: ``Can
LLMs understand and work with hidden representations from ID-based sequential
recommenders?'' To answer this, we propose a simple framework, RecInterpreter,
which examines the capacity of open-source LLMs to decipher the representation
space of sequential recommenders. Specifically, given multimodal pairs (\ie,
representations of interaction sequences and their text narrations),
RecInterpreter first uses a lightweight adapter to map the representations into
the token embedding space of the LLM.
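For illustration, such an adapter can be sketched as a small projection network. The PyTorch code below is a minimal sketch under assumed dimensions (a 64-dimensional recommender space and a 4096-dimensional LLaMA token embedding space); the two-layer design and the name \texttt{SeqRecAdapter} are hypothetical, not the paper's exact architecture.

\begin{verbatim}
import torch.nn as nn

class SeqRecAdapter(nn.Module):
    """Projects hidden states of an ID-based sequential recommender
    into the token embedding space of an LLM (assumed dimensions)."""
    def __init__(self, rec_dim=64, llm_dim=4096):
        super().__init__()
        # A lightweight two-layer MLP projection (assumed design).
        self.proj = nn.Sequential(
            nn.Linear(rec_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, seq_repr):
        # seq_repr: (batch, seq_len, rec_dim) recommender states
        # returns:  (batch, seq_len, llm_dim) pseudo token embeddings
        return self.proj(seq_repr)
\end{verbatim}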
Subsequently, it constructs a sequence-recovery prompt that encourages the LLM
to generate textual descriptions for the items within the interaction sequence.
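As a hedged illustration, such a prompt interleaves placeholder slots, at which the adapter's outputs are spliced into the LLM's input embeddings, with a natural-language instruction. The \texttt{<SeqEmb\_i>} placeholders and the wording below are hypothetical, not the paper's exact template.

\begin{verbatim}
def build_recovery_prompt(num_items):
    """Hypothetical sequence-recovery prompt; each <SeqEmb_i> slot
    marks where a projected item representation is inserted."""
    slots = " ".join(f"<SeqEmb_{i}>" for i in range(num_items))
    return (
        "Here are the hidden representations of a user's "
        f"interaction sequence: {slots}. "
        "Please describe, in order, the items in this sequence."
    )

# Example: prompt skeleton for a sequence of three items.
print(build_recovery_prompt(3))
\end{verbatim}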
Taking a step further, we propose a sequence-residual prompt instead, which
guides the LLM to identify the residual item by contrasting the sequence
representations before and after this residual item is integrated into the
existing sequence.
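A sketch of this prompt, again with hypothetical placeholders (\texttt{<SeqEmbBefore>}, \texttt{<SeqEmbAfter>}) and wording, could read as follows.

\begin{verbatim}
def build_residual_prompt():
    """Hypothetical sequence-residual prompt; the two slots receive
    the projected representations of the sequence excluding and
    including the residual item, so the LLM can contrast them."""
    return (
        "Here is the representation of a user's interaction "
        "sequence: <SeqEmbBefore>. After one more item is "
        "integrated, the representation becomes: <SeqEmbAfter>. "
        "Please describe the newly added item."
    )
\end{verbatim}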
Empirical results showcase that RecInterpreter empowers the exemplar LLM,
LLaMA, to understand hidden representations from ID-based sequential
recommenders, especially when guided by our sequence-residual prompts.
Furthermore, RecInterpreter enables LLaMA to
instantiate the oracle items generated by generative recommenders like
DreamRec, concretizing the item a user would ideally like to interact with next.
Code is available at https://github.com/YangZhengyi98/RecInterpreter.