We investigate various prompting strategies for enhancing personalized
recommendation performance with large language models (LLMs) through input
augmentation. Our proposed approach, termed LLM-Rec, encompasses four distinct
prompting strategies: (1) basic prompting, (2) recommendation-driven prompting,
(3) engagement-guided prompting, and (4) recommendation-driven +
engagement-guided prompting. Our empirical experiments show that incorporating
the augmented input text generated by LLM leads to improved recommendation
performance. Recommendation-driven and engagement-guided prompting strategies
are found to elicit the LLM's understanding of global and local item
characteristics. This finding highlights the importance of leveraging diverse
prompts and input-augmentation techniques to enhance the recommendation
capabilities of LLMs.
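
To make the four strategies concrete, the sketch below shows one plausible way the augmentation prompts could be constructed for a single item. The template wording, the function name build_prompts, and the neighbor_titles engagement signal are illustrative assumptions for exposition only, not the paper's exact prompts or released code.

# Illustrative sketch (not the authors' implementation): hypothetical prompt
# templates for the four LLM-Rec input-augmentation strategies. Template
# wording and the `neighbor_titles` input are assumptions for illustration.

def build_prompts(item_description: str, neighbor_titles: list[str]) -> dict[str, str]:
    """Return one augmentation prompt per strategy for a single item."""
    # Items co-engaged by the same users, used as a local (engagement) signal.
    engagement_context = "; ".join(neighbor_titles)
    return {
        # (1) Basic prompting: paraphrase/enrich the raw description.
        "basic": f"Describe the following item in more detail:\n{item_description}",
        # (2) Recommendation-driven prompting: steer the LLM toward the
        # recommendation goal when enriching the description.
        "rec_driven": (
            "Describe the following item so that the description is useful "
            f"for recommending it to users:\n{item_description}"
        ),
        # (3) Engagement-guided prompting: condition on items that users
        # engaged with alongside the target item.
        "eng_guided": (
            f"Users who liked this item also liked: {engagement_context}.\n"
            f"Summarize what these items have in common with:\n{item_description}"
        ),
        # (4) Recommendation-driven + engagement-guided prompting.
        "rec_eng": (
            f"Users who liked this item also liked: {engagement_context}.\n"
            "Write a description that would help recommend the following "
            f"item to those users:\n{item_description}"
        ),
    }

# Example usage: the LLM responses to these prompts would serve as the
# augmented input text combined with the original item text before being
# fed to the downstream recommendation model.
prompts = build_prompts(
    "A cozy mystery novel set in a small coastal town.",
    ["The Seaside Detective", "Harbor Secrets"],
)
for strategy, prompt in prompts.items():
    print(f"--- {strategy} ---\n{prompt}\n")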