46 research outputs found

    RecExplainer: Aligning Large Language Models for Recommendation Model Interpretability

    Recommender systems are widely used in various online services, with embedding-based models being particularly popular due to their expressiveness in representing complex signals. However, these models often lack interpretability, making them less reliable and transparent for both users and developers. With the emergence of large language models (LLMs), we find that their capabilities in language expression, knowledge-aware reasoning, and instruction following are exceptionally powerful. Based on this, we propose a new model interpretation approach for recommender systems that uses LLMs as surrogate models which learn to mimic and comprehend target recommender models. Specifically, we introduce three alignment methods: behavior alignment, intention alignment, and hybrid alignment. Behavior alignment operates in the language space, representing user preferences and item information as text so the LLM learns the recommendation model's behavior; intention alignment works in the latent space of the recommendation model, using user and item representations to understand the model's behavior; hybrid alignment combines both the language and latent spaces for alignment training. To demonstrate the effectiveness of our methods, we evaluate from two perspectives, alignment effect and explanation generation ability, on three public datasets. Experimental results indicate that our approach effectively enables LLMs to comprehend the patterns of recommendation models and generate highly credible recommendation explanations.
    Comment: 12 pages, 8 figures, 4 tables
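
    The behavior-alignment idea described above can be illustrated with a minimal sketch: serialize a user's interaction history and the target recommender's ranked output into a (prompt, target) text pair on which an LLM could be fine-tuned. The function name, prompt wording, and toy data below are illustrative assumptions, not details taken from the paper.

    ```python
    # Hedged sketch: building one behavior-alignment training example.
    # All names and templates here are hypothetical, not from RecExplainer.

    def build_behavior_alignment_example(history, ranked_items, k=3):
        """Serialize a user's history and the target recommender's top-k
        ranking into a (prompt, target) text pair for LLM fine-tuning."""
        prompt = (
            "A user has interacted with: " + ", ".join(history) + ". "
            "Predict which items the recommendation model would rank highest."
        )
        target = "The model would recommend: " + ", ".join(ranked_items[:k]) + "."
        return prompt, target

    # Toy usage with made-up titles and a made-up ranking.
    prompt, target = build_behavior_alignment_example(
        history=["The Matrix", "Inception"],
        ranked_items=["Interstellar", "Tenet", "Blade Runner"],
    )
    ```

    Intention alignment would instead feed the recommender's latent user/item embeddings to the LLM (e.g. via a projection layer), and hybrid alignment would combine both signals; only the text-space variant is sketched here because it needs no model weights.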

    Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations

    Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as a memory bus, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs.
    Comment: 16 pages, 15 figures, 4 tables
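
    The LLM-as-brain, recommender-as-tools pattern described above can be sketched as a tiny dialogue turn: the agent records the query on a memory list, calls a retrieval tool and then a ranking tool, and composes a reply. The tool registry, keyword-overlap scoring, and memory list below are stand-ins for InteRecAgent's actual components (its planner, memory bus, and recommender models), chosen only so the sketch runs without any model weights.

    ```python
    # Hedged sketch of an LLM-as-brain / tools loop, loosely following
    # the InteRecAgent idea. Every component here is a toy stand-in.

    def retrieve_tool(query, catalog):
        """Retrieval stand-in: rank catalog items by keyword overlap."""
        words = query.lower().split()
        return sorted(catalog, key=lambda it: -sum(w in it.lower() for w in words))

    def rank_tool(candidates, history):
        """Personalization stand-in: prefer items sharing words with history."""
        seen = " ".join(history).lower()
        return sorted(candidates, key=lambda it: -sum(w in seen for w in it.lower().split()))

    TOOLS = {"retrieve": retrieve_tool, "rank": rank_tool}

    def agent_turn(user_query, history, catalog, memory):
        """One dialogue turn: log to memory, plan a fixed retrieve->rank
        tool chain (a real agent would let the LLM plan this), reply."""
        memory.append(user_query)  # memory-bus stand-in
        candidates = TOOLS["retrieve"](user_query, catalog)[:3]
        ranked = TOOLS["rank"](candidates, history)
        return "You might like: " + ", ".join(ranked[:2])

    catalog = ["space opera novel", "space documentary", "cooking show"]
    reply = agent_turn("something about space", ["space documentary"], catalog, [])
    ```

    The design point the sketch preserves is the division of labor: the "brain" only decides which tools to call and how to phrase the reply, while all catalog knowledge lives in the tools, so swapping in a real recommender changes no agent code.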

    The mechanisms and factors that induce trained immunity in arthropods and mollusks

    The immune system has traditionally been divided into adaptive and innate immunity, and it was long thought that only adaptive immunity could establish immune memory. However, many studies have shown that innate immunity can also build immunological memory, through epigenetic reprogramming and modifications that confer resistance to reinfection by pathogens, a phenomenon known as trained immunity. This paper reviews the roles of mitochondrial metabolism and epigenetic modifications and describes the molecular foundation of trained immunity in arthropods and mollusks. Mitochondrial metabolism and epigenetic modifications complement each other and play a key role in trained immunity.

    Content categorization for memory retrieval: A method for evaluating design performance

    Designers search their memories and retrieve relevant mental information during design brainstorming. The specific contents of retrieved memories can serve as stimuli for new ideas or act as barriers to innovation. These contents, which derive from individual life and design experience and are reflected in designers' creativity, can be divided into different categories. Appropriate categorization of retrieved memory exemplars remains a fundamental research issue. This study tentatively divided retrieved memory exemplars into eight categories, based on brainstorming sessions on the topic of library desk and chair design. A verification questionnaire validated the accuracy of the categorization. The result can be applied in design education to understand students' design performance and capabilities.

    Regulators help new immigrants settle down?
