RecExplainer: Aligning Large Language Models for Recommendation Model Interpretability
Recommender systems are widely used in various online services, with
embedding-based models being particularly popular due to their expressiveness
in representing complex signals. However, these models often lack
interpretability, making them less reliable and transparent for both users and
developers. With the emergence of large language models (LLMs), we find that
their capabilities in language expression, knowledge-aware reasoning, and
instruction following are exceptionally powerful. Based on this, we propose a
new model interpretation approach for recommender systems that uses LLMs as
surrogate models, learning to mimic and comprehend target recommender models.
Specifically, we introduce three alignment methods: behavior alignment,
intention alignment, and hybrid alignment. Behavior alignment operates in the
language space, representing user preferences and item information as text to
learn the recommendation model's behavior; intention alignment works in the
latent space of the recommendation model, using user and item representations
to understand the model's behavior; hybrid alignment combines both language and
latent spaces for alignment training. To demonstrate the effectiveness of our
methods, we conduct evaluations on three public datasets from two perspectives:
alignment effect and explanation-generation ability. Experimental results
indicate that our approach effectively enables LLMs to comprehend the patterns
of recommendation models and generate highly credible recommendation
explanations. Comment: 12 pages, 8 figures, 4 tables
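The behavior-alignment idea above, operating purely in the language space, can be sketched as the construction of instruction-tuning pairs in which the LLM surrogate is asked to reproduce the target recommender's output. This is a minimal illustrative sketch; the function and prompt format are assumptions, not the paper's actual implementation.

```python
# Sketch: building a behavior-alignment training example, where an LLM
# surrogate is taught, in the language space, to reproduce the recommendations
# of a target model. All names and the prompt template are illustrative.

def build_behavior_alignment_example(history, top_k_items):
    """Turn a user's interaction history and the target recommender's
    top-k output into an (instruction, response) pair for LLM tuning."""
    prompt = (
        "A user has interacted with the following items, in order:\n"
        + "\n".join(f"- {title}" for title in history)
        + "\nPredict the items the recommendation model will rank highest."
    )
    response = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(top_k_items))
    return {"instruction": prompt, "response": response}

example = build_behavior_alignment_example(
    history=["The Matrix", "Blade Runner"],
    top_k_items=["Ghost in the Shell", "Akira"],
)
```

Intention alignment would instead feed the target model's user and item embeddings into the LLM through an adapter, and hybrid alignment would combine both kinds of supervision.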
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by
leveraging extensive user behavior data. Despite their ability to act as
lightweight domain experts, they struggle to perform versatile tasks such as
providing explanations and engaging in conversations. On the other hand, large
language models (LLMs) represent a significant step towards artificial general
intelligence, showcasing remarkable capabilities in instruction comprehension,
commonsense reasoning, and human interaction. However, LLMs lack the knowledge
of domain-specific item catalogs and behavioral patterns, particularly in areas
that diverge from general world knowledge, such as online e-commerce.
Finetuning LLMs for each domain is neither economical nor efficient.
In this paper, we bridge the gap between recommender models and LLMs,
combining their respective strengths to create a versatile and interactive
recommender system. We introduce an efficient framework called InteRecAgent,
which employs LLMs as the brain and recommender models as tools. We first
outline a minimal set of essential tools required to transform LLMs into
InteRecAgent. We then propose an efficient workflow within InteRecAgent for
task execution, incorporating key components such as a memory bus, dynamic
demonstration-augmented task planning, and reflection. InteRecAgent enables
traditional recommender systems, such as ID-based matrix-factorization models,
to become interactive systems with a natural language interface through
the integration of LLMs. Experimental results on several public datasets show
that InteRecAgent achieves satisfactory performance as a conversational
recommender system, outperforming general-purpose LLMs. Comment: 16 pages, 15 figures, 4 tables
RecAI: Leveraging Large Language Models for Next-Generation Recommender Systems
This paper introduces RecAI, a practical toolkit designed to augment or even
revolutionize recommender systems with the advanced capabilities of Large
Language Models (LLMs). RecAI provides a suite of tools, including Recommender
AI Agent, Recommendation-oriented Language Models, Knowledge Plugin,
RecExplainer, and Evaluator, to facilitate the integration of LLMs into
recommender systems from multifaceted perspectives. The new generation of
recommender systems, empowered by LLMs, is expected to be more versatile,
explainable, conversational, and controllable, paving the way for more
intelligent and user-centric recommendation experiences. We hope that
open-sourcing RecAI helps accelerate the evolution of new, advanced recommender
systems. The source code of RecAI is available at
\url{https://github.com/microsoft/RecAI}. Comment: 4 pages. Webconf 2024 demo track
Cooperative Retriever and Ranker in Deep Recommenders
Deep recommender systems (DRS) are intensively applied in modern web
services. To deal with massive web content, DRS employ a two-stage
workflow, retrieval and ranking, to generate their recommendation results. The
retriever aims to efficiently select a small set of relevant candidates from
the entire item pool, while the ranker, usually more precise but more
time-consuming, is supposed to further identify the best items among the retrieved
candidates. Traditionally, the two components are trained either independently
or within a simple cascading pipeline, which leads to poor collaboration
between the two stages. Although some recent works have proposed training the
retriever and ranker jointly, severe limitations remain: item distribution
shift between training and inference, false negatives, and misalignment of
ranking order. As such, effective collaboration between the retriever and
ranker remains to be explored. Comment: 12 pages, 4 figures, WWW'2
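The two-stage retrieve-then-rank cascade that this paper builds on can be sketched as follows. The embeddings and scoring functions are toy stand-ins meant to show the workflow (cheap full-catalog scoring, then expensive rescoring of a small candidate set), not the paper's joint-training solution.

```python
import numpy as np

# Sketch of the standard two-stage DRS workflow. The "ranker" here is just a
# nonlinear rescoring function standing in for a heavier model.

rng = np.random.default_rng(0)
n_items, dim = 1000, 16
item_emb = rng.normal(size=(n_items, dim))  # toy item embedding table
user_emb = rng.normal(size=dim)             # toy user embedding

def retrieve(user_emb, item_emb, k=50):
    """Cheap first stage: dot-product scores over the full catalog,
    keeping the top-k candidate item ids."""
    scores = item_emb @ user_emb
    return np.argsort(-scores)[:k]

def rank(user_emb, item_emb, candidate_ids, k=10):
    """'Expensive' second stage, applied only to the retrieved candidates:
    a nonlinear rescoring that may reorder the retriever's output."""
    cand = item_emb[candidate_ids]
    scores = np.tanh(cand @ user_emb) + 0.1 * np.linalg.norm(cand, axis=1)
    return candidate_ids[np.argsort(-scores)][:k]

candidates = retrieve(user_emb, item_emb)
top10 = rank(user_emb, item_emb, candidates)
```

The limitations the abstract lists arise exactly here: the ranker is trained on candidates drawn from a different distribution than the retriever serves at inference, and negatives sampled for one stage may be false negatives for the other.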
Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations
The significant progress of large language models (LLMs) provides a promising
opportunity to build human-like systems for various practical applications.
However, when applied to specific task domains, an LLM pre-trained on a
general-purpose corpus may exhibit a deficit or inadequacy in two types of
domain-specific knowledge. One is a comprehensive set of domain data that is
typically large-scale and continuously evolving. The other is specific working
patterns of this domain reflected in the data. The absence or inadequacy of
such knowledge impacts the performance of the LLM. In this paper, we propose a
general paradigm that augments LLMs with DOmain-specific KnowledgE to enhance
their performance on practical applications, namely DOKE. This paradigm relies
on a domain knowledge extractor, working in three steps: 1) preparing effective
knowledge for the task; 2) selecting the knowledge for each specific sample;
and 3) expressing the knowledge in an LLM-understandable way. Then, the
extracted knowledge is incorporated through prompts, without any computational
cost of model fine-tuning. We instantiate the general paradigm on a widespread
application, i.e., recommender systems, where critical item attributes and
collaborative-filtering signals are incorporated. Experimental results
demonstrate that DOKE can substantially improve the performance of LLMs in
specific domains.
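The three-step knowledge extractor described above can be sketched for the recommendation instantiation. The knowledge base, its schema, and the prompt template below are illustrative assumptions, not DOKE's actual implementation.

```python
# Sketch of the DOKE paradigm's three steps: prepare domain knowledge,
# select the knowledge relevant to a sample, and express it as prompt text.
# All data and templates here are toy examples.

KNOWLEDGE_BASE = {
    # Step 1: prepared domain knowledge (item attributes plus a toy
    # collaborative-filtering signal: frequently co-purchased items).
    "wireless mouse": {"attrs": ["2.4GHz", "ergonomic"],
                       "co_purchased": ["mouse pad", "usb hub"]},
    "mouse pad": {"attrs": ["non-slip"],
                  "co_purchased": ["wireless mouse"]},
}

def select_knowledge(sample_items):
    """Step 2: pick only the knowledge entries relevant to this sample."""
    return {it: KNOWLEDGE_BASE[it] for it in sample_items if it in KNOWLEDGE_BASE}

def express_knowledge(selected):
    """Step 3: render the selected knowledge as natural-language text that
    can be prepended to the LLM prompt, with no fine-tuning involved."""
    lines = []
    for item, k in selected.items():
        lines.append(f"{item}: attributes {', '.join(k['attrs'])}; "
                     f"often bought with {', '.join(k['co_purchased'])}")
    return "\n".join(lines)

prompt_context = express_knowledge(select_knowledge(["wireless mouse", "keyboard"]))
```

Because the knowledge is injected purely through the prompt, the knowledge base can be large-scale and continuously updated without touching the LLM's weights.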
HULL SHAPE OPTIMIZATION OF SMALL UNDERWATER VEHICLE BASED ON KRIGING-BASED RESPONSE SURFACE METHOD AND MULTI-OBJECTIVE OPTIMIZATION ALGORITHM
Small underwater vehicles have unique advantages in ocean exploration. The resistance and volume of a vehicle are key factors affecting its underwater operation time. This paper aims to develop an effective method for obtaining the optimal hull shape of a small underwater vehicle using the Kriging-based response surface method (RSM) and a multi-objective optimization algorithm. First, the hydrodynamic performance of a small underwater vehicle is numerically investigated using the computational fluid dynamics (CFD) method, and the value ranges of the related design variables are determined. Mesh convergence is verified to ensure the accuracy of the calculation results. Then, by means of a Latin hypercube sampling (LHS) design of simulations, the Kriging-based RSM model is developed from the relation between each design variable of the vehicle and the output parameters. Based on the Kriging-based RSM model, the optimal hull shape of the vehicle is determined using the Screening method and the multi-objective genetic algorithm (MOGA). As a result, the vehicle's resistance decreases and its volume increases significantly.
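The sample-then-surrogate-then-optimize loop described above can be sketched end to end. Here a plain RBF interpolant stands in for the Kriging model, toy analytic functions stand in for the CFD evaluations, and a simple Pareto filter stands in for Screening/MOGA; all of these are assumptions for illustration only.

```python
import numpy as np

# Sketch of the optimization workflow: LHS sampling of design variables,
# surrogate fitting to the sampled responses, and a Pareto filter over the
# two objectives (minimize resistance, maximize volume).

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """One sample per stratum in each dimension, with strata randomly
    paired across dimensions."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def toy_cfd(x):
    """Stand-in for the CFD solver: resistance and volume as smooth
    functions of the (normalized) design variables."""
    resistance = (x ** 2).sum(axis=1) + 0.1
    volume = np.prod(0.5 + x, axis=1)
    return resistance, volume

def fit_rbf(X, y, eps=1.0):
    """Fit an RBF interpolant (a simple stand-in for the Kriging model)."""
    K = np.exp(-eps * ((X[:, None] - X[None]) ** 2).sum(-1))
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def predict_rbf(X, w, Xq, eps=1.0):
    K = np.exp(-eps * ((Xq[:, None] - X[None]) ** 2).sum(-1))
    return K @ w

def pareto_front(resistance, volume):
    """Indices of candidate designs not dominated by any design with
    strictly lower resistance AND strictly higher volume."""
    return [i for i in range(len(resistance))
            if not np.any((resistance < resistance[i]) & (volume > volume[i]))]

X = latin_hypercube(30, 2)                          # sample designs (LHS)
res, vol = toy_cfd(X)                               # "CFD" responses
w_res, w_vol = fit_rbf(X, res), fit_rbf(X, vol)     # build surrogates
Xq = latin_hypercube(200, 2)                        # dense candidate set
front = pareto_front(predict_rbf(X, w_res, Xq), predict_rbf(X, w_vol, Xq))
```

The cheap surrogate is what makes the dense candidate evaluation affordable; each true CFD run would otherwise be far too expensive to place inside the optimizer's inner loop.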
The Higgs boson inclusive decay channels $H\to b\bar{b}$ and $H\to gg$ up to four-loop level
The principle of maximum conformality (PMC) has been suggested to eliminate
the renormalization-scheme and renormalization-scale uncertainties, which are
unavoidable under conventional scale setting and are usually important sources
of error in theoretical estimations. In this paper, by applying PMC scale
setting, we analyze two important inclusive Standard Model Higgs decay
channels, $H\to b\bar{b}$ and $H\to gg$, up to four-loop and three-loop
levels, respectively. After PMC scale setting, it is found that the
conventional scale uncertainty for these two channels can be eliminated to a
high degree. A small residual initial-scale dependence remains for the Higgs
decay widths due to unknown higher-order $\{\beta_i\}$-terms. Up to four-loop
level, we obtain $\Gamma(H\to b\bar{b}) = \ldots$ MeV, and up to three-loop
level, we obtain $\Gamma(H\to gg) = \ldots$ MeV, where the first error is
caused by varying $M_H = \ldots$ GeV and the second error for
$\Gamma(H\to b\bar{b})$ is caused by varying the $\overline{\rm MS}$-running
mass $m_b(m_b) = \ldots$ GeV. Taking $H\to b\bar{b}$ as an example, we present
a comparison of three BLM-based scale-setting approaches, e.g. the PMC-I
approach based on the PMC-BLM correspondence, the $R_\delta$-scheme, and the
seBLM approach, all of which are designed to provide effective ways to
identify the non-conformal $\{\beta_i\}$-series at each perturbative order. At
the four-loop level, all these approaches lead to good pQCD convergence; they
have almost the same pQCD series, and their predictions are almost independent
of the initial renormalization scale. In this sense, these approaches are
equivalent to each other. Comment: 14 pages, 7 figures. References updated and discussions improved. To
be published in Eur.Phys.J.
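Schematically (in generic notation, not the paper's channel-specific series), PMC scale setting absorbs the non-conformal $\{\beta_i\}$-terms of a conventional pQCD expansion into the running coupling, fixing an effective scale at each order:

```latex
% Conventional series at an arbitrary renormalization scale \mu_r
% (C_i are conformal coefficients, D_2 the \beta_0-dependent piece):
\rho = C_1\, a_s(\mu_r) + \bigl[C_2 + \beta_0\, D_2\bigr]\, a_s^2(\mu_r) + \cdots
% After PMC scale setting, the \beta_0-term is absorbed into the running
% coupling, leaving a conformal series with process-determined scales Q_i:
\rho = C_1\, a_s(Q_1) + C_2\, a_s^2(Q_2) + \cdots
```

Because the $Q_i$ are fixed by the theory rather than chosen by hand, the residual dependence on the initial choice of $\mu_r$ enters only through the uncomputed higher-order $\{\beta_i\}$-terms, which is the "small residual initial-scale dependence" the abstract refers to.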