117 research outputs found
Causal Collaborative Filtering
Recommender systems are important and valuable tools for many personalized
services. Collaborative Filtering (CF) algorithms -- among others -- are
fundamental algorithms driving the underlying mechanism of personalized
recommendation. Many of the traditional CF algorithms are designed based on the
fundamental idea of mining or learning correlative patterns from data for
matching, including memory-based methods such as user/item-based CF as well as
learning-based methods such as matrix factorization and deep learning models.
However, advancing from correlative learning to causal learning is an important
problem, because causal/counterfactual modeling can help us to think outside of
the observational data for user modeling and personalization. In this paper, we
propose Causal Collaborative Filtering (CCF) -- a general framework for
modeling causality in collaborative filtering and recommendation. We first
provide a unified causal view of CF and mathematically show that many of the
traditional CF algorithms are actually special cases of CCF under simplified
causal graphs. We then propose a conditional intervention approach for
do-calculus so that we can estimate the causal relations based on
observational data. Finally, we further propose a general counterfactual
constrained learning framework for estimating the user-item preferences.
Experiments are conducted on two types of real-world datasets -- traditional
and randomized trial data -- and results show that our framework can improve
the recommendation performance of many CF algorithms.Comment: 14 pages, 5 figures, 3 table
OpenAGI: When LLM Meets Domain Experts
Human intelligence has the remarkable ability to assemble basic skills into
complex ones so as to solve complex tasks. This ability is equally important
for Artificial Intelligence (AI), and thus, we assert that in addition to the
development of large, comprehensive intelligent models, it is equally crucial
to equip such models with the capability to harness various domain-specific
expert models for complex task-solving in the pursuit of Artificial General
Intelligence (AGI). Recent developments in Large Language Models (LLMs) have
demonstrated remarkable learning and reasoning abilities, making them promising
as a controller to select, synthesize, and execute external models to solve
complex tasks. In this project, we develop OpenAGI, an open-source AGI research
platform, specifically designed to offer complex, multi-step tasks and
accompanied by task-specific datasets, evaluation metrics, and a diverse range
of extensible models. OpenAGI formulates complex tasks as natural language
queries, serving as input to the LLM. The LLM subsequently selects,
synthesizes, and executes models provided by OpenAGI to address the task.
Furthermore, we propose a Reinforcement Learning from Task Feedback (RLTF)
mechanism, which uses the task-solving result as feedback to improve the LLM's
task-solving ability. Thus, the LLM is responsible for synthesizing various
external models for solving complex tasks, while RLTF provides feedback to
improve its task-solving ability, enabling a feedback loop for self-improving
AI. We believe that the paradigm of LLMs operating various expert models for
complex task-solving is a promising approach towards AGI. To facilitate the
community's long-term improvement and evaluation of AGI's ability, we
open-source the code, benchmark, and evaluation methods of the OpenAGI project
at https://github.com/agiresearch/OpenAGI.

Comment: 18 pages, 6 figures, 7 tables
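The controller-plus-expert-models loop described above can be sketched in miniature. The toy planner, expert "models," and reward function below are hypothetical stand-ins for illustration, not the project's actual API:

```python
# Expert "models" stand in for the domain-specific models OpenAGI exposes.
EXPERT_MODELS = {
    "lowercase": str.lower,
    "strip": str.strip,
    "reverse": lambda s: s[::-1],
}

def plan(task_query):
    """Stand-in for the LLM controller: map a natural-language task
    to a pipeline of expert models."""
    if "normalize" in task_query:
        return ["strip", "lowercase"]
    return []

def execute(task_query, text):
    """Run the planned pipeline of expert models over the input."""
    for name in plan(task_query):
        text = EXPERT_MODELS[name](text)
    return text

def task_feedback(output, expected):
    """RLTF-style scalar reward: task-solving quality fed back to the planner."""
    return 1.0 if output == expected else 0.0

result = execute("normalize this string", "  Hello World ")
reward = task_feedback(result, "hello world")
```

In the real system the `plan` step is the LLM's reasoning, and the reward would update the LLM rather than a fixed rule table.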
Learning Personalized Risk Preferences for Recommendation
The rapid growth of e-commerce has made people accustomed to shopping online.
Before making purchases on e-commerce websites, most consumers tend to rely on
rating scores and review information to make purchase decisions. With this
information, they can infer the quality of products to reduce the risk of
purchase. Specifically, items with high rating scores and good reviews tend to
be less risky, while items with low rating scores and bad reviews might be
risky to purchase. On the other hand, the purchase behaviors will also be
influenced by consumers' tolerance of risks, known as the risk attitudes.
Economists have studied risk attitudes for decades. These studies reveal that
people are not always rational enough when making decisions, and their risk
attitudes may vary in different circumstances.
Most existing works on recommender systems do not consider users' risk
attitudes in modeling, which may lead to inappropriate recommendations for
users. For example, suggesting a risky item to a risk-averse person or a
conservative item to a risk-seeking person may degrade the user
experience. In this paper, we propose a novel risk-aware recommendation
framework that integrates machine learning and behavioral economics to uncover
the risk mechanism behind users' purchasing behaviors. Concretely, we first
develop statistical methods to estimate the risk distribution of each item and
then incorporate the Nobel-prize-winning Prospect Theory into our model to learn how
users choose from probabilistic alternatives that involve risks, where the
probabilities of the outcomes are uncertain. Experiments on several e-commerce
datasets demonstrate that our approach can achieve better performance than many
classical recommendation approaches, and further analyses also verify the
advantages of risk-aware recommendation beyond accuracy.
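The Prospect Theory component the abstract draws on can be illustrated with the standard Kahneman-Tversky value function; the parameter values below are the commonly cited empirical estimates, not necessarily those fitted in the paper:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, convex and
    steeper for losses (loss aversion, lam > 1)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a loss hurts more than an equal-sized gain pleases.
gain = prospect_value(10.0)   # ~ 7.59
loss = prospect_value(-10.0)  # ~ -17.07
```

A risk-aware recommender can score an item by applying such a value function over the item's estimated outcome (rating) distribution, so that two items with the same expected rating but different variance are ranked differently for risk-averse versus risk-seeking users.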
AIOS: LLM Agent Operating System
The integration and deployment of large language model (LLM)-based
intelligent agents have been fraught with challenges that compromise their
efficiency and efficacy. Among these issues are sub-optimal scheduling and
resource allocation of agent requests over the LLM, the difficulties in
maintaining context during interactions between agent and LLM, and the
complexities inherent in integrating heterogeneous agents with different
capabilities and specializations. The rapid increase of agent quantity and
complexity further exacerbates these issues, often leading to bottlenecks and
sub-optimal utilization of resources. Inspired by these challenges, this paper
presents AIOS, an LLM agent operating system, which embeds large language
models into the operating system (OS) as the brain of the OS, enabling an operating
system "with soul" -- an important step towards AGI. Specifically, AIOS is
designed to optimize resource allocation, facilitate context switch across
agents, enable concurrent execution of agents, provide tool service for agents,
and maintain access control for agents. We present the architecture of such an
operating system, outline the core challenges it aims to resolve, and provide
the basic design and implementation of the AIOS. Our experiments on concurrent
execution of multiple agents demonstrate the reliability and efficiency of our
AIOS modules. Through this, we aim to not only improve the performance and
efficiency of LLM agents but also to pave the way for better development and
deployment of the AIOS ecosystem in the future. The project is open-source at
https://github.com/agiresearch/AIOS.

Comment: 14 pages, 5 figures, 5 tables; comments and suggestions are
appreciated
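The scheduling concern the abstract raises (serving many agents' requests over one shared LLM) can be sketched as a simple priority queue; the class and names below are an illustrative stand-in, not the AIOS implementation:

```python
import heapq

class AgentScheduler:
    """Queue agent requests to a shared LLM and serve them by priority,
    so no single agent monopolizes the model."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker preserves FIFO order within a priority

    def submit(self, agent_name, request, priority=0):
        heapq.heappush(self._queue, (priority, self._counter, agent_name, request))
        self._counter += 1

    def run_next(self):
        """Dispatch the highest-priority pending request (lower value = sooner)."""
        if not self._queue:
            return None
        _, _, agent, request = heapq.heappop(self._queue)
        return agent, request

sched = AgentScheduler()
sched.submit("travel_agent", "book flight", priority=1)
sched.submit("math_agent", "solve integral", priority=0)
order = [sched.run_next()[0], sched.run_next()[0]]  # math_agent served first
```

A real agent OS layers context switching, tool services, and access control on top of this dispatch core.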
GenRec: Large Language Model for Generative Recommendation
In recent years, large language models (LLM) have emerged as powerful tools
for diverse natural language processing tasks. However, their potential for
recommender systems under the generative recommendation paradigm remains
relatively unexplored. This paper presents an innovative approach to
recommendation systems using large language models (LLMs) based on text data.
We present a novel LLM for generative recommendation (GenRec)
that utilizes the expressive power of LLMs to directly generate the target item
to recommend, rather than calculating a ranking score for each candidate item
one by one as in traditional discriminative recommendation. GenRec uses the
LLM's understanding ability to interpret context, learn user preferences, and
generate relevant recommendations. Our proposed approach leverages the vast
knowledge encoded in large language models to accomplish recommendation tasks.
We first formulate specialized prompts to enhance the ability of the LLM to
comprehend recommendation tasks. Subsequently, we use these prompts to
fine-tune the LLaMA backbone LLM on a dataset of user-item interactions,
represented by textual data, to capture user preferences and item
characteristics. Our research underscores the potential of LLM-based generative
recommendation in revolutionizing the domain of recommendation systems and
offers a foundational framework for future explorations in this field. We
conduct extensive experiments on benchmark datasets, and the results show
that GenRec achieves significantly better results on large datasets.
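The prompt-formulation step described above can be sketched as follows; the template wording is hypothetical, not the paper's exact prompt:

```python
def build_prompt(user_history):
    """Turn a user's textual interaction history into an instruction-style
    prompt asking the LLM to generate the next item directly."""
    items = ", ".join(user_history)
    return (
        "The user has interacted with the following items: "
        f"{items}. Predict the next item this user will interact with:"
    )

prompt = build_prompt(["The Matrix", "Inception", "Interstellar"])
```

Fine-tuning the backbone LLM on many such (prompt, next-item) pairs is what lets the model generate the target item token by token instead of scoring every candidate.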
Fairness in Recommendation: Foundations, Methods and Applications
As one of the most pervasive applications of machine learning, recommender
systems are playing an important role in assisting human decision making. The
satisfaction of users and the interests of platforms are closely related to the
quality of the generated recommendation results. However, as highly
data-driven systems, recommender systems can be affected by data or algorithmic
bias and thus generate unfair results, which can undermine users' trust in
these systems. As a result, it is crucial to address the potential unfairness
problems in recommendation settings. Recently, there has been growing attention
on fairness considerations in recommender systems with more and more literature
on approaches to promote fairness in recommendation. However, these studies are
rather fragmented and lack a systematic organization, making it difficult
for new researchers to enter the domain. This motivates us to provide a
systematic survey of existing works on fairness in recommendation. This survey
focuses on the foundations for fairness in recommendation literature. It first
presents a brief introduction about fairness in basic machine learning tasks
such as classification and ranking in order to provide a general overview of
fairness research, as well as introduce the more complex situations and
challenges that need to be considered when studying fairness in recommender
systems. After that, the survey will introduce fairness in recommendation with
a focus on the taxonomies of current fairness definitions, the typical
techniques for improving fairness, as well as the datasets for fairness studies
in recommendation. The survey also talks about the challenges and opportunities
in fairness research with the hope of promoting the fair recommendation
research area and beyond.

Comment: Accepted by ACM Transactions on Intelligent Systems and Technology
(TIST)
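One common family of fairness definitions the survey covers, demographic parity of exposure between item groups, can be computed in a few lines; the grouping and data below are illustrative:

```python
def exposure_gap(recommendations, group_of):
    """Absolute difference in the share of recommendation slots that each
    of two item groups ("A" and "B") receives. 0 means parity."""
    counts = {"A": 0, "B": 0}
    total = 0
    for rec_list in recommendations:
        for item in rec_list:
            counts[group_of[item]] += 1
            total += 1
    return abs(counts["A"] - counts["B"]) / total

group_of = {"i1": "A", "i2": "A", "i3": "B", "i4": "B"}
recs = [["i1", "i2"], ["i1", "i3"]]  # group A gets 3 of 4 slots
gap = exposure_gap(recs, group_of)   # (3 - 1) / 4 = 0.5
```

Recommendation fairness is harder than the classification case because exposure accumulates over ranked lists and many users, which is exactly the added complexity the survey highlights.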
Counterfactual Collaborative Reasoning
Causal reasoning and logical reasoning are two important types of reasoning
abilities for human intelligence. However, their relationship has not been
extensively explored in the context of machine intelligence. In this paper, we
explore how the two reasoning abilities can be jointly modeled to enhance both
accuracy and explainability of machine learning models. More specifically, by
integrating two important types of reasoning ability -- counterfactual
reasoning and (neural) logical reasoning -- we propose Counterfactual
Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to
improve the performance. In particular, we use recommender systems as an
example to show how CCR alleviates data scarcity, improves accuracy, and enhances
transparency. Technically, we leverage counterfactual reasoning to generate
"difficult" counterfactual training examples for data augmentation, which --
together with the original training examples -- can enhance the model
performance. Since the augmented data is model-agnostic, it can be used to
enhance any model, enabling the wide applicability of the technique. Besides,
most of the existing data augmentation methods focus on "implicit data
augmentation" over users' implicit feedback, while our framework conducts
"explicit data augmentation" over users explicit feedback based on
counterfactual logic reasoning. Experiments on three real-world datasets show
that CCR achieves better performance than non-augmented models and implicitly
augmented models, and also improves model transparency by generating
counterfactual explanations.
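A minimal sketch of the "difficult counterfactual example" idea described above, assuming a reference model and a swap-one-item perturbation; the toy model and data are illustrative, not the paper's method:

```python
def counterfactual_examples(history, label, model, candidate_items):
    """Swap each history item for each candidate item; keep only swaps that
    flip the model's predicted label -- these are the hard counterfactuals
    used to augment the explicit-feedback training data."""
    augmented = []
    for i in range(len(history)):
        for item in candidate_items:
            if item == history[i]:
                continue
            cf = history[:i] + [item] + history[i + 1:]
            cf_label = model(cf)
            if cf_label != label:
                augmented.append((cf, cf_label))
    return augmented

# Toy "model": predicts like (1) iff the history contains "horror".
model = lambda h: int("horror" in h)
aug = counterfactual_examples(["comedy", "horror"], 1, model, ["drama"])
# -> [(["comedy", "drama"], 0)]
```

Because the kept pairs record which single change flips the outcome, the same machinery that augments training data also yields a counterfactual explanation for the original prediction.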
The evolving landscape of big data analytics and ESG materiality mapping
Raging hurricanes, devastating floods, sea-level rise, heatwaves, and other extreme weather conditions are now attributed to climate change. The authors propose that climate change poses a significant investment risk in terms of economic losses and societal disruptions such as migration, infectious diseases, and increasing vulnerability to more frequently recurring weather events. They discuss the optimal use of big data and data analytics, along with artificial intelligence, to assess the materiality of these potential risks in portfolios. Further, they draw on emerging and established approaches through two case studies to show how the overall investment management community can benchmark its exposure, risk, and vulnerabilities, coupled with future impacts and building resiliency, across portfolio management and investments.