The Specification of Requirements in the MADAE-Pro Software Process
MADAE-Pro is an ontology-driven process for multi-agent domain and application engineering which promotes the construction and reuse of families of agent-oriented applications. This article introduces MADAE-Pro, emphasizing the description of its domain analysis and application requirements engineering phases and showing how software artifacts produced in the former are reused in the latter. Illustrative examples are extracted from two case studies we have conducted to evaluate MADAE-Pro. The first case study assesses the Multi-Agent Domain Engineering sub-process of MADAE-Pro through the development of a multi-agent system family of recommender systems supporting alternative (collaborative, content-based and hybrid) filtering techniques. The second evaluates the Multi-Agent Application Engineering sub-process of MADAE-Pro through the construction of InfoTrib, a Tax Law recommender system which provides recommendations based on new tax law information items using a content-based filtering technique. ONTOSERS and InfoTrib were modeled using ONTORMAS, a knowledge-based tool for supporting and automating the tasks of MADAE-Pro.
Optimism Based Exploration in Large-Scale Recommender Systems
Bandit learning algorithms have been an increasingly popular design choice for recommender systems. Despite the strong interest in bandit learning from the community, there remain multiple bottlenecks that prevent many bandit learning approaches from productionalization. Two of the most important bottlenecks are scaling to multi-task settings and A/B testing. Classic bandit algorithms, especially those leveraging contextual information, often require reward signals for uncertainty estimation, which hinders their adoption in multi-task recommender systems. Moreover, unlike supervised learning algorithms, bandit learning algorithms place great emphasis on the data collection process through their explorative nature. Such explorative behavior induces unfair evaluation for bandit learning agents in a classic A/B test setting. In this work, we present a novel design of a production bandit learning life-cycle for recommender systems, along with a novel set of metrics to measure their efficiency in user exploration. We show through large-scale production recommender system experiments and in-depth analysis that our bandit agent design improves personalization for the production recommender system and that our experiment design fairly evaluates the performance of bandit learning algorithms.
User evaluation of a market-based recommender system
Recommender systems have been developed for a wide variety of applications (ranging from books, to holidays, to web pages). These systems have used a number of different approaches, since no one technique is best for all users in all situations. Given this, we believe that to be effective, systems should incorporate a wide variety of such techniques and then some form of overarching framework should be put in place to coordinate them so that only the best recommendations (from whatever source) are presented to the user. To this end, in our previous work, we detailed a market-based approach in which various recommender agents competed with one another to present their recommendations to the user. We showed through theoretical analysis and empirical evaluation with simulated users that an appropriately designed marketplace should be able to provide effective coordination. Building on this, we now report on the development of this multi-agent system and its evaluation with real users. Specifically, we show that our system is capable of consistently giving high quality recommendations, that the best recommendations that could be put forward are actually put forward, and that the combination of recommenders performs better than any constituent recommender.
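The coordination mechanism described here is an auction among competing recommender agents. A minimal toy sketch of one such marketplace round (the `Agent`/`Marketplace` classes and the bidding rule are our own illustration, not the paper's actual mechanism):

```python
class Agent:
    """A recommender agent with a budget it spends on display slots."""

    def __init__(self, name, budget=100.0):
        self.name = name
        self.budget = budget

    def bid(self, item, confidence):
        # Bid proportionally to confidence in the item, capped by budget.
        return min(self.budget, confidence * 10.0)

class Marketplace:
    """Shows the top-bidding recommendations; winners pay their bids."""

    def __init__(self, slots=3):
        self.slots = slots

    def run_round(self, offers):
        # offers: list of (agent, item, confidence) tuples
        bids = [(agent.bid(item, conf), agent, item)
                for agent, item, conf in offers]
        bids.sort(key=lambda b: b[0], reverse=True)
        winners = bids[: self.slots]
        for amount, agent, _ in winners:
            agent.budget -= amount  # pay for the display slot
        return [(agent.name, item) for _, agent, item in winners]

    def reward(self, agent, amount):
        agent.budget += amount  # e.g. the user accepted the recommendation
```

Agents whose recommendations users accept recoup their bids, so budget flows toward the recommenders that perform best, which is the self-coordinating property the abstract claims for the marketplace.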
Whole-Chain Recommendations
With the recent prevalence of Reinforcement Learning (RL), there have been
tremendous interest in developing RL-based recommender systems. In practical
recommendation sessions, users will sequentially access multiple scenarios,
such as the entrance pages and the item detail pages, and each scenario has its
specific characteristics. However, the majority of existing RL-based
recommender systems focus on optimizing one strategy for all scenarios or
separately optimizing each strategy, which could lead to sub-optimal overall
performance. In this paper, we study the recommendation problem with multiple
(consecutive) scenarios, i.e., whole-chain recommendations. We propose a
multi-agent RL-based approach (DeepChain), which can capture the sequential
correlation among different scenarios and jointly optimize multiple
recommendation strategies. To be specific, all recommender agents (RAs) share
the same memory of users' historical behaviors, and they work collaboratively
to maximize the overall reward of a session. Note that optimizing multiple
recommendation strategies jointly faces two challenges in the existing
model-free RL model - (i) it requires huge amounts of user behavior data, and
(ii) the distribution of reward (users' feedback) is extremely imbalanced. In
this paper, we introduce model-based RL techniques to reduce the training data
requirement and execute more accurate strategy updates. The experimental
results based on a real e-commerce platform demonstrate the effectiveness of
the proposed framework. Comment: 29th ACM International Conference on
Information and Knowledge Management.
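The abstract's core design point, that all scenario-specific agents share one memory of user behaviour and are credited with the whole session's reward, can be sketched very loosely as follows (a toy illustration with made-up names, not the DeepChain model):

```python
from collections import deque

class SharedMemory:
    """One common log of user behaviour, readable by every agent."""

    def __init__(self, maxlen=1000):
        self.history = deque(maxlen=maxlen)

    def record(self, scenario, item, feedback):
        self.history.append((scenario, item, feedback))

class ScenarioAgent:
    """An agent for one scenario (e.g. entrance page, item detail page)."""

    def __init__(self, scenario, memory):
        self.scenario = scenario
        self.memory = memory
        self.session_reward = 0.0

    def recommend(self):
        # Naive policy: reuse the most recent positively rated item seen in
        # ANY scenario -- this is where cross-scenario sharing pays off.
        for scenario, item, feedback in reversed(self.memory.history):
            if feedback > 0:
                return item
        return "default_item"

def run_session(agents, events):
    # events: (scenario, item, feedback) tuples simulating one user session.
    total = 0.0
    for scenario, item, feedback in events:
        agents[scenario].memory.record(scenario, item, feedback)
        total += feedback
    for agent in agents.values():
        agent.session_reward = total  # all agents share the session reward
    return total
```

Crediting every agent with the session total (rather than per-scenario reward) is what makes the agents collaborate instead of greedily optimizing their own scenario.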
Incentive-Aware Recommender Systems in Two-Sided Markets
Online platforms in the Internet Economy commonly incorporate recommender
systems that recommend arms (e.g., products) to agents (e.g., users). In such
platforms, a myopic agent has a natural incentive to exploit, by choosing the
best product given the current information rather than to explore various
alternatives to collect information that will be used for other agents. We
propose a novel recommender system that respects agents' incentives and enjoys
asymptotically optimal performance, as measured by regret in repeated games.
We model such an incentive-aware recommender system as a multi-agent bandit
problem in a two-sided market which is equipped with an incentive constraint
induced by agents' opportunity costs. If the opportunity costs are known to the
principal, we show that there exists an incentive-compatible recommendation
policy, which pools recommendations across a genuinely good arm and an unknown
arm via a randomized and adaptive approach. On the other hand, if the
opportunity costs are unknown to the principal, we propose a policy that
randomly pools recommendations across all arms and uses each arm's cumulative
loss as feedback for exploration. We show that both policies also satisfy an
ex-post fairness criterion, which protects agents from over-exploitation.
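The known-opportunity-cost case can be sketched as a randomized pooling rule: recommend the known-good arm often enough that a myopic agent's expected reward still clears its opportunity cost, and spend the remaining probability mass exploring the unknown arm. The closed-form constraint on the pooling probability below is our own illustrative simplification, not the paper's actual policy:

```python
import random

def pooled_recommendation(good_arm, unknown_arm, good_mean,
                          unknown_prior_mean, opportunity_cost, rng=random):
    """Recommend good_arm with the smallest probability p satisfying
    p * good_mean + (1 - p) * unknown_prior_mean >= opportunity_cost,
    so following the recommendation is incentive-compatible."""
    if good_mean <= unknown_prior_mean:
        p = 0.0  # no need to pool: the unknown arm already looks fine
    else:
        p = (opportunity_cost - unknown_prior_mean) / (good_mean - unknown_prior_mean)
        p = min(1.0, max(0.0, p))
    arm = good_arm if rng.random() < p else unknown_arm
    return arm, p
```

Because the agent cannot tell which case it is in, any single recommendation is worth following in expectation, which is how pooling buys exploration without violating incentives.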
AntRS: Recommending Lists through a Multi-Objective Ant Colony System
When people use recommender systems, they generally expect coherent lists of items. Depending on the application domain, it can be a playlist of songs they are likely to enjoy in their favorite online music service, a set of educational resources to acquire new competencies through an intelligent tutoring system, or a sequence of exhibits to discover from an adaptive mobile museum guide. To make these lists coherent from the users' perspective, recommendations must find the best compromise between multiple objectives (best possible precision, need for diversity and novelty). We propose to achieve that goal through a multi-agent recommender system, called AntRS. We evaluated our approach with a music dataset with about 500 users and more than 13,000 sessions. The experiments show that we obtain good results with regard to precision, novelty and coverage in comparison with typical state-of-the-art single and multi-objective algorithms.
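A compromise between the three objectives named in the abstract is often expressed as a scalarized list score. As a toy illustration (the weighting scheme and proxies below are our own, not the ant-colony pheromone model of AntRS):

```python
def list_score(items, relevance, popularity, weights=(0.5, 0.3, 0.2)):
    """Score a recommendation list on precision, diversity and novelty,
    combined with a simple weighted sum."""
    # precision: mean predicted relevance of the listed items
    precision = sum(relevance[i] for i in items) / len(items)
    # diversity: fraction of distinct items (a crude proxy)
    diversity = len(set(items)) / len(items)
    # novelty: 1 - mean popularity (rarer items are more novel)
    novelty = 1.0 - sum(popularity[i] for i in items) / len(items)
    w_p, w_d, w_n = weights
    return w_p * precision + w_d * diversity + w_n * novelty
```

A search procedure (ant colony, genetic, or plain greedy) can then compare candidate lists by this single number while the weights encode the desired trade-off.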
A Multi Agent Recommender System that Utilises Consumer Reviews in its Recommendations
Consumer reviews, opinions and shared experiences in the use of a product form a powerful source of information about consumer preferences that can be used for making recommendations. A novel approach, which utilises this valuable information source for the first time to create recommendations in recommender agents, was recently developed by Aciar et al. (2007). This paper presents a general framework of this approach. The proposed approach is demonstrated using digital camera reviews as an example.
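At its simplest, mining reviews for recommendations means matching the features a user cares about against the vocabulary of each product's reviews. A crude bag-of-words stand-in for that idea (not the ontology-based method of Aciar et al.):

```python
def score_by_reviews(reviews, user_features):
    """Rank products by how many of the user's preferred feature words
    appear in their consumer reviews.

    reviews: {product: [review strings]}
    user_features: set of lowercase feature words the user cares about
    """
    scores = {}
    for product, texts in reviews.items():
        words = {w.lower().strip(".,!?") for t in texts for w in t.split()}
        scores[product] = len(words & user_features)
    best = max(scores, key=scores.get)
    return best, scores
```

Real systems would weight features by review sentiment rather than mere presence, but the overlap score already captures why review text is a usable preference signal.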