    Multi-agent Learning For Game-theoretical Problems

    Multi-agent systems are prevalent in the real world across various domains. In many multi-agent systems, interaction among agents is inevitable, and some form of cooperation among agents is needed to deal with the task at hand. We model multi-agent systems in which autonomous agents inhabit an environment with no global control or global knowledge, decentralized in the true sense. In particular, we consider game-theoretical problems such as hedonic coalition formation games, matching problems, and Cournot games. We propose novel decentralized learning and multi-agent reinforcement learning approaches to train agents to learn behaviors and adapt to their environments. We use game-theoretic evaluation criteria such as optimality, stability, and the resulting equilibria.
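
    To make the stability criterion mentioned above concrete, here is a minimal illustrative sketch (our own, not the paper's code) of checking Nash stability of a coalition partition in a hedonic game, where each agent's utility depends only on the coalition it belongs to. The toy utility function is an assumption for illustration.

        def is_nash_stable(partition, agents, utility):
            """partition: iterable of frozensets of agents;
            utility(agent, coalition) -> float.
            Nash stable: no agent strictly prefers joining another
            (possibly empty) coalition on its own."""
            for agent in agents:
                current = next(c for c in partition if agent in c)
                u_now = utility(agent, current)
                # Candidate deviations: join any other coalition, or go solo.
                options = [c | {agent} for c in partition if agent not in c]
                options.append(frozenset({agent}))
                if any(utility(agent, c) > u_now for c in options):
                    return False
            return True

        # Toy example: agents prefer smaller coalitions, so singletons are stable.
        agents = ["a", "b", "c"]
        utility = lambda agent, coalition: 1.0 / len(coalition)
        partition = [frozenset({"a"}), frozenset({"b"}), frozenset({"c"})]
        print(is_nash_stable(partition, agents, utility))  # True

    Learned agent behaviors can be evaluated against such criteria: a learning procedure that converges to a Nash-stable partition leaves no agent with a profitable unilateral deviation.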

    Personalizing Task-oriented Dialog Systems via Zero-shot Generalizable Reward Function

    Task-oriented dialog systems enable users to accomplish tasks using natural language. State-of-the-art systems respond to users in the same way regardless of their personalities, although personalizing dialogues can lead to higher levels of adoption and better user experiences. Building personalized dialog systems is an important yet challenging endeavor, and only a handful of works have taken on the challenge. Most existing works rely on supervised learning approaches and require laborious and expensive labeled training data for each user profile; collecting and labeling such data for every user profile is virtually impossible. In this work, we propose a novel framework, P-ToD, to personalize task-oriented dialog systems, capable of adapting to a wide range of user profiles in an unsupervised fashion using a zero-shot generalizable reward function. P-ToD uses a pre-trained GPT-2 as a backbone model and works in three phases. Phase one performs task-specific training. Phase two performs unsupervised personalization by leveraging the proximal policy optimization (PPO) algorithm, which performs policy-gradient updates guided by the zero-shot generalizable reward function. Our novel reward function can quantify the quality of the generated responses even for unseen profiles. The optional final phase fine-tunes the personalized model using a few labeled training examples. We conduct extensive experimental analysis using the personalized bAbI dialogue benchmark for five tasks and up to 180 diverse user profiles. The experimental results demonstrate that P-ToD, even with access to zero labeled examples, outperforms state-of-the-art supervised personalization models and achieves competitive performance on BLEU and ROUGE metrics compared to a strong fully-supervised GPT-2 baseline.
    Comment: 11 pages, 4 tables, 31st ACM International Conference on Information and Knowledge Management (CIKM'22).
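
    A hedged sketch of what a phase-two personalization loop in the spirit of P-ToD might look like. All names here (policy, reward_fn, ppo_step) are hypothetical placeholders standing in for the paper's GPT-2 policy, its zero-shot generalizable reward function, and a PPO update; this is not the paper's actual API.

        def personalize(policy, dialogs, profiles, reward_fn, ppo_step, epochs=3):
            """Phase-two style loop: no labeled data for the target profiles;
            the reward function alone scores the generated responses."""
            for _ in range(epochs):
                for context, profile in zip(dialogs, profiles):
                    response = policy.generate(context, profile)  # sample a response
                    reward = reward_fn(response, profile)         # zero-shot score, no labels
                    ppo_step(policy, context, response, reward)   # policy-gradient update
            return policy

    The key property the abstract claims is that the reward function generalizes to unseen profiles, so this loop needs no per-profile labeled examples; labels enter only in the optional phase-three fine-tuning.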

    Multi-agent Reinforcement Learning for Decentralized Coalition Formation Games

    We study the application of multi-agent reinforcement learning to game-theoretical problems. In particular, we are interested in coalition formation problems and their variants, such as hedonic coalition formation games (also called hedonic games), matching (a common type of hedonic game), and coalition formation for task allocation. We consider decentralized multi-agent systems where autonomous agents inhabit an environment without any prior knowledge of the other agents or the system. We also consider spatial formulations of these problems, which most of the coalition formation literature does not address because they increase computational complexity significantly. We propose novel decentralized heuristic learning and multi-agent reinforcement learning (MARL) approaches to train agents, and we use game-theoretic evaluation criteria such as optimality, stability, and indices such as the Shapley value.
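
    For context on the last evaluation index mentioned, here is an illustrative sketch (our own, with a toy characteristic function, not code from these papers) of computing exact Shapley values for a small coalition game by enumerating player orderings and averaging marginal contributions.

        from itertools import permutations

        def shapley_values(players, value):
            """value(frozenset of players) -> worth of that coalition."""
            shapley = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                coalition = frozenset()
                for p in order:
                    with_p = coalition | {p}
                    shapley[p] += value(with_p) - value(coalition)  # marginal gain
                    coalition = with_p
            n = len(orders)
            return {p: v / n for p, v in shapley.items()}

        # Toy 3-player game: a coalition's worth is its squared size.
        players = ["a", "b", "c"]
        value = lambda c: len(c) ** 2
        print(shapley_values(players, value))  # symmetric game: each player gets 3.0

    Exact enumeration is factorial in the number of players, which is one reason decentralized learning approaches matter for larger coalition formation problems.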