Reinforcement Learning for Nash Equilibrium Generation
Authors: D. Cittern, A. Edalat
Publication date: 28 January 2015
Publisher: International Foundation for Autonomous Agents and Multiagent Systems
Abstract
Copyright © 2015, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.

We propose a new conceptual multi-agent framework which, given a game with an undesirable Nash equilibrium, will almost surely generate a new Nash equilibrium at some predetermined, more desirable pure action profile. The agent(s) targeted for reinforcement learn independently according to a standard model-free algorithm, using internally-generated states corresponding to high-level preference rankings over outcomes. We focus in particular on the case in which the additional reward can be considered as resulting from an internal (re-)appraisal, such that the new equilibrium is stable independent of the continued application of the procedure.
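The core idea — an additional internally-generated reward at a target action profile turning that profile into a Nash equilibrium — can be illustrated numerically. The sketch below is a hypothetical toy example, not the paper's framework: it uses an invented Prisoner's Dilemma payoff matrix and a simple pure-equilibrium check, and models the internal (re-)appraisal as a fixed bonus added to the shaped payoffs at the target profile; the learning dynamics themselves are omitted.

```python
import numpy as np

# Hypothetical 2x2 Prisoner's Dilemma (payoffs invented for illustration).
# Action 0 = Cooperate, 1 = Defect. PAYOFFS[i][a, b] is player i's payoff
# when player 0 plays a and player 1 plays b.
PD = np.array([[3.0, 0.0],
               [5.0, 1.0]])
PAYOFFS = [PD, PD.T]  # symmetric game

def is_pure_nash(payoffs, profile):
    """True iff no player gains by a unilateral deviation from `profile`."""
    for i in range(len(profile)):
        current = payoffs[i][profile]
        for dev in range(payoffs[i].shape[i]):
            alt = list(profile)
            alt[i] = dev
            if payoffs[i][tuple(alt)] > current:
                return False
    return True

# (Defect, Defect) is the undesirable equilibrium; (Cooperate, Cooperate)
# is not an equilibrium of the original game.
assert is_pure_nash(PAYOFFS, (1, 1))
assert not is_pure_nash(PAYOFFS, (0, 0))

# Model the internal (re-)appraisal as a bonus at the target profile (0, 0),
# chosen large enough to offset the temptation to deviate (5 - 3 = 2).
BONUS = 2.5
shaped = [p.copy() for p in PAYOFFS]
for p in shaped:
    p[0, 0] += BONUS

# Under the shaped rewards, the target profile is now a Nash equilibrium.
assert is_pure_nash(shaped, (0, 0))
```

Because the bonus is folded into each agent's own valuation of the outcome rather than paid externally, the new equilibrium remains stable once learning has converged — mirroring the abstract's point that stability does not depend on continued application of the procedure.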
Repository: Spiral - Imperial College Digital Repository
OAI identifier: oai:spiral.imperial.ac.uk:1004...
Last updated: 17/02/2017