7 research outputs found

    Learning in zero-sum team Markov games using factored value functions

    No full text
    Abstract: We present a new method for learning good strategies in zero-sum Markov games in which each side is composed of multiple agents collaborating against an opposing team of agents. Our method requires full observability and communication during learning, but the learned policies can be executed in a distributed manner. The value function is represented as a factored linear architecture, and its structure determines the necessary computational resources and communication bandwidth. This approach permits a tradeoff between simple representations with little or no communication between agents and complex, computationally intensive representations with extensive coordination between agents. Thus, we provide a principled means of using approximation to combat the exponential blowup in the joint action space of the participants. The approach is demonstrated with an example that shows the efficiency gains over naive enumeration. Presented at: Neural Information Processing Systems
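    The factored linear architecture described above can be illustrated with a short sketch: the joint value is a sum of local terms, each depending only on a small subset (scope) of the agents' actions, so representation and coordination cost scale with factor size rather than with the full joint action space. The names, toy features, and weights below are illustrative assumptions, not the paper's implementation, and the naive argmax over joint actions stands in for the structured maximization the factored form makes possible.

    # Minimal sketch of a factored linear value function (illustrative only).
    from itertools import product

    n_agents = 4
    actions = [0, 1]  # each agent chooses from a binary action set

    # Each factor touches only a few agents; the weights would normally be learned.
    factors = [
        {"scope": (0, 1), "weight": 1.5},
        {"scope": (1, 2), "weight": -0.7},
        {"scope": (2, 3), "weight": 0.9},
    ]

    def local_feature(state, local_actions):
        # Toy basis function over a factor's local actions, scaled by the state.
        return state * sum(local_actions)

    def q_value(state, joint_action):
        # Factored value: weighted sum of local terms, each reading only its scope.
        return sum(
            f["weight"] * local_feature(state, tuple(joint_action[i] for i in f["scope"]))
            for f in factors
        )

    def best_joint_action(state):
        # Naive enumeration of the joint action space (exponential in n_agents);
        # the factored structure is what lets this be replaced by cheaper
        # coordinated maximization in the approach the abstract describes.
        return max(product(actions, repeat=n_agents), key=lambda a: q_value(state, a))

    print(best_joint_action(state=1.0))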

    The estimation of reward and value in reinforcement learning

    EThOS - Electronic Theses Online Service, United Kingdom