
    Multiagent systems: games and learning from structures

    Multiagent systems are used increasingly widely, for both physical robots and software agents: search-and-rescue robots, automated driving, auction and electronic-commerce agents, and so on. In multiagent domains, agents interact and co-adapt with one another: each agent's choice of policy depends on the others' joint policy if it is to achieve the best available performance. During this process the environment evolves and is no longer stationary, as every agent adapts in pursuit of its own target, so each micro-level time step may present a different learning problem that must be addressed. Nevertheless, in this non-stationary environment a holistic phenomenon emerges alongside the rational strategies of all players; we call this phenomenon the system's structural properties. In our research, we argue for the importance of analyzing these structural properties and show how to extract them from multiagent environments. According to the agents' objectives, a multiagent environment can be classified as self-interested, cooperative, or competitive. We examine the structure in these three general settings: self-interested random graphical game playing, distributed cooperative team playing, and competitive group survival. In each scenario we analyze the structure of the environmental setting and demonstrate the learned structure as a comprehensive representation: the structure of players' action influence, the structure of constraints on teamwork communication, and the structure of inter-connections among strategies. This structure represents macro-level knowledge arising in a multiagent system and provides critical, holistic information for each problem domain. Finally, we present some open issues and point toward future research.
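    The abstract leaves the extraction procedure abstract, so the sketch below illustrates only the idea behind the first scenario, not the thesis's method: in a graphical game, agent j "influences" agent i if changing j's action can change i's payoff while all other actions are held fixed. The payoff function, agent count, and ring-shaped example game here are assumptions made for the demo.

    ```python
    # Minimal sketch (assumptions, not the thesis's algorithm): recover the
    # action-influence structure of a small graphical game by testing whether
    # one agent's action can ever change another agent's payoff.
    import itertools

    n_agents, n_actions = 4, 2

    def payoff(i, joint):
        # Hypothetical game: agent i's payoff depends only on its own action
        # and its neighbor's, so the true influence graph is a directed ring.
        return float(joint[i] ^ joint[(i + 1) % n_agents])

    influences = set()
    for i, j in itertools.permutations(range(n_agents), 2):
        # j influences i if flipping j's action ever changes i's payoff.
        for joint in itertools.product(range(n_actions), repeat=n_agents):
            flipped = list(joint)
            flipped[j] = (joint[j] + 1) % n_actions
            if payoff(i, joint) != payoff(i, tuple(flipped)):
                influences.add((j, i))  # edge: j's action matters to i
                break

    print(sorted(influences))  # [(0, 3), (1, 0), (2, 1), (3, 2)]
    ```

    Run as-is, the test recovers the ring edges of the toy game; in the thesis's setting the structure would presumably be estimated from observed play rather than from a known payoff function.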

    Approaches to multi-agent learning

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (leaves 165-171).

    Systems involving multiple autonomous entities are becoming more and more prominent; sensor networks, teams of robotic vehicles, and software agents are just a few examples. In order to design these systems, we need methods that allow our agents to autonomously learn and adapt to the changing environments they find themselves in. This thesis explores ideas from game theory, online prediction, and reinforcement learning, tying them together to work on problems in multi-agent learning. We begin with the most basic framework for studying multi-agent learning: repeated matrix games. We quickly realize that there is no such thing as an opponent-independent, globally optimal learning algorithm; some form of opponent assumption is necessary when designing multi-agent learning algorithms. We first show that we can exploit opponents that satisfy certain assumptions, and in a later chapter we show how we can avoid being exploited ourselves. From this beginning, we branch out to study more complex sequential decision-making problems in multi-agent systems, or stochastic games. We study environments in which there are large numbers of agents and where the environmental state may be only partially observable. In fully cooperative situations, where all the agents receive a single global reward signal for training, we devise a filtering method that allows each individual agent to learn using a personal training signal recovered from this global reward. For non-cooperative situations, we introduce the concept of hedged learning, a combination of regret-minimizing algorithms with learning techniques, which allows a more flexible and robust approach to behaving in competitive situations. We show various performance bounds that can be guaranteed with our hedged learning algorithm, thus preventing our agent from being exploited by its adversary. Finally, we apply some of these methods to problems involving routing and node movement in a mobilized ad-hoc networking domain.

    by Yu-Han Chang, Ph.D.
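    The thesis's hedged learning hedges over entire learning algorithms; as an illustration of the regret-minimizing ingredient only, here is a minimal sketch of the classic multiplicative-weights (Hedge) update over a fixed pool of candidate strategies, the kind of rule that underlies the no-regret guarantees the abstract mentions. The reward model, learning rate, and strategy pool are assumptions for the demo, not Chang's construction.

    ```python
    # Minimal Hedge sketch (assumptions, not the thesis's implementation):
    # maintain weights over strategies and upweight whatever did well, so
    # cumulative reward is never much worse than the best single strategy.
    import numpy as np

    rng = np.random.default_rng(1)
    n_strategies, T = 3, 1000
    eta = np.sqrt(np.log(n_strategies) / T)  # standard Hedge learning rate

    weights = np.ones(n_strategies)
    total_reward = 0.0
    for _ in range(T):
        probs = weights / weights.sum()
        choice = rng.choice(n_strategies, p=probs)  # strategy to follow

        # Hypothetical environment: per-strategy rewards in [0, 1] each round.
        # An adversary would choose these; here they are noisy, one good arm.
        rewards = rng.random(n_strategies) * np.array([0.3, 0.9, 0.5])

        total_reward += rewards[choice]
        weights *= np.exp(eta * rewards)  # full-information exponential update

    print(f"average reward: {total_reward / T:.3f}")
    print("final strategy probabilities:", np.round(weights / weights.sum(), 3))
    ```

    With these toy rewards the weights concentrate on the best strategy; in a setting closer to the thesis's, the full-information update would be replaced with estimates from observed payoffs, and each "strategy" could itself be a learning algorithm.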