Predicting the expected behavior of agents that learn about agents: the CLRI framework
We describe a framework and equations used to model and predict the behavior
of multi-agent systems (MASs) with learning agents. A difference equation is
used for calculating the progression of an agent's error in its decision
function, thereby telling us how the agent is expected to fare in the MAS. The
equation relies on parameters which capture the agent's learning abilities,
such as its change rate, learning rate and retention rate, as well as relevant
aspects of the MAS such as the impact that agents have on each other. We
validate the framework with experimental results using reinforcement learning
agents in a market system, as well as with other experimental results gathered
from the AI literature. Finally, we use PAC theory to show how to calculate bounds on the values of the learning parameters.
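As a rough illustration of the kind of difference equation described above, the Python sketch below iterates a simplified error recurrence in the spirit of CLRI. The specific form e_{t+1} = (1 - c*l)*e_t + (1 - r)*(1 - e_t), the lumping of inter-agent impact into the retention term, and all parameter values are assumptions made for illustration; this is not the paper's actual equation.

```python
def error_progression(e0, c, l, r, steps):
    """Iterate a simplified CLRI-style error recurrence (illustrative only).

    e0 : initial probability that the decision function is wrong
    c  : change rate    -- chance the agent revises a mapping each step
    l  : learning rate  -- chance a revision fixes an erroneous mapping
    r  : retention rate -- chance a correct mapping stays correct
    (the impact of other agents is lumped into the 1 - r term here)
    """
    e, history = e0, [e0]
    for _ in range(steps):
        # errors are repaired at rate c*l; correct mappings decay at 1 - r
        e = (1 - c * l) * e + (1 - r) * (1 - e)
        history.append(e)
    return history

if __name__ == "__main__":
    for t, e in enumerate(error_progression(e0=0.5, c=0.8, l=0.7, r=0.95, steps=10)):
        print(f"t={t:2d}  expected error ~ {e:.3f}")
```

Under these assumed dynamics the error converges to the fixed point (1 - r) / (c*l + 1 - r), which illustrates how stronger learning and retention drive the expected error down while volatility from other agents pushes it up.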
Learning from induced changes in opponent (re)actions in multi-agent games
Multi-agent learning is a growing area of research. An important topic is how an agent can learn a good policy in the face of adaptive, competitive opponents. Most research has focused on extensions of single-agent learning techniques originally designed for agents in more static environments. These techniques, however, fail to incorporate the effect of an agent's own previous actions on the development of the other agents' policies. We argue that incorporating this property is beneficial in competitive settings. In this paper, we present a novel algorithm that captures this notion, and we present experimental results to validate our claim.
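As a toy illustration of the idea that an agent's own actions induce changes in an adaptive opponent's reactions, the hedged sketch below trains a Q-learner against tit-for-tat in the iterated prisoner's dilemma, using the learner's previous action as the state so that the induced reaction becomes learnable. The game, the opponent, and the update rule are illustrative assumptions, not the algorithm this paper proposes.

```python
import random

# Iterated prisoner's dilemma payoffs for the learner (0 = cooperate, 1 = defect).
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def tit_for_tat(learner_prev):
    # Adaptive opponent: simply repeats the learner's previous action.
    return learner_prev

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    # State = the learner's own previous action, so the Q-values can capture
    # how that action induces the opponent's (re)action.
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = 0  # start as if we cooperated last round
    for _ in range(episodes):
        a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda x: q[(s, x)])
        opp = tit_for_tat(s)            # opponent reacts to our previous move
        r = PAYOFF[(a, opp)]
        s2 = a                          # our action becomes the next state
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2
    return q

if __name__ == "__main__":
    q = train()
    for s in (0, 1):
        print(f"prev={'C' if s == 0 else 'D'}: Q(C)={q[(s, 0)]:.2f}  Q(D)={q[(s, 1)]:.2f}")
```

With its previous action in the state, the learner can discover that cooperating induces future cooperation, a relationship a purely reactive single-agent learner in a static environment cannot represent.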
Computational Markets to Regulate Mobile-Agent Systems
Mobile-agent systems allow applications to distribute their resource consumption across the network. By prioritizing applications and publishing the cost of actions, it is possible for applications to achieve faster performance than in an environment where resources are evenly shared. We enforce the costs of actions through markets where user applications bid for computation from host machines.

We represent applications as collections of mobile agents and introduce a distributed mechanism for allocating general computational priority to mobile agents. We derive a bidding strategy for an agent that plans expenditures given a budget and a series of tasks to complete. We also show that a unique Nash equilibrium exists between the agents under our allocation policy. We present simulation results to show that the use of our resource-allocation mechanism and expenditure-planning algorithm results in shorter mean job-completion times compared to traditional mobile-agent resource allocation. We also observe that our resource-allocation policy adapts favorably to allocate overloaded resources to higher-priority agents, and that agents are able to effectively plan expenditures even when faced with network delay and job-size estimation error.
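To make the market mechanics concrete, here is a minimal sketch assuming a proportional-share rule (each agent's CPU share at a host is its bid divided by the sum of bids) and a naive even-split expenditure plan. Both rules, and all names, are illustrative placeholders rather than the allocation mechanism and bidding strategy derived in the paper.

```python
def proportional_shares(bids):
    """Allocate one host's CPU in proportion to agents' bids
    (an assumed market rule; the paper derives its own mechanism)."""
    total = sum(bids.values())
    return {agent: bid / total for agent, bid in bids.items()} if total else {}

def plan_bid(budget, tasks_remaining):
    """Naive expenditure plan: spread the remaining budget evenly over the
    remaining tasks (a placeholder for the paper's derived strategy)."""
    return budget / tasks_remaining if tasks_remaining else 0.0

if __name__ == "__main__":
    budget, tasks = 100.0, 5
    my_bid = plan_bid(budget, tasks)
    shares = proportional_shares({"me": my_bid, "rival_a": 30.0, "rival_b": 10.0})
    print(f"bid={my_bid:.1f}  cpu shares={ {k: round(v, 2) for k, v in shares.items()} }")
```

Even this crude plan exhibits the qualitative behavior the abstract describes: an agent with more tasks left spreads its budget more thinly, so higher-priority (higher-bidding) agents win larger shares of an overloaded host.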
Average-reward reinforcement learning for product delivery by multiple vehicles
Real-time delivery of products in the context of stochastic demands and multiple vehicles is a difficult problem, as it requires the joint investigation of inventory control and vehicle routing. We model this problem in the framework of Average-reward Reinforcement Learning (ARL) and present experimental results on several ARL algorithms, including a novel model-free algorithm called AR learning that automatically explores the state space while always choosing the greedy action with respect to the current approximate value function. Another contribution is a hybrid of linear and feature-based function approximation that yields superior performance to either method alone.
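For background on the average-reward setting, the sketch below implements R-learning (Schwartz, 1993), a standard tabular ARL algorithm, on an assumed toy chain environment. It shows only the generic shape of an average-reward update; it is not the AR-learning algorithm this abstract introduces, and the chain MDP is an invented example.

```python
import random

def r_learning(n_states=5, steps=20000, alpha=0.1, beta=0.01, eps=0.1):
    """Tabular R-learning (Schwartz, 1993): learns relative action values
    R(s, a) plus an estimate rho of the average reward per step."""
    actions = (0, 1)                         # 0 = step left, 1 = step right
    R = {(s, a): 0.0 for s in range(n_states) for a in actions}
    rho, s = 0.0, 0
    for _ in range(steps):
        greedy_a = max(actions, key=lambda x: R[(s, x)])
        a = random.choice(actions) if random.random() < eps else greedy_a
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s2 == n_states - 1 else 0.0   # paid at the chain's end
        target = reward - rho + max(R[(s2, x)] for x in actions)
        R[(s, a)] += alpha * (target - R[(s, a)])
        if a == greedy_a:                    # rho is updated on greedy steps only
            rho += beta * (reward + max(R[(s2, x)] for x in actions)
                           - max(R[(s, x)] for x in actions) - rho)
        s = s2
    return rho, R

if __name__ == "__main__":
    rho, _ = r_learning()
    print(f"estimated average reward per step ~ {rho:.3f}")
```

Unlike discounted methods, the update subtracts the running average reward rho from each immediate reward, so the learned values rank actions by their long-run average payoff, which is the natural criterion for continual tasks such as repeated deliveries.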
Emergent Properties of a Market-based Digital Library with Strategic Agents
The University of Michigan Digital Library (UMDL) is designed as an open system that allows third parties to build and integrate their own profit-seeking agents into the marketplace of information goods and services. The profit-seeking behavior of agents, however, risks inefficient allocation of goods and services, as agents take strategic stances that might backfire. While it would be good if we could impose mechanisms to remove incentives for strategic reasoning, this is not possible in the UMDL. Therefore, our approach has instead been to study whether encouraging the other extreme, making strategic reasoning ubiquitous, provides an answer.