Simple Coalitional Games with Beliefs
We introduce coalitional games with beliefs (CGBs), a natural generalization of coalitional games to environments where agents hold private beliefs regarding the capabilities (or types) of others. We put forward a model to capture such agent-type uncertainty and study coalitional stability in this setting. Specifically, we introduce a notion of the core for CGBs, both with and without coalition structures. For simple games without coalition structures, we then provide a characterization of the core that matches the one for the full-information case, and use it to derive a polynomial-time algorithm for checking core nonemptiness. In contrast, we demonstrate that, in games with coalition structures, allowing for beliefs increases the computational complexity of stability-related problems. In doing so, we introduce and analyze weighted voting games with beliefs, which may be of independent interest. Finally, we discuss connections between our model and other classes of coalitional games.
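For reference, the full-information characterization alluded to here is the classical veto-player condition: the core of a simple game is nonempty exactly when some player belongs to every winning coalition. The sketch below instantiates that polynomial-time check for weighted voting games of the form [q; w_1, ..., w_n]; it covers only the full-information case, and the function and variable names are illustrative rather than taken from the paper.

# A minimal sketch of the full-information core test for simple games,
# instantiated for weighted voting games [q; w_1, ..., w_n]. Under full
# information the core is nonempty iff a veto player exists (a player
# contained in every winning coalition); names here are illustrative.

def veto_players(quota, weights):
    """Return indices of veto players: i is a veto player iff the
    coalition of all other players is losing, i.e. sum(w) - w_i < quota."""
    total = sum(weights)
    return [i for i, w in enumerate(weights) if total - w < quota]

def core_is_nonempty(quota, weights):
    """The core of a simple game is nonempty iff some veto player exists."""
    return len(veto_players(quota, weights)) > 0

if __name__ == "__main__":
    # [5; 3, 2, 1, 1]: player 0 is a veto player (2 + 1 + 1 = 4 < 5),
    # so the core is nonempty and gives the whole payoff to player 0.
    print(core_is_nonempty(5, [3, 2, 1, 1]))   # True
    # [4; 2, 2, 2]: any two players already meet the quota, so there is
    # no veto player and the core is empty.
    print(core_is_nonempty(4, [2, 2, 2]))      # False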
Cooperatives for demand side management
We propose a new scheme for efficient demand-side management in the Smart Grid. Specifically, we envisage and promote the formation of cooperatives of medium-to-large consumers and equip them (via our proposed mechanisms) with the capability of regularly participating in existing electricity markets by providing demand reduction services to the Grid. Based on mechanism design principles, we develop a model for such cooperatives by designing methods for estimating suitable reduction amounts, placing bids in the market, and redistributing the obtained revenue amongst the member agents. Our mechanism is such that member agents have no incentive to report artificial reductions in order to increase their revenue.
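To make the workflow concrete, the sketch below shows a naive baseline of the three steps mentioned above: aggregating member reduction estimates into one bid, and splitting the resulting revenue in proportion to delivered reductions. This is not the paper's mechanism (which is specifically designed so that misreporting is unprofitable, unlike a plain proportional split); all names, units, and the price are hypothetical.

# A naive illustrative baseline of the cooperative workflow: aggregate
# member reduction estimates into a single market bid, then split the
# revenue in proportion to the reductions actually delivered. NOT the
# paper's incentive-compatible mechanism; all values are hypothetical.

def cooperative_bid(estimated_reductions_kwh):
    """Aggregate members' estimated reductions into one bid (kWh)."""
    return sum(estimated_reductions_kwh.values())

def redistribute_revenue(delivered_kwh, total_revenue):
    """Naive proportional split of the revenue by delivered reduction."""
    total_delivered = sum(delivered_kwh.values())
    if total_delivered == 0:
        return {member: 0.0 for member in delivered_kwh}
    return {member: total_revenue * kwh / total_delivered
            for member, kwh in delivered_kwh.items()}

if __name__ == "__main__":
    estimates = {"A": 120.0, "B": 80.0, "C": 50.0}   # estimated cuts, kWh
    bid_kwh = cooperative_bid(estimates)             # 250 kWh offered
    delivered = {"A": 110.0, "B": 90.0, "C": 40.0}   # kWh actually cut
    revenue = bid_kwh * 0.10                         # assume 0.10 per kWh
    print(redistribute_revenue(delivered, revenue))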
Sequential Decision Making with Untrustworthy Service Providers
In this paper, we address the sequential decision-making problem faced by agents operating in computational economies, where there is uncertainty regarding the trustworthiness of the service providers populating the environment. Specifically, we propose a generic Bayesian trust model and formulate the optimal Bayesian solution to the exploration-exploitation problem facing agents that repeatedly interact with others in such environments. We then present a computationally tractable Bayesian reinforcement learning algorithm that approximates this solution by taking into account the expected value of perfect information of an agent's actions. Our algorithm is shown to dramatically outperform all previous finalists of the international Agent Reputation and Trust (ART) competition, including the winners from both years in which the competition has been run.
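As background on this style of exploration, the sketch below shows one common way to combine a Bayesian trust model with value-of-perfect-information (VPI) exploration: a Beta posterior over each provider's success probability, and a Monte Carlo VPI estimate added to the expected value when choosing whom to interact with. It illustrates the general technique only; the paper's exact model, priors, and value function are not reproduced here, and all names are hypothetical.

# A minimal sketch of Bayesian exploration with value of perfect
# information (VPI): Beta posteriors over each provider's success
# probability, and provider selection by expected value plus a Monte
# Carlo VPI estimate. Generic illustration, not the paper's exact model.
import random

class BetaTrust:
    def __init__(self):
        self.alpha, self.beta = 1.0, 1.0   # uniform prior on success prob.

    def update(self, success):
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

    def sample(self):
        return random.betavariate(self.alpha, self.beta)

def vpi(posteriors, provider, n_samples=1000):
    """Monte Carlo estimate of the value of perfect information about `provider`."""
    means = {p: t.mean() for p, t in posteriors.items()}
    best = max(means, key=means.get)
    second_best = (max(v for p, v in means.items() if p != best)
                   if len(means) > 1 else means[best])
    gain = 0.0
    for _ in range(n_samples):
        q = posteriors[provider].sample()
        if provider == best:
            gain += max(0.0, second_best - q)   # best turns out worse than runner-up
        else:
            gain += max(0.0, q - means[best])   # provider turns out better than best
    return gain / n_samples

def choose_provider(posteriors):
    """Pick the provider maximising expected trustworthiness plus VPI."""
    return max(posteriors, key=lambda p: posteriors[p].mean() + vpi(posteriors, p))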