77,447 research outputs found
An Interaction Game Framework for the Investigation of Human–Agent Cooperation
Kulms P, Mattar N, Kopp S. An Interaction Game Framework for the Investigation of Human–Agent Cooperation. In: Brinkman WP, Broekens J, Heylen DKJ, eds. Intelligent Virtual Agents. Lecture Notes in Computer Science: Vol. 9238. Springer; 2015: 399-402.
Success in human-agent interaction will to a large extent depend on the ability of the system to cooperate with humans over repeated tasks. It is not yet clear how cooperation between humans and virtual agents evolves, and how it is interlinked with the attribution of qualities like trustworthiness or competence between the cooperation partners. To explore these questions, we present a new interaction game framework that is centered around a collaborative puzzle game and goes beyond commonly adopted scenarios like the Prisoner's dilemma. First results are presented at the conference.
Learning in Repeated Games: Human Versus Machine
While Artificial Intelligence has successfully outperformed humans in complex
combinatorial games (such as chess and checkers), humans have retained their
supremacy in social interactions that require intuition and adaptation, such as
cooperation and coordination games. Despite significant advances in learning
algorithms, most algorithms adapt at time scales that are not relevant for
interactions with humans, and therefore the advances in AI on this front have
remained of a more theoretical nature. This has also hindered the experimental
evaluation of how these algorithms perform against humans, as the length of
experiments needed to evaluate them is beyond what humans are reasonably
expected to endure (max 100 repetitions). This scenario is rapidly changing, as
recent algorithms are able to converge to their functional regimes in shorter
time-scales. Additionally, this shift opens up possibilities for experimental
investigation: where do humans stand compared with these new algorithms? We
evaluate humans experimentally against a representative element of these
fast-converging algorithms. Our results indicate that the performance of at
least one of these algorithms is comparable to, and even exceeds, the
performance of people.
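The experimental setup described above, in which a learning algorithm faces a human-like opponent for at most 100 repetitions, can be illustrated with a minimal sketch. This is not the algorithm evaluated in the paper: it is a one-state epsilon-greedy Q-learner playing a repeated Prisoner's Dilemma against tit-for-tat, with illustrative payoffs and learning parameters.

```python
import random

# Row player's payoffs in a standard Prisoner's Dilemma.
# Actions: 0 = cooperate, 1 = defect.
PAYOFF = {(0, 0): 3, (0, 1): 0, (1, 0): 5, (1, 1): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last action."""
    return opponent_history[-1] if opponent_history else 0

def play_repeated_pd(rounds=100, eps=0.1, alpha=0.5, seed=0):
    """One-state epsilon-greedy Q-learner vs tit-for-tat.

    Returns the learner's average payoff per round."""
    rng = random.Random(seed)
    q = [0.0, 0.0]        # running action-value estimates for C / D
    learner_moves = []    # tit-for-tat reacts to these
    total = 0
    for _ in range(rounds):
        opp = tit_for_tat(learner_moves)
        if rng.random() < eps:
            a = rng.randrange(2)                      # explore
        else:
            a = max((0, 1), key=lambda x: q[x])       # exploit
        r = PAYOFF[(a, opp)]
        q[a] += alpha * (r - q[a])                    # running-average update
        learner_moves.append(a)
        total += r
    return total / rounds
```

A myopic learner like this typically converges within the 100-round budget mentioned in the abstract, which is exactly the regime where comparisons against humans become experimentally feasible.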
Predicting Human Cooperation
The Prisoner's Dilemma has been a subject of extensive research due to its
importance in understanding the ever-present tension between individual
self-interest and social benefit. A strictly dominant strategy in a Prisoner's
Dilemma (defection), when played by both players, is mutually harmful.
Repetition of the Prisoner's Dilemma can give rise to cooperation as an
equilibrium, but defection remains an equilibrium as well, and this ambiguity is difficult to
resolve. The numerous behavioral experiments investigating the Prisoner's
Dilemma highlight that players often cooperate, but the level of cooperation
varies significantly with the specifics of the experimental setting. We
present the first computational model of human behavior in repeated Prisoner's
Dilemma games that unifies the diversity of experimental observations in a
systematic and quantitatively reliable manner. Our model relies on data we
integrated from many experiments, comprising 168,386 individual decisions. The
computational model is composed of two pieces: the first predicts the
first-period action using solely the structural game parameters, while the
second predicts dynamic actions using both game parameters and history of play.
Our model is highly successful not merely at fitting the data but also at
predicting behavior at multiple scales in experimental designs not used for
calibration, using only information about the game structure. We demonstrate
the power of our approach through a simulation analysis revealing how to best
promote human cooperation.
Comment: Added references. New inline citation style. Added small portions of text. Re-compiled Rmarkdown file with updated ggplot2, so small aesthetic changes to plots.
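The two-piece structure the abstract describes can be sketched as a pair of logistic predictors: one mapping structural game parameters to a first-period cooperation probability, and one combining those parameters with the previous round's actions. The functional form and every coefficient below are made up for illustration and are not the model's fitted values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_first(r_coop, t_defect, continuation):
    """Piece 1: first-period cooperation probability from structural
    parameters only.  r_coop = mutual-cooperation payoff, t_defect =
    temptation payoff, continuation = probability of another round.
    Coefficients are illustrative, not fitted."""
    return sigmoid(1.5 * continuation
                   + 0.8 * (r_coop - 1)
                   - 0.6 * (t_defect - r_coop))

def p_next(r_coop, t_defect, continuation, my_last, opp_last):
    """Piece 2: later-period probability from the same parameters plus
    history of play (my_last / opp_last: 1 if that player cooperated)."""
    base = p_first(r_coop, t_defect, continuation)
    z = math.log(base / (1 - base)) + 1.2 * opp_last + 0.9 * my_last - 1.0
    return sigmoid(z)
```

The qualitative behavior matches the experimental regularities the abstract alludes to: cooperation is more likely when continuation probability is high, when temptation is low, and when the opponent cooperated last round.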
Evolutionary prisoner's dilemma games on the network with punishment and opportunistic partner switching
Punishment and partner switching are two well-studied mechanisms that support
the evolution of cooperation. Observation of human behaviour suggests that the
extent to which punishment is adopted depends on the usage of alternative
mechanisms, including partner switching. In this study, we investigate the
combined effect of punishment and partner switching in evolutionary prisoner's
dilemma games conducted on a network. In the model, agents are located on the
network and participate in the prisoner's dilemma games with punishment. In
addition, they can opportunistically switch interaction partners to improve
their payoff. Our Monte Carlo simulation showed that a large frequency of
punishers is required to suppress defectors when the frequency of partner
switching is low. In contrast, cooperation is the most abundant strategy when
the frequency of partner switching is high regardless of the strength of
punishment. Interestingly, cooperators become abundant not because they avoid
the cost of inflicting punishment and earn a larger average payoff per game but
rather because they have more opportunities to be referred to as role
agents by defectors. Our results imply that the fluidity of social relationships
has a profound effect on the strategies adopted to maintain cooperation.
Comment: 10 pages, 1 table, 8 figures; Figs. 6 and 7 are appended to reflect reviewers' suggestions. Accepted for publication in EPL (Europhysics Letters).
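A model of this kind can be sketched in a few dozen lines. The sketch below is not the paper's model: it uses an assumed ring-lattice network, three strategies (cooperator, defector, punishing cooperator), deterministic payoff-based imitation, and illustrative values for the temptation payoff, fine, punishment cost, and switching probability.

```python
import random

# Illustrative parameters (not the paper's values).
B, FINE, COST = 1.5, 0.8, 0.3   # temptation, punisher's fine, cost of punishing

def payoff(s, neighbor_ids, strat):
    """Accumulated PD-with-punishment payoff of a strategy s agent."""
    total = 0.0
    for n in neighbor_ids:
        t = strat[n]
        if s in 'CP' and t in 'CP':
            total += 1.0            # mutual cooperation
        elif s == 'D' and t in 'CP':
            total += B              # exploit a cooperator...
            if t == 'P':
                total -= FINE       # ...but punishers fine defectors
        elif s in 'CP' and t == 'D':
            if s == 'P':
                total -= COST       # punishing is costly
        # D vs D: both earn 0
    return total

def step(adj, strat, rng, w=0.3):
    """One Monte Carlo step: either switch away from a defecting partner
    (probability w) or imitate a better-earning neighbor."""
    i = rng.randrange(len(adj))
    if not adj[i]:
        return
    j = rng.choice(sorted(adj[i]))
    if strat[i] in 'CP' and strat[j] == 'D' and rng.random() < w:
        k = rng.randrange(len(adj))
        if k != i and k not in adj[i]:
            adj[i].discard(j); adj[j].discard(i)   # cut the old link
            adj[i].add(k); adj[k].add(i)           # rewire to a random agent
            return
    if payoff(strat[j], adj[j], strat) > payoff(strat[i], adj[i], strat):
        strat[i] = strat[j]

def simulate(n=60, degree=4, steps=3000, seed=1):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):                  # ring lattice of the given degree
        for d in range(1, degree // 2 + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    strat = [rng.choice('CDP') for _ in range(n)]
    for _ in range(steps):
        step(adj, strat, rng)
    return {s: strat.count(s) for s in 'CDP'}
```

Sweeping the switching probability `w` against the fine `FINE` in a script like this is the standard way to reproduce the qualitative finding: with little partner switching, suppressing defectors requires many punishers, while high fluidity sustains cooperation regardless of punishment strength.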
Evolution of Cooperation among Mobile Agents
We study the effects of mobility on the evolution of cooperation among mobile
players, which imitate collective motion of biological flocks and interact with
neighbors within a prescribed interaction radius. Adopting the prisoner's dilemma game
and the snowdrift game as metaphors, we find that cooperation can be maintained
and even enhanced for low velocities and small payoff parameters, when compared
with the case in which agents do not move. But such enhancement of cooperation is largely determined by the interaction radius, and for modest radii there is an optimal velocity that induces the maximum cooperation level. Besides, we find that intermediate values of the interaction radius or of the initial population density are most favorable for cooperation when the velocity is fixed.
Depending on the payoff parameters, the system can reach an absorbing state of
cooperation when the snowdrift game is played. Our findings may help in understanding the relations between individual mobility and cooperative behavior in social systems.
Comment: 15 pages, 5 figures.
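The interplay of velocity, interaction radius, and cooperation can be sketched as follows. This simplification replaces the flock-like collective motion of the paper with independent random-heading movement, uses a weak Prisoner's Dilemma with an assumed temptation payoff, and picks all other parameters for illustration only.

```python
import math
import random

def neighbors(pos, i, radius):
    """Indices of agents within the interaction radius of agent i."""
    xi, yi = pos[i]
    return [j for j, (xj, yj) in enumerate(pos)
            if j != i and math.hypot(xi - xj, yi - yj) <= radius]

def simulate(n=50, size=10.0, radius=1.5, v=0.1, b=1.3, steps=200, seed=2):
    """Mobile agents on a periodic square of side `size`, moving at speed v
    with slowly drifting random headings, playing a weak PD (temptation b,
    all other payoffs 0 or 1) and imitating the best-earning neighbor.
    Returns the final fraction of cooperators."""
    rng = random.Random(seed)
    pos = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]
    heading = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    coop = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(steps):
        # 1. play with every neighbor inside the interaction radius
        pay = [0.0] * n
        for i in range(n):
            for j in neighbors(pos, i, radius):
                if coop[i] and coop[j]:
                    pay[i] += 1.0
                elif not coop[i] and coop[j]:
                    pay[i] += b           # defector exploits a cooperator
        # 2. imitate the highest-earning neighbor, if it did better
        new = coop[:]
        for i in range(n):
            ns = neighbors(pos, i, radius)
            if ns:
                best = max(ns, key=lambda j: pay[j])
                if pay[best] > pay[i]:
                    new[i] = coop[best]
        coop = new
        # 3. move at constant speed with a small random turn (periodic box)
        for i in range(n):
            heading[i] += rng.uniform(-0.5, 0.5)
            x, y = pos[i]
            pos[i] = ((x + v * math.cos(heading[i])) % size,
                      (y + v * math.sin(heading[i])) % size)
    return sum(coop) / n
```

Sweeping `v` and `radius` in such a sketch is the natural way to probe the abstract's claims that low velocities can sustain cooperation and that an intermediate radius is most favorable.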