A multi-agent game for studying human decision-making.
ABSTRACT Understanding how human beings delegate tasks to trustees when presented with reputation information is important for building trust models for human-agent collectives. However, there is a lack of suitable platforms for building large-scale datasets on this topic. We describe a demonstration of a multi-agent game for training students in the practice of Agile software engineering. Through interaction with agent competitors in the game, behavioral data on users' decision-making under uncertainty and resource constraints are collected unobtrusively. These data may provide multi-agent trust researchers with new insight into the human decision-making process and help them benchmark their proposed models.
Research Agenda for Studying Open Source II: View Through the Lens of Referent Discipline Theories
In a companion paper [Niederman et al., 2006] we presented a multi-level research agenda for studying information systems using open source software. This paper examines open source in terms of MIS and referent discipline theories that form the base needed for rigorous study of the research agenda.
Do humans imitate successful behaviors immediately?
The emergence and abundance of cooperation in animal and human societies is a challenging puzzle for evolutionary biology. Over the past decades, various mechanisms capable of supporting cooperation have been suggested. Imitation dynamics are the most representative microscopic rules of human behavior used to study these mechanisms. Their standard procedure is to choose the agent to imitate at random from the population; in the spatial version, this means a random agent from the neighborhood. Hence, imitation rules do not include the possibility of exploring the available strategies, and they can drive a small population rapidly to a homogeneous state. To prevent evolution from stopping, theorists allow for random mutations in addition to the imitation dynamics. Consequently, if the microscopic rules involve both imitation and mutation, the frequency of agents switching to the more successful strategy must be higher than that of agents transitioning to the same target strategy via mutation. Here we show experimentally that the frequency of switching to a successful strategy approximates that of mutating to the same strategy. This suggests that imitation might play an insignificant role in human decision-making. In addition, our experiments show that the probabilities of agents mutating to different target strategies are significantly distinct. Current mutation theories cannot adequately explain these experimental results. Hence, we argue that the mutation dynamics might have evolved for other reasons.
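The imitation-plus-mutation update rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual model: the strategy labels ("C"/"D"), the Fermi imitation rule, and the parameters BETA and MU are all assumptions chosen for concreteness.

```python
import math
import random

# Hypothetical parameters (assumptions, not from the paper): a Fermi
# imitation rule with intensity BETA and a mutation rate MU.
BETA = 1.0
MU = 0.01

def fermi(payoff_self, payoff_model, beta=BETA):
    """Probability that the focal agent copies the model's strategy."""
    return 1.0 / (1.0 + math.exp(-beta * (payoff_model - payoff_self)))

def update(population, payoffs, rng=random):
    """One asynchronous update: with probability MU the focal agent mutates
    to a random strategy; otherwise it imitates a model agent chosen at
    random from the whole population (the 'standard procedure' above)."""
    i = rng.randrange(len(population))
    if rng.random() < MU:
        population[i] = rng.choice(["C", "D"])  # random mutation
    else:
        j = rng.randrange(len(population))      # random model agent
        if rng.random() < fermi(payoffs[i], payoffs[j]):
            population[i] = population[j]       # imitate the model
    return population
```

The paper's experimental point is that, empirically, the imitation branch fires about as often as the mutation branch, which this kind of model does not predict.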
Interactive Restless Multi-armed Bandit Game and Swarm Intelligence Effect
We obtain the conditions for the emergence of the swarm intelligence effect in an interactive game of restless multi-armed bandit (rMAB). A player competes with multiple agents. Each bandit has a payoff that changes with a fixed probability per round. The agents and the player choose one of three options: (1) Exploit (a good bandit), (2) Innovate (asocial learning for a good bandit among randomly chosen bandits), and (3) Observe (social learning for a good bandit). Each agent has two parameters specifying its decision: (i) a threshold value for Exploit, and (ii) a probability of choosing Observe in learning. The parameters are uniformly distributed. We determine the optimal strategies for the player using complete knowledge about the rMAB, show whether social or asocial learning is more optimal in the parameter space, and define the swarm intelligence effect. We conduct a laboratory experiment (67 subjects) and observe the swarm intelligence effect only if the parameters are chosen so that social learning is far more optimal than asocial learning. Comment: 18 pages, 4 figures
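The three-option decision rule for an agent can be sketched as follows. This is an illustrative reading of the abstract, not the paper's specification: the function and parameter names (`agent_step`, `threshold`, `p_observe`) are assumptions, since the original parameter symbols were dropped from the text.

```python
import random

def agent_step(known_payoff, threshold, p_observe, rng=random):
    """One decision by an rMAB agent (sketch). The agent Exploits a known
    bandit whose payoff meets its threshold; otherwise it learns, choosing
    Observe (social learning) with probability p_observe and Innovate
    (asocial learning) otherwise."""
    if known_payoff is not None and known_payoff >= threshold:
        return "Exploit"   # keep playing a known good bandit
    if rng.random() < p_observe:
        return "Observe"   # social learning: copy another agent's bandit
    return "Innovate"      # asocial learning: search randomly chosen bandits
```

With agents' (threshold, p_observe) pairs drawn uniformly, the paper's question is for which regions of this parameter space following the crowd (Observe) beats solo search (Innovate).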
Heuristics in Multi-Winner Approval Voting
In many real-world situations, collective decisions are made by voting. Moreover, scenarios such as committee or board elections require voting rules that return multiple winners. In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish. Winners are chosen by tallying up the votes and selecting the top candidates receiving the most votes. An agent may manipulate their vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In complex and uncertain situations, agents may use heuristics to strategize instead of incurring the additional effort required to compute the manipulation that most favors them. In this paper, we examine voting behavior in multi-winner approval voting scenarios with complete information. We show that people generally manipulate their vote to obtain a better outcome, but often do not identify the optimal manipulation. Instead, voters tend to prioritize the candidates with the highest utilities. Using simulations, we demonstrate the effectiveness of these heuristics in situations where agents only have access to partial information.
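The tallying step of multi-winner approval voting is straightforward to sketch. This is a minimal illustration under one assumption not stated in the abstract: ties in approval count are broken alphabetically.

```python
from collections import Counter

def approval_winners(ballots, k):
    """Tally approval ballots (each ballot is the set of candidates the
    voter approves) and return the k candidates with the most approvals.
    Ties are broken alphabetically (an assumption for determinism)."""
    tally = Counter(c for ballot in ballots for c in ballot)
    ranked = sorted(tally, key=lambda c: (-tally[c], c))
    return ranked[:k]

# Four approval ballots over candidates A, B, C:
ballots = [{"A", "B"}, {"B", "C"}, {"B"}, {"A", "C"}]
winners = approval_winners(ballots, 2)  # B has 3 approvals; A and C have 2
```

A manipulating voter, in the paper's sense, would submit a ballot other than their sincere approval set because doing so changes which candidates land in the top k.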