Learning, Teaching, and Turn Taking in the Repeated Assignment Game
History-dependent strategies are often used to support cooperation in repeated game models. Using the indefinitely repeated common-pool resource assignment game and a perfect stranger experimental design, this paper reports novel evidence that players who have successfully used an efficiency-enhancing turn-taking strategy will teach other players in subsequent supergames to adopt this strategy. We find that subjects engage in turn taking frequently in both the Low Conflict and the High Conflict treatments. Prior experience with turn taking significantly increases turn taking in both treatments. Moreover, successful turn taking often involves fast learning, and individuals with turn taking experience are more likely to be teachers than inexperienced individuals. The comparative statics results show that teaching in such an environment also responds to incentives, since teaching is empirically more frequent in the Low Conflict treatment with higher benefits and lower costs.

Keywords: Learning, Teaching, Assignment Game, Laboratory Experiment, Repeated Games, Turn Taking, Common-Pool Resources
Agents for educational games and simulations
This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.
Towards Informed Exploration for Deep Reinforcement Learning
In this thesis, we discuss various techniques for improving exploration in deep reinforcement learning. We begin with a brief review of reinforcement learning (RL) and the fundamental exploration vs. exploitation trade-off. We then review how deep RL has improved upon classical RL and summarize six categories of the latest exploration methods for deep RL, in order of increasing use of prior information. We then examine representative works in three of these categories and discuss their strengths and weaknesses. The first category, represented by Soft Q-learning, uses regularization to encourage exploration. The second category, represented by count-based exploration via hashing, maps states to hash codes for counting and assigns higher exploration bonuses to less frequently encountered states. The third category utilizes hierarchy and is represented by a modular architecture for RL agents that play StarCraft II. Finally, we conclude that exploration guided by prior knowledge is a promising research direction and suggest potentially impactful topics for future work.
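The count-based exploration idea summarized above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's implementation: states are serialized and hashed into bucket codes, visits per code are counted, and a bonus decaying as 1/sqrt(n) is returned (the bonus scale `beta` and the md5-prefix bucketing are assumptions for the sketch).

```python
import hashlib
import math
from collections import defaultdict

class HashCounter:
    """Count-based exploration bonus via state hashing (illustrative sketch)."""

    def __init__(self, beta=1.0):
        self.counts = defaultdict(int)
        self.beta = beta  # bonus scale; assumed hyperparameter

    def _code(self, state):
        # Serialize the state, then hash it into a short bucket code.
        return hashlib.md5(repr(state).encode()).hexdigest()[:8]

    def bonus(self, state):
        # Record a visit and return a bonus that shrinks as 1/sqrt(count),
        # so rarely visited states are rewarded more.
        code = self._code(state)
        self.counts[code] += 1
        return self.beta / math.sqrt(self.counts[code])
```

In use, the bonus is simply added to the environment reward before the RL update, so the agent is drawn toward under-explored regions of the state space.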
Player agency in interactive narrative: audience, actor & author
The question motivating this review paper is: how can computer-based interactive narrative be used as a constructivist learning activity? The paper proposes that player agency can be used to link interactive narrative to learner agency in constructivist theory, and to classify approaches to interactive narrative. The traditional question driving research in interactive narrative is, "how can an interactive narrative deal with a high degree of player agency, while maintaining a coherent and well-formed narrative?" This question derives from an Aristotelian approach to interactive narrative that, as the question shows, is inherently antagonistic to player agency. Within this approach, player agency must be restricted and manipulated to maintain the narrative. Two alternative approaches, based on Brecht's Epic Theatre and Boal's Theatre of the Oppressed, are reviewed. If a Boalian approach to interactive narrative is taken, the conflict between narrative and player agency dissolves. The question that emerges from this approach is quite different from the traditional question above, and presents a more useful approach to applying interactive narrative as a constructivist learning activity.
Little Information, Efficiency, and Learning - An Experimental Study
Earlier experiments have shown that under little information subjects are hardly able to coordinate, even though there are no conflicting interests and subjects are organised in fixed pairs. This is so even though a simple adjustment process would lead the subjects to the efficient, fair, and individually payoff-maximising outcome. We draw on this finding and design an experiment in which subjects repeatedly play 4 simple games within 4 sets of 40 rounds under little information. This way we are able to investigate (i) the coordination abilities of the subjects depending on the underlying game, (ii) the resulting efficiency loss, and (iii) the adjustment of the learning rule.

Keywords: mutual fate control, matching pennies, fate control, behaviour control, learning, coordination, little information
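The "simple adjustment process" referenced above can be made concrete with a small sketch of win-stay, lose-shift in a mutual fate control game (a hypothetical rendering, not the paper's own code): each player's payoff is determined entirely by the other player's action, and a player repeats its action after a good payoff and switches after a bad one.

```python
def win_stay_lose_shift(a0=0, b0=1, rounds=10):
    """Win-stay/lose-shift under mutual fate control (illustrative sketch).

    Action 1 rewards the opponent, action 0 punishes them (assumed coding).
    Returns the history of (action_a, action_b, payoff_a, payoff_b).
    """
    a, b = a0, b0
    history = []
    for _ in range(rounds):
        # Fate control: each payoff is set entirely by the other's action.
        pay_a, pay_b = b, a
        history.append((a, b, pay_a, pay_b))
        # Win-stay, lose-shift adjustment.
        a = a if pay_a == 1 else 1 - a
        b = b if pay_b == 1 else 1 - b
    return history
```

From any starting pair of actions this process reaches the mutually rewarding outcome (1, 1) within a few rounds and stays there, which is the efficient, fair, payoff-maximising outcome the abstract alludes to.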
Stochastic learning in co-ordination games : a simulation approach
In the presence of externalities, consumption behaviour depends on the solution of a co-ordination problem. In our paper we suggest a learning approach to the study of co-ordination in consumption contexts, where agents adjust their choices on the basis of the reinforcement (payoff) they receive during the game. The results of simulations allow us to distinguish the roles of different aspects of learning in enabling co-ordination within a population of agents. Our main results highlight: (1) the role played by the speed of learning in determining failures of the co-ordination process; (2) the effect of forgetting past experiences on the speed of the co-ordination process; and (3) the role of experimentation in bringing the process of co-ordination to an efficient equilibrium.
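The learning dynamic described above can be sketched as payoff-based reinforcement learning with forgetting and experimentation in a 2x2 coordination game. All parameter names and values here (`phi` for forgetting, `eps` for experimentation, the payoff matrix) are assumptions for illustration, not the paper's specification.

```python
import random

def play(rounds=2000, phi=0.05, eps=0.05, seed=0):
    """Two agents learn to coordinate by reinforcement (illustrative sketch)."""
    rng = random.Random(seed)
    # Each agent holds an "attraction" per action; both start unbiased.
    attractions = [[1.0, 1.0], [1.0, 1.0]]
    for _ in range(rounds):
        choices = []
        for a in attractions:
            if rng.random() < eps:
                # Experimentation: occasionally try a random action.
                choices.append(rng.randrange(2))
            else:
                # Otherwise choose proportionally to current attractions.
                choices.append(0 if rng.random() < a[0] / (a[0] + a[1]) else 1)
        # Coordination game payoff: 1 if both pick the same action, else 0.
        payoff = 1.0 if choices[0] == choices[1] else 0.0
        for a, c in zip(attractions, choices):
            a[0] *= 1 - phi  # forgetting decays past experience
            a[1] *= 1 - phi
            a[c] += payoff   # reinforcement strengthens the chosen action
    return attractions
```

Raising `phi` makes old experience fade faster, and raising `eps` injects more experimentation, so the sketch exposes the same three levers (learning speed, forgetting, experimentation) that the simulations investigate.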
Virtual Reality Games for Motor Rehabilitation
This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
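A miniature of the fuzzy appraisal step in the spirit of FLAME (the labels, membership shapes, and defuzzification rule here are illustrative assumptions, not the paper's model): a game event's desirability is fuzzified with triangular membership functions and the rule strengths are combined into a single joy/distress estimate.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def appraise(desirability):
    """Map an event's desirability in [-1, 1] to a joy score (sketch)."""
    # Fuzzify desirability into three hypothetical labels.
    bad = tri(desirability, -1.5, -1.0, 0.0)
    neutral = tri(desirability, -1.0, 0.0, 1.0)
    good = tri(desirability, 0.0, 1.0, 1.5)
    # Centroid-style defuzzification to a joy score in [-1, 1].
    total = bad + neutral + good
    return (good - bad) / total if total else 0.0
```

In a full system such scores would be accumulated over in-game events (kills, deaths, pickups) to track the player's emotional state without any physiological sensors.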
- …