Designing multiplayer games to facilitate emergent social behaviours online
This paper discusses an exploratory case study of the design of games that facilitate spontaneous social interaction and group behaviours among distributed individuals, based largely on symbolic presence 'state' changes. We present the principles guiding the design of our game environment: presence as a symbolic phenomenon, the importance of good visualization, and the potential for spontaneous self-organization among groups of people. Our game environment, comprising a family of multiplayer 'bumper-car' style games, is described, followed by a discussion of lessons learned from observing users of the environment. Finally, we reconsider and extend our design principles in light of our observations.
Deterministic Markov Nash equilibria for potential discrete-time stochastic games
In this paper, we study the problem of finding deterministic (also known as feedback or closed-loop) Markov Nash equilibria for a class of discrete-time stochastic games. In order to establish our results, we develop a potential game approach based on the dynamic programming technique. The identified potential stochastic games have Borel state and action spaces and possibly unbounded nondifferentiable cost-per-stage functions. In particular, the team (or coordination) stochastic games and the stochastic games with an action-independent transition law are covered.
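The potential-game approach the abstract describes can be illustrated on a toy finite game: when the players' costs share a common potential function, running dynamic programming on that potential yields a deterministic Markov (feedback) policy that is a Nash equilibrium. The sketch below is an invented two-state, two-action example (the potential, transition law, and discount factor are all made up for illustration), not the paper's construction:

```python
import numpy as np

# Toy potential stochastic game: 2 states, 2 actions per player.
# P[s, a1, a2] is a common potential: each player's cost change is
# aligned with the change in P (the defining potential property).
P = np.array([[[1.0, 3.0], [2.0, 0.5]],
              [[0.0, 2.0], [1.5, 1.0]]])
# T[s, a1, a2, s']: transition law (invented; independent of actions here).
T = np.zeros((2, 2, 2, 2))
T[..., 0] = 0.7
T[..., 1] = 0.3
beta = 0.9  # discount factor

# Value iteration on the potential (the dynamic programming step).
V = np.zeros(2)
for _ in range(500):
    Q = P + beta * T @ V                 # Q[s, a1, a2]
    V_new = Q.reshape(2, -1).min(axis=1)  # minimize over joint actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

# Deterministic Markov policy: the joint minimizer in each state.
policy = {s: np.unravel_index(np.argmin(Q[s]), Q[s].shape) for s in range(2)}
print(V, policy)
```

For these invented numbers the iteration converges to V ≈ [3.65, 3.15], with joint minimizers (1, 1) in state 0 and (0, 0) in state 1; because the dynamics are action-independent, the minimizers are simply the per-state minimizers of the potential.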
The Dreaming Variational Autoencoder for Reinforcement Learning Environments
Reinforcement learning has shown great potential in generalizing over raw
sensory data using only a single neural network for value optimization. There
are several challenges in the current state-of-the-art reinforcement learning
algorithms that prevent them from converging towards the global optimum. It is
likely that the solution to these problems lies in short- and long-term
planning, exploration, and memory management for reinforcement learning
algorithms. Games are often used to benchmark reinforcement learning algorithms
because they provide a flexible, reproducible, and easily controlled environment.
Nevertheless, few games feature a state-space in which results in exploration,
memory, and planning are easily observed. This paper presents The Dreaming
Variational Autoencoder (DVAE), a neural network based generative modeling
architecture for exploration in environments with sparse feedback. We further
present Deep Maze, a novel and flexible maze engine that challenges DVAE in
partial and fully-observable state-spaces, long-horizon tasks, and
deterministic and stochastic problems. We show initial findings and encourage further work in reinforcement learning driven by generative exploration.
Comment: Best Student Paper Award, Proceedings of the 38th SGAI International Conference on Artificial Intelligence, Cambridge, UK, 2018, Artificial Intelligence XXXV, 201
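The abstract gives no architectural detail, but the component it names, a variational autoencoder used as a generative model of environment states, can be sketched generically. Everything below (layer sizes, weights, the unit-variance Gaussian decoder) is an invented illustration of a standard VAE forward pass and ELBO estimate, not the authors' DVAE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented toy sizes: 16-dim state observation, 2-dim latent code.
d_in, d_hid, d_z = 16, 32, 2
We = rng.normal(0, 0.1, (d_in, d_hid)); be = np.zeros(d_hid)
Wmu = rng.normal(0, 0.1, (d_hid, d_z)); bmu = np.zeros(d_z)
Wlv = rng.normal(0, 0.1, (d_hid, d_z)); blv = np.zeros(d_z)
Wd1 = rng.normal(0, 0.1, (d_z, d_hid)); bd1 = np.zeros(d_hid)
Wd2 = rng.normal(0, 0.1, (d_hid, d_in)); bd2 = np.zeros(d_in)

def elbo(x):
    """One VAE forward pass; returns a per-example ELBO estimate."""
    h = np.tanh(x @ We + be)                    # encoder
    mu, logvar = h @ Wmu + bmu, h @ Wlv + blv   # approximate posterior params
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps         # reparameterization trick
    xhat = np.tanh(z @ Wd1 + bd1) @ Wd2 + bd2   # decoder mean
    recon = -0.5 * np.sum((x - xhat) ** 2, axis=-1)  # Gaussian log-lik, up to a constant
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return recon - kl

x = rng.standard_normal((4, d_in))  # a batch of fake environment states
print(elbo(x).shape)                # one ELBO value per state
```

Training would ascend this objective by gradient methods; in an exploration setting, the learned generative model can then propose ("dream") plausible states without querying the sparse-feedback environment.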
When games meet learning
In a context characterized by a growing gap between youth digital culture and school culture, some have claimed that games could "have the potential to change the landscape of education" (Shaffer, Squire, Halverson, & Gee, 2005). This paper examines the arguments for, and objections to, using games for educational purposes. Firstly, we note that connecting gaming and learning is not an innovative idea, as early researchers demonstrated the potential of games in child development. Secondly, we establish arguments for considering serious games as learning environments (or didactical situations) prior to artifacts. Thirdly, given that the content of games can be considered a metaphor for real situations, we argue that teachers must address the question of the relevance of this content. Fourthly, we discuss the main arguments usually emphasized by researchers who hold that games have the power to motivate students. Fifthly, we argue that games can be considered a space of reflexivity in which the learner/player is autonomous and develops skills. However, we emphasize the crucial role of the teacher in a Game-Based Learning approach.
FlashRL: A Reinforcement Learning Platform for Flash Games
Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential in, among other things, successfully playing computer games. However, there exist only a few game platforms that provide the diversity in tasks and state-space needed to advance RL algorithms. The existing platforms offer RL access to Atari games and a few web-based games, but no platform fully exposes access to Flash games. This is unfortunate, because applying RL to Flash games has the potential to push the research on RL algorithms.
This paper introduces the Flash Reinforcement Learning platform (FlashRL), which attempts to fill this gap by providing an environment for thousands of Flash games on a novel platform for Flash automation. It opens up easy experimentation with RL algorithms for Flash games, which has previously been challenging. The platform shows excellent performance with as little as 5% CPU utilization on consumer hardware, and it shows promising results for novel reinforcement learning algorithms.
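The abstract does not show FlashRL's actual API. As a generic, hedged sketch of the kind of reset/step interface an RL game platform typically exposes to agents (every name, shape, and reward below is invented for illustration, not FlashRL's real interface):

```python
import numpy as np

class FlashGameEnv:
    """Invented sketch of a minimal reset/step environment interface
    for a captured game; NOT FlashRL's actual API."""

    def __init__(self, n_actions=4, max_steps=100, seed=0):
        self.n_actions = n_actions
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)
        self.t = 0

    def reset(self):
        """Start a new episode and return the first observation."""
        self.t = 0
        return self._frame()

    def step(self, action):
        """Apply one action; return (observation, reward, done, info)."""
        assert 0 <= action < self.n_actions
        self.t += 1
        reward = float(self.rng.random() < 0.1)  # placeholder reward signal
        done = self.t >= self.max_steps
        return self._frame(), reward, done, {}

    def _frame(self):
        # Stand-in for a captured, downscaled grayscale game frame.
        return self.rng.random((84, 84))

# A random-agent rollout through one episode:
env = FlashGameEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, r, done, _ = env.step(env.rng.integers(env.n_actions))
    total += r
print(total)
```

The value of such a platform is exactly this uniformity: any agent written against the reset/step loop can be benchmarked across many games without per-game glue code.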
A semantic event detection approach for soccer video based on perception concepts and finite state machines
A significant application area for automated video analysis technology is the generation of personalized highlights of sports events. Sports games are composed of a range of significant events, and automatically detecting these events in a sports video can enable users to interactively select their own highlights. In this paper we propose a semantic event detection approach based on Perception Concepts (PCs) and Finite State Machines (FSMs) to automatically detect significant events within soccer video. Firstly, we define a Perception Concept set for soccer videos based on identifiable feature elements within a soccer video. Secondly, we design PC-FSM models to describe semantic events in soccer videos. A particular strength of this approach is that users are able to design their own semantic events and recast event detection as graph matching. Experimental results based on recorded soccer broadcasts are used to illustrate the potential of this approach.
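The paper's PC-FSM models are not specified in the abstract, but the general idea, a finite state machine that fires a semantic event when a particular sequence of perception concepts is observed, can be sketched. The concept labels and transitions below are invented for illustration; a real system would derive such labels from low-level audio-visual features of the broadcast:

```python
# Invented goal-detection FSM over a perception-concept stream.
# Keys are (current_state, concept); values are next states.
GOAL_FSM = {
    ("idle", "shot"): "shot_seen",
    ("shot_seen", "crowd_cheer"): "cheer_seen",
    ("cheer_seen", "replay"): "goal",        # accepting state
    ("shot_seen", "play_resumes"): "idle",   # false alarm, reset
    ("cheer_seen", "play_resumes"): "idle",
}

def detect_events(concepts):
    """Run the FSM over a concept stream; return (event, index) pairs."""
    state, events = "idle", []
    for i, c in enumerate(concepts):
        state = GOAL_FSM.get((state, c), state)  # ignore unmatched input
        if state == "goal":
            events.append(("goal", i))
            state = "idle"                       # rearm the detector
    return events

stream = ["pass", "shot", "crowd_cheer", "replay", "pass", "shot", "play_resumes"]
print(detect_events(stream))  # [('goal', 3)]
```

Because each semantic event is just a small transition table, users can add their own events by supplying a new table, which is what makes the approach extensible.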
FlashRL: A Reinforcement Learning Platform for Flash Games
Comment: 12 Pages, Proceedings of the 30th Norwegian Informatics Conference, Oslo, Norway 201