Detecting Metagame Shifts in League of Legends Using Unsupervised Learning
Over the many years since their inception, video games have grown considerably more complex. With this increase in complexity comes an increase in the number of possible choices for players, and increased difficulty for developers who try to balance the effectiveness of those choices. In this thesis we demonstrate that unsupervised learning can give game developers extra insight into their own games, providing them with a tool that can potentially alert them to problems faster than they could otherwise find them. Specifically, we use DBSCAN to examine League of Legends and the metagame players have formed with their choices, and we attempt to detect when the metagame shifts, potentially giving the developer insight into what changes to make to achieve a more balanced, fun game.
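The clustering approach described above can be sketched as follows. This is an illustrative toy, not the thesis's actual pipeline: the per-champion features (pick rate, win rate), the two-patch comparison, and the eps/min_pts thresholds are all assumptions made up for the example.

```python
import math

def dbscan(points, eps, min_pts):
    """Tiny DBSCAN: returns one cluster id per point, -1 for noise."""
    labels = [None] * len(points)

    def region(i):  # indices within eps of point i (self included)
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1  # noise (may later become a border point)
            continue
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # noise upgraded to border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            seeds_j = region(j)
            if len(seeds_j) >= min_pts:  # j is a core point: keep expanding
                queue.extend(seeds_j)
        cid += 1
    return labels

def n_clusters(labels):
    return len({c for c in labels if c != -1})

# Hypothetical per-champion features for two patches: (pick rate, win rate).
patch_a = [(0.10, 0.10), (0.12, 0.11), (0.11, 0.09),
           (0.80, 0.80), (0.82, 0.79), (0.81, 0.81)]
patch_b = [(0.50, 0.50), (0.51, 0.52), (0.49, 0.50), (0.50, 0.48)]

a = n_clusters(dbscan(patch_a, eps=0.05, min_pts=3))
b = n_clusters(dbscan(patch_b, eps=0.05, min_pts=3))
if a != b:
    print(f"possible metagame shift: {a} strategy clusters -> {b}")
```

A change in the number or membership of clusters between patches is the kind of signal a developer could be alerted to; a production system would of course cluster far richer match data.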
Analysis of gameplay strategies in Hearthstone: a data science approach
In recent years, games have been a popular test bed for AI research, and the presence of Collectible Card Games (CCGs) in that space is still increasing. One such CCG used for both competitive/casual play and AI research is Hearthstone, a two-player adversarial game in which each player seeks to implement one of several gameplay strategies to defeat their opponent by reducing the opponent's Health points to zero. Although some open-source simulators exist, the methodologies behind some of their simulated agents create opponents with a relatively low skill level. Using evolutionary algorithms, this thesis seeks to evolve agents with a higher skill level than those implemented in one such simulator, SabberStone. New benchmarks are proposed using supervised learning techniques to predict gameplay strategies from game data, and unsupervised learning techniques to discover and visualize patterns that may be used in player modeling to differentiate gameplay strategies.
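An evolutionary loop of the kind the abstract describes might be sketched as below. This is a hedged stand-in: the fitness function here is a made-up distance to an invented target, whereas in the thesis's setting fitness would be a win rate estimated by playing games in a simulator such as SabberStone.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, sigma=0.1, seed=0):
    """Simple (mu + lambda)-style evolution of a real-valued weight vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = [[w + rng.gauss(0, sigma) for w in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children  # elitist: the best weights always survive
    return max(pop, key=fitness)

# Stand-in fitness: negative squared distance to a made-up "ideal" weighting
# of a card-playing heuristic (hypothetical; a real run would use self-play).
target = [0.9, -0.3, 0.5]
def fitness(w):
    return -sum((a - b) ** 2 for a, b in zip(w, target))

best = evolve(fitness, dim=3)
```

The evolved vector `best` would parameterize an agent's move-scoring heuristic; swapping the stand-in fitness for simulated win rate is what makes the loop expensive in practice.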
Get Water!: Exploring the Adult Player’s Experience of a Mobile Game for Change
Research problem: Games with civic themes such as Get Water! are intended to raise awareness and promote participation in social movements. Evidence linking the features of such games to specific player outcomes, including affective, cognitive and behavioral indicators of learning, is limited. The purpose of this study is to contribute to the literature on game-based civic education by conducting an in-depth investigation of the experiences of adult players of a social change game for mobile devices, Get Water!.
Research questions: (1) How do players experience Get Water!? (2) How do players evaluate Get Water!? (3) Does playing Get Water! influence players’ attitudes, thoughts or actions related to the social issues it addresses?
Literature review: Previous research suggests that prior experience may influence how players interpret video games. Though some games with civic themes have been found to positively affect player attitudes and promote learning, the mechanisms are unclear. Design features of such games, especially content-mechanic integration, are likely to influence attitudinal, cognitive and behavioral learning outcomes of play by structuring the player experience.
Methodology: A qualitative case study approach is used to characterize the experiences and evaluations of 22 adults who played the game in a laboratory setting. Participant data was collected using a think-aloud procedure, post-play questionnaires, semi-structured interviews, and a one-month follow-up questionnaire. External data, including game design documents, were also examined. Descriptive and interpretive analyses were conducted to develop a detailed description of player-game interactions and player perceptions.
Findings and Conclusions: The data suggest that the participants’ evaluations of the game were informed by their personal experiences of the social issue depicted and their values with regard to teaching and learning. As such, their interpretations of the game’s content and perceived effectiveness varied greatly. Notably, players who had personally lived in regions affected by water scarcity interpreted the social messaging in unexpected ways. While the players largely enjoyed the game and viewed it positively, some indicated that the in-game activities were not sufficiently representative of the real-world scenario to afford a transformative educational experience. Misalignment between in-game objectives and real-world motives, and limited character and narrative development, may account for the players’ low affective identification with the player character. Some players engaged in discussion and sought information about the subject matter in the month after the laboratory session. The findings and implications contribute to a conceptual understanding of how differences at the player level can influence how a social change game is experienced and evaluated. This suggests that social change game designers and education practitioners should prioritize representational verification efforts to better accommodate diversity in players’ prior knowledge.
Natural Language Interfaces for Procedural Content Generation in Games
Mixed-Initiative Procedural Content Generation (MI-PCG) focuses on developing systems that allow users with diverse technical backgrounds to co-create interesting and novel game content in collaboration with a computational agent. These systems provide a front-end for users to interact with a generator by placing different constraints or modifying a variety of the generator's parameters. While these systems provide significantly enhanced design support over traditional design tools, there exist areas of opportunity to address their shortcomings, such as high user interface complexity (too many controls presented, little feedback provided) and the lack of a model of designer intent (the system can reason over constraints but does not understand the expressive intent of the user). We believe that natural language interfaces can address these areas by utilizing the expressiveness of natural language as an input for mixed-initiative systems, reducing interface complexity by converting natural language queries into design space movements or constraints for the generator to act upon. By reducing all input to a single query, the natural language interface can make the appropriate selection of parameters and controls to produce the desired result, compared to modifying one control at a time in a traditional graphical user interface. Furthermore, the issue of designer intent can be addressed by creating a mapping of natural language concepts onto combinations of parameters, allowing multi-dimensional movements in the design space of the generator rather than sequential manipulation of a series of controls to achieve the same effect.
In this thesis we explore the design and implementation of natural language interfaces in MI-PCG systems through the development of a design methodology for encoding natural language understanding into MI-PCG systems, and through the implementation of two proof-of-concept systems, CADI and WATER4-NL, for different use case scenarios: automated game design and shader manipulation, respectively. Furthermore, a user-study-based evaluation of WATER4-NL and its results are presented and discussed.
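The concept-to-parameter mapping described above can be illustrated with a minimal sketch. Everything here is hypothetical: the generator parameters (`density`, `enemy_rate`, `openness`), the vocabulary, and the deltas are invented for the example, and real NL understanding would go well beyond keyword spotting.

```python
# Hypothetical vocabulary mapping natural language concepts onto deltas over
# several parameters of an imagined level generator at once.
CONCEPTS = {
    "crowded":   {"density": +0.3, "openness": -0.2},
    "open":      {"openness": +0.4, "density": -0.1},
    "dangerous": {"enemy_rate": +0.3},
    "peaceful":  {"enemy_rate": -0.3},
}

def apply_query(params, query):
    """Translate a free-text query into a multi-parameter design-space move."""
    params = dict(params)
    for word in query.lower().split():
        for name, delta in CONCEPTS.get(word, {}).items():
            # clamp each parameter to its valid [0, 1] range
            params[name] = min(1.0, max(0.0, params[name] + delta))
    return params

base = {"density": 0.5, "enemy_rate": 0.5, "openness": 0.5}
print(apply_query(base, "make it open and peaceful"))
```

Note that a single query moves several parameters simultaneously, which is precisely the multi-dimensional design-space movement the abstract contrasts with one-control-at-a-time GUI editing.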
Assessing Influential Users in Live Streaming Social Networks
Live streaming has risen to significant popularity in the recent past, largely as a feature of existing social networks like Facebook, Instagram, and Snapchat. However, there exists at least one social network devoted entirely to live streaming, and specifically the live streaming of video games: Twitch. This social network is unique for a number of reasons, not least its hyper-focus on live content, and this uniqueness presents challenges for social media researchers.
Despite this uniqueness, almost no scientific work has been performed on this public social network. Thus, it is unclear which user interaction features present on other social networks exist on Twitch. Investigating the interactions between users, and identifying which, if any, of the common user behaviors on social networks exist on Twitch, is an important step in understanding how Twitch fits into the social media ecosystem. For example, there are users that have large followings on Twitch and amass a large number of viewers, but do those users exert influence over the behavior of other users the way that popular users on Twitter do?
This task, however, is not trivial. The same hyper-focus on live content that makes Twitch unique in the social network space invalidates many of the traditional approaches to social network analysis. Thus, new algorithms and techniques must be developed in order to tap this data source. In this thesis, a novel algorithm for finding games whose releases have made a significant impact on the network is described, as well as a novel algorithm for detecting and identifying influential players of games. In addition, the Twitch network is described in detail, along with the data that was collected to power the two algorithms.
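As a point of reference for what "a release making a significant impact" could mean, here is a deliberately simple baseline, not the thesis's actual algorithm: assuming a daily count of channels streaming a given game is available (an assumption for this sketch), it flags the day where post-window mean activity most exceeds the pre-window mean.

```python
def impact_day(counts, window=3):
    """Index whose post-window mean most exceeds its pre-window mean."""
    best_t, best_ratio = None, 1.0
    for t in range(window, len(counts) - window + 1):
        before = sum(counts[t - window:t]) / window
        after = sum(counts[t:t + window]) / window
        if before > 0 and after / before > best_ratio:
            best_t, best_ratio = t, after / before
    return best_t

# Synthetic daily channel counts with a jump at index 5 (a "release day").
daily_channels = [10, 12, 11, 13, 12, 95, 110, 120, 105, 100]
print(impact_day(daily_channels))  # flags day 5, where activity jumps
```

Any real network-level algorithm would also account for seasonality and viewer migration between channels; this baseline only shows the shape of the signal being hunted for.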
Computationally bounded rationality from three perspectives: precomputation, regret tradeoffs, and lifelong learning
What does it mean for a computer program to be optimal? Many fields in optimal decision making, from game theory to Bayesian decision theory, define optimal solutions which can be computationally intractable to implement or find. This is problematic, because it means that sometimes these solutions are not physically realizable. To address this problem, bounded rationality studies what it means to behave optimally subject to constraints on processing time, memory and knowledge. This thesis contributes three new models for studying bounded rationality in different contexts.
The first model considers games like chess. We suppose each player can spend some time before the game precomputing (memorizing) strong moves from an oracle, but has limited memory to remember these moves. We show how to analytically quantify how much optimal strategies randomize in equilibrium, and give polynomial-time algorithms for computing a best response and an ε-Nash equilibrium. We use the best response algorithm to empirically evaluate the chess-playing program Stockfish.
The second model takes place in the setting of adversarial online learning. Here, we imagine an algorithm receives new problems online, and is given a computational budget to run B problem solvers for each problem. We show how to trade off the budget B for a strengthening of the algorithm’s regret guarantee in both the full and semi-bandit feedback settings. We then show how this tradeoff implies new results for Online Submodular Function Maximization (OSFM) (Streeter and Golovin, 2008) and Linear Programming. We use these observations to derive and benchmark a new algorithm for OSFM.
The third model approaches bounded rationality from the perspective of lifelong learning (Chen and Liu, 2018). Instead of modelling the final solution, lifelong learning models how a computationally bounded agent can accumulate knowledge over time and attempt to solve tractable subproblems it encounters. We develop models for incrementally accumulating and learning knowledge in a domain-agnostic setting, and use these models to give an abstract framework for a lifelong reinforcement learner. The framework attempts to take a step toward combining analytical performance guarantees with black-box techniques, such as neural networks, which may perform well in practice.
Learning to Search in Reinforcement Learning
In this thesis, we investigate the use of search-based algorithms with deep neural networks to tackle a wide range of problems, from board games to video games and beyond. Drawing inspiration from AlphaGo, the first computer program to achieve superhuman performance in the game of Go, we developed a new algorithm, AlphaZero. AlphaZero is a general reinforcement learning algorithm that combines deep neural networks with Monte Carlo Tree Search for planning and learning. Starting completely from scratch, without any prior human knowledge beyond the basic rules of the game, AlphaZero achieved superhuman performance in Go, chess and shogi. Subsequently, building upon the success of AlphaZero, we investigated ways to extend our methods to problems in which the rules are not known or cannot be hand-coded. This line of work led to the development of MuZero, a model-based reinforcement learning agent that builds a deterministic internal model of the world and uses it to construct plans in its imagination. We applied our method to Go, chess, shogi and the classic Atari suite of video games, achieving superhuman performance. MuZero is the first RL algorithm to master a variety of both canonical challenges for high-performance planning and visually complex problems using the same principles. Finally, we describe Stochastic MuZero, a general agent that extends the applicability of MuZero to highly stochastic environments. We show that our method achieves superhuman performance in stochastic domains such as backgammon and the classic game of 2048, while matching the performance of MuZero in deterministic ones like Go.
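The planning loop underlying these agents can be sketched with a minimal Monte Carlo Tree Search (UCT). This is a stand-in, not AlphaZero itself: the domain is a toy subtraction game chosen for brevity, and random rollouts play the role of the learned value network that AlphaZero would use at leaves.

```python
import math, random

TAKE = (1, 2, 3)  # take 1-3 stones per turn; whoever takes the last stone wins

def moves(n):
    return [m for m in TAKE if m <= n]

def rollout(n, rng):
    """Uniform-random playout; +1 if the player to move at n wins."""
    sign = 1
    while n > 0:
        n -= rng.choice(moves(n))
        sign = -sign
    return -sign  # the previous mover took the last stone and won

class Node:
    def __init__(self, n):
        self.n, self.children, self.N, self.W = n, {}, 0, 0.0

def uct(parent, child, c):
    if child.N == 0:
        return float("inf")
    # child.W is from the child mover's perspective, so negate for the parent
    return -child.W / child.N + c * math.sqrt(math.log(parent.N) / child.N)

def simulate(node, rng, c=1.4):
    """One search iteration; returns the value for the player to move."""
    if node.n == 0:
        value = -1.0  # no stones left: the player to move has already lost
    elif not node.children:
        node.children = {m: Node(node.n - m) for m in moves(node.n)}
        value = rollout(node.n, rng)  # leaf evaluation by random playout
    else:
        m = max(node.children, key=lambda a: uct(node, node.children[a], c))
        value = -simulate(node.children[m], rng, c)
    node.N += 1
    node.W += value
    return value

def mcts(n, sims=4000, c=1.4, seed=0):
    rng = random.Random(seed)
    root = Node(n)
    for _ in range(sims):
        simulate(root, rng, c)
    return max(root.children, key=lambda m: root.children[m].N)  # most visited

# From 10 stones the winning move is to take 2, leaving a multiple of 4; with
# enough simulations the search tends to converge on that move.
print(mcts(10))
```

AlphaZero's contributions sit on top of this skeleton: a policy network to bias selection, a value network instead of rollouts, and self-play training of both; MuZero additionally learns the transition model that this sketch hand-codes as `moves`.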