
    Measuring Control to Dynamically Induce Flow in Tetris.

    Dynamic Difficulty Adjustment (DDA) is a set of techniques that aim to automatically adapt the difficulty of a video game based on the player's performance. This paper presents a methodology for DDA using ideas from the theory of flow and case-based reasoning (CBR). In essence, we aim to generate game sessions with a difficulty evolution similar to that of previous game sessions that produced flow in players of a similar skill level. We propose a CBR approach to dynamically assess the player's skill level and adapt the difficulty of the game based on the relative complexity of the most recent game states. We develop a DDA system for Tetris using this methodology and show, in an experiment with 40 participants, that the DDA version has a measurable impact on perceived flow as measured with validated questionnaires.
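The retrieval-and-reuse step of a CBR-based DDA loop can be sketched as below. The case representation, distance metric, and difficulty curves are illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch of case-based reasoning for dynamic difficulty
# adjustment: retrieve the stored session whose player skill level is
# closest to the current estimate, then reuse its difficulty curve.

def retrieve_case(case_base, skill_estimate):
    """Return the (skill, difficulty_curve) case nearest to skill_estimate."""
    return min(case_base, key=lambda case: abs(case[0] - skill_estimate))

def next_difficulty(case_base, skill_estimate, step):
    """Look up the difficulty for the next game step from the retrieved case."""
    _, curve = retrieve_case(case_base, skill_estimate)
    return curve[min(step, len(curve) - 1)]

# Case base: sessions that previously produced flow, keyed by skill level.
cases = [
    (0.2, [1, 1, 2, 2, 3]),   # low-skill player: gentle ramp
    (0.8, [2, 3, 4, 5, 6]),   # high-skill player: steeper ramp
]

print(next_difficulty(cases, 0.75, 3))  # reuses the high-skill curve -> 5
```

In a full CBR cycle the completed session, together with the player's measured flow, would be retained as a new case; the sketch covers only retrieval and reuse.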

    A Software Design Pattern Based Approach to Auto Dynamic Difficulty in Video Games

    Video game players vary drastically in skill level, reflex speed, hand-eye coordination, tolerance for frustration, and motivation. Auto dynamic difficulty (ADD) in video games refers to the technique of automatically adjusting different aspects of a video game in real time, based on the player's ability and emergent factors, in order to provide an optimal experience to users from such a large demographic and to increase replay value. In this thesis, we describe a collection of software design patterns for enabling auto dynamic difficulty in video games. We also discuss the benefits of a design-pattern-based approach in terms of software quality factors and process improvements, based on our experience applying it in three different video games. Additionally, we present a semi-automatic framework to assist in applying our design-pattern-based approach in video games. Finally, we conducted a preliminary user study in which a Post-Degree Diploma student at the University of Western Ontario applied the design-pattern-based approach to create ADD in two arcade-style games.
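One pattern in the spirit of this thesis can be sketched as a strategy-style adjuster that decouples performance measurement from the game elements being tuned. The class names, thresholds, and scaling factors are illustrative, not the thesis's actual pattern catalogue:

```python
# Hypothetical sketch of an auto-dynamic-difficulty pattern: tunable game
# elements register with an adjuster that retunes them from performance data.

class EnemySpeed:
    """A tunable game element; other elements could register the same way."""
    def __init__(self):
        self.value = 1.0

    def scale(self, factor):
        self.value *= factor

class DifficultyAdjuster:
    """Observes player performance and retunes registered elements in real time."""
    def __init__(self, elements):
        self.elements = elements

    def on_performance(self, success_rate):
        # Ease off below 40% success, push harder above 80% (illustrative bounds).
        factor = 0.9 if success_rate < 0.4 else 1.1 if success_rate > 0.8 else 1.0
        for element in self.elements:
            element.scale(factor)

speed = EnemySpeed()
DifficultyAdjuster([speed]).on_performance(0.2)  # struggling player
print(speed.value)  # eased to 0.9
```

The separation means new tunable elements can be added without touching the adjustment logic, which is the kind of software-quality benefit the thesis attributes to the pattern-based approach.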

    Evolution of Flow in Games

    Everyone wants to play a fun game, but "fun" is a subjective quality. Flow, a psychological theory that defines what "fun" is, states that for an activity to be considered fun, the challenge it presents must correlate with the participant's abilities such that the activity is neither too easy nor too difficult. One of the biggest problems for game designers is balancing the difficulty of a game's content in such a way that it appeals to the largest audience possible. In order to broaden audiences, developers need to invest effort into creating numerous, discrete balances aligned to varying difficulty levels. Even then, these discrete categories never exactly match more than a few people's abilities. Previous research has created systems that adjust online, changing the difficulty the system throws at a player as he or she plays the game. Creators of these systems often state that more complex evolutionary methods, such as genetic algorithms, are not viable for such online learning because they lack efficiency and effectiveness. However, newer techniques such as generative grammatical encodings have been shown to break these assumptions of inefficiency, creating the possibility that they might now be a viable option. In my research, I implement a game system that uses an interactive genetic algorithm with generative grammatical encodings as a proof of concept that such a system can noticeably balance a game's difficulty online for any given player. This effect is backed up with test results from the field on how players felt it adjusted to them.
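The core online loop of such a system can be sketched with a plain genetic algorithm. The fitness function below is a stand-in for interactive player feedback (closeness of a candidate difficulty to the player's ideal challenge), and the grammatical-encoding step is omitted for brevity; none of the specifics come from the thesis:

```python
# Minimal sketch of an online genetic algorithm for difficulty balancing.
import random

def evolve(population, player_ideal, generations=20, seed=0):
    rng = random.Random(seed)
    fitness = lambda d: -abs(d - player_ideal)  # higher is better
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        # Keep the stronger half (elitism) and replace the weaker half
        # with mutated copies of the survivors.
        population = parents + [p + rng.uniform(-0.5, 0.5) for p in parents]
    return max(population, key=fitness)

# Candidate difficulty settings evolve toward the player's ideal challenge.
best = evolve([1.0, 3.0, 6.0, 9.0], player_ideal=4.2)
```

Because parents survive unchanged each generation, the best candidate never regresses, which matters when every evaluation corresponds to real play time.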

    Bayesian learning of noisy Markov decision processes

    We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. It includes a parameter-expansion step, which is shown to be essential for good convergence properties of the sampler. As an illustration, the method is applied to learning a human controller.
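The basic MCMC ingredient behind posterior simulation schemes like this one is a random-walk Metropolis-Hastings sampler, sketched below for a single parameter under a toy Gaussian likelihood. The model is a placeholder, not the authors' noisy-MDP formulation, and the parameter-expansion step is not shown:

```python
# Illustrative random-walk Metropolis-Hastings sampler.
import math, random

def log_posterior(beta, data):
    # Flat prior on beta; unit-variance Gaussian likelihood centred on beta.
    return -0.5 * sum((x - beta) ** 2 for x in data)

def metropolis(data, n_samples=5000, step=0.5, seed=1):
    rng = random.Random(seed)
    beta, samples = 0.0, []
    for _ in range(n_samples):
        proposal = beta + rng.gauss(0.0, step)
        # Accept with probability min(1, posterior ratio), in log space.
        if math.log(rng.random()) < log_posterior(proposal, data) - log_posterior(beta, data):
            beta = proposal
        samples.append(beta)
    return samples

data = [1.8, 2.1, 2.3, 1.9, 2.0]
samples = metropolis(data)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The retained draws approximate the posterior distribution, so summaries such as the mean or credible intervals come directly from the sample.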

    How Fast Can We Play Tetris Greedily With Rectangular Pieces?

    Consider a variant of Tetris played on a board of width w and infinite height, where the pieces are axis-aligned rectangles of arbitrary integer dimensions, the pieces can only be moved before letting them drop, and a row does not disappear once it is full. Suppose we want to follow a greedy strategy: let each rectangle fall where it will end up the lowest given the current state of the board. To do so, we want a data structure which can always suggest a greedy move. In other words, we want a data structure which maintains a set of O(n) rectangles, supports queries which return where to drop the rectangle, and supports updates which insert a rectangle dropped at a certain position and return the height of the highest point in the updated set of rectangles. We show via a reduction to the Multiphase problem [Pătraşcu, 2010] that on a board of width w = Θ(n), if the OMv conjecture [Henzinger et al., 2015] is true, then both operations cannot be supported in O(n^{1/2-ε}) time simultaneously. The reduction also implies polynomial bounds from the 3-SUM conjecture and the APSP conjecture. On the other hand, we show that there is a data structure supporting both operations in O(n^{1/2} log^{3/2} n) time on boards of width n^{O(1)}, matching the lower bound up to an n^{o(1)} factor.
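The two operations can be made concrete with a naive linear scan over column heights; the paper's data structure achieves roughly O(n^{1/2}) per operation, whereas this sketch is O(w) and only illustrates the greedy rule:

```python
# Naive sketch of the greedy rule: drop each axis-aligned rectangle where
# it lands lowest, breaking ties to the left (tie-breaking is an assumption).

def greedy_drop(heights, rect_w, rect_h):
    """Find the lowest landing spot, apply the drop, return the board's new max height."""
    w = len(heights)
    # Query: position whose supporting height (max over the rectangle's span)
    # is minimal.
    best = min(range(w - rect_w + 1), key=lambda i: max(heights[i:i + rect_w]))
    # Update: the rectangle rests on that support and raises those columns.
    top = max(heights[best:best + rect_w]) + rect_h
    for i in range(best, best + rect_w):
        heights[i] = top
    return max(heights)

board = [0, 0, 0, 0, 0]
greedy_drop(board, 2, 1)  # first 2x1 rectangle lands at the left edge
```

The query and update here are exactly the two operations whose combined cost the paper lower-bounds.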

    Applying quantitative models to evaluate complexity in video game systems

    Thesis (M.Eng.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2009. Cataloged from the PDF version of the thesis; includes bibliographical references (p. 41).
    This thesis proposes a game evaluation model that reports significant statistics about the complexity of a game's various systems. Quantitative complexity measurements allow designers to make accurate decisions about how to manage challenge, keeping in mind the player's physical and mental resources and the number and type of actions the game requires players to perform. Managing operational challenge is critical to keeping players in a state of enjoyment, the primary purpose of video games. The thesis first investigates the relationship between enjoyment and complexity through the concept of flow. From there it examines the properties of GOMS that are useful for analyzing video games, using Tetris as a case study; it then dissects the shortcomings of a direct usability approach and offers solutions based on a strategy game example. A third case study, of the idle-worker scenario in strategy games, further corroborates the usefulness of applying a GOMS-based analysis to video games. Using quantitative measurements of complexity, future research can tackle difficulty and challenge precisely, mitigate complexity to widen market appeal, and even reveal new genre possibilities. By Matthew Tanwanteng, M.Eng.
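The flavor of a GOMS-style quantitative measurement can be shown with the Keystroke-Level Model, a simple GOMS variant: sum standard operator durations over the method a player executes. The operator times are the standard KLM averages; the Tetris move sequence is an illustrative assumption, not taken from the thesis:

```python
# Back-of-the-envelope Keystroke-Level Model estimate of the operational
# cost of a single Tetris move.

# Standard KLM operator durations in seconds (Card, Moran & Newell):
# K = keystroke, M = mental preparation, P = pointing.
KLM = {"K": 0.28, "M": 1.35, "P": 1.10}

def task_time(operators):
    """Sum operator durations for a method, e.g. 'MKKK' = think, then 3 keys."""
    return sum(KLM[op] for op in operators)

# Rotate twice, shift once, hard-drop: one mental step then four keystrokes.
print(round(task_time("MKKKK"), 2))  # 1.35 + 4 * 0.28 = 2.47
```

Comparing such estimates across pieces or game states gives the kind of operational-complexity statistic the model is meant to report.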

    Using Ant Colony Optimization to Control Difficulty in Video Game AI.

    Ant colony optimization (ACO) is an algorithm that simulates ant foraging behavior. When ants search for food, they leave pheromone trails that tell other ants which paths to take to find food. ACO has been adapted to many different problems in computer science, mainly variations on shortest-path algorithms for graphs and networks. ACO can also be adapted to work as a form of communication between separate agents in a video game AI. By controlling the effectiveness of this communication, it should be possible to control the difficulty of the game. Experimentation has shown that ACO works effectively as a form of communication between agents and supports the claim that ACO is an effective form of difficulty control. However, further experimentation is needed to show definitively that ACO is effective at controlling difficulty and that it will also work in a large-scale system.
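The pheromone mechanism and the proposed difficulty knob can be sketched as follows. The two-edge graph, deposit amounts, and evaporation rate are toy assumptions; "effectiveness" scales how strongly deposits influence later choices, and turning it down is the hypothesized difficulty control:

```python
# Toy sketch of ACO as agent communication with a tunable effectiveness knob.
import random

def choose_edge(pheromone, edges, effectiveness, rng):
    # Weight each edge by 1 + effectiveness * pheromone level.
    weights = [1.0 + effectiveness * pheromone[e] for e in edges]
    return rng.choices(edges, weights=weights)[0]

def simulate(effectiveness, trips=200, seed=3):
    rng = random.Random(seed)
    edges = ["short", "long"]
    pheromone = {"short": 0.0, "long": 0.0}
    uses = {"short": 0, "long": 0}
    for _ in range(trips):
        e = choose_edge(pheromone, edges, effectiveness, rng)
        uses[e] += 1
        # Shorter paths are completed faster, so they receive more deposit.
        pheromone[e] += 2.0 if e == "short" else 1.0
        for k in pheromone:          # evaporation keeps trails bounded
            pheromone[k] *= 0.95
    return uses["short"] / trips     # fraction of trips on the better path
```

With effectiveness at zero the agents ignore each other and choose paths at random; raising it lets the colony converge on the better path, i.e., act more competently.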

    Dynamic Threshold Selection for a Biocybernetic Loop in an Adaptive Video Game Context

    Passive brain-computer interfaces (pBCIs) are a human-computer communication tool in which the computer detects the user's current mental or emotional state from neurophysiological signals. The system can then adjust itself to guide the user toward a desired state. One challenge facing developers of pBCIs is that the system's parameters are generally set at the onset of the interaction and remain fixed throughout, not adapting to potential changes over time such as fatigue. The goal of this paper is to investigate improving pBCIs by adjusting their settings according to information provided by a second neurophysiological signal. With the use of a second signal, making the system a hybrid pBCI, those parameters can be continuously adjusted with dynamic thresholding to respond to variations such as fatigue or learning. In this experiment, we hypothesize that the adaptive system with dynamic thresholding will improve perceived game experience and objective game performance compared to two other conditions: an adaptive system with a single-signal biocybernetic loop and a non-adaptive control game. A within-subject experiment was conducted with 16 participants using three versions of the game Tetris. Each participant played 15 min of Tetris under each of the three experimental conditions. The control condition was the traditional game of Tetris with a progressive increase in speed. The second condition was a cognitive-load-only biocybernetic loop with the parameters presented in Ewing et al. (2016). The third condition was our proposed biocybernetic loop using dynamic threshold selection. Electroencephalography was used as the primary signal and automatic facial expression analysis as the secondary signal. Our results show that, contrary to our expectations, the adaptive systems did not improve the participants' experience: participants reported more negative affect in the BCI conditions than in the control condition. We endeavored to develop a system that improved upon the original version of Tetris; however, our proposed adaptive system improved neither players' perceived experience nor their objective performance. Nevertheless, this experience can inform developers of hybrid passive BCIs about a novel way to employ multiple neurophysiological features simultaneously.
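The dynamic-thresholding idea can be sketched as a threshold on the primary index that is re-estimated online from the secondary signal instead of staying at its calibration value. The window length and the recent-mean rule are illustrative assumptions, not the paper's exact parameters:

```python
# Hedged sketch of dynamic threshold selection in a hybrid passive BCI.
from collections import deque

class DynamicThreshold:
    def __init__(self, initial, window=30):
        self.threshold = initial
        self.recent = deque(maxlen=window)  # recent secondary-signal values

    def update(self, secondary_value):
        """Re-centre the threshold on the recent secondary-signal mean."""
        self.recent.append(secondary_value)
        self.threshold = sum(self.recent) / len(self.recent)
        return self.threshold

    def overloaded(self, primary_value):
        """Trigger game adaptation when the primary index crosses the threshold."""
        return primary_value > self.threshold

monitor = DynamicThreshold(initial=0.5)
for value in [0.6, 0.8, 1.0]:   # secondary signal drifting upward, e.g. fatigue
    monitor.update(value)
# The threshold has followed the drift instead of staying at 0.5.
```

A fixed-threshold loop would keep firing as the baseline drifts; letting the second signal move the threshold is what distinguishes the hybrid condition from the single-signal one.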

    Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty
