Learning the patterns of balance in a multi-player shooter game
A particular challenge of the game design process is when the designer is asked to orchestrate dissimilar elements of games, such as visuals, audio, narrative and rules, to achieve a specific play experience. Within the domain of adversarial first person shooter games, for instance, a designer must be able to comprehend the differences between the weapons available in the game, and appropriately craft a game level to take advantage of the strengths and weaknesses of those weapons. As an initial study towards computationally orchestrating dissimilar content generators in games, this paper presents a computational model which can classify a matchup of a team-based shooter game as balanced or as favoring one or the other team. The computational model uses convolutional neural networks to learn how game balance is affected by the level, represented as an image, and each team's weapon parameters. The model was trained on a corpus of over 50,000 simulated games with artificial agents on a diverse set of levels created by 39 different generators. The results show that fusing the level, processed by a convolutional neural network, with the weapon parameters not only yields an accuracy far above the baseline but also improves accuracy compared to artificial neural networks or models which use partial information, such as only the weapons or only the level as input.
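The late-fusion idea described in this abstract can be sketched in miniature: extract features from the level image with a convolution, then concatenate them with both teams' weapon parameters before classification. The filter weights, level grid, and weapon values below are invented toy numbers, and the hand-written convolution stands in for the paper's trained CNN.

```python
# Illustrative sketch (not the paper's actual model): "late fusion" of a
# convolved level image with weapon parameters. All filter weights, the
# level grid, and the weapon vectors below are made-up toy values.

def convolve2d(grid, kernel):
    """Valid 2D convolution of a grid (list of lists) with a 3x3 kernel."""
    h, w = len(grid), len(grid[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(grid[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out

def fused_features(level, weapons_a, weapons_b):
    """Concatenate a pooled level feature with both teams' weapon parameters."""
    edge = [[-1, -1, -1], [2, 2, 2], [-1, -1, -1]]  # toy horizontal-edge filter
    fmap = convolve2d(level, edge)
    pooled = sum(v for row in fmap for v in row) / (len(fmap) * len(fmap[0]))
    return [pooled] + weapons_a + weapons_b

level = [[0, 0, 1, 1, 0],
         [0, 0, 1, 1, 0],
         [0, 0, 0, 0, 0],
         [1, 1, 0, 0, 0],
         [1, 1, 0, 0, 0]]          # 1 = wall, 0 = walkable
team_a = [0.8, 0.3]               # e.g. damage, fire rate (toy values)
team_b = [0.5, 0.6]

x = fused_features(level, team_a, team_b)
print(len(x))  # one pooled level feature plus four weapon parameters
```

In the actual model the filters are learned and the fused vector feeds further dense layers; the point here is only the shape of the fusion.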
Pairing character classes in a deathmatch shooter game via a deep-learning surrogate model
This paper introduces a surrogate model of gameplay that learns the mapping between different game facets, and applies it to a generative system which designs new content in one of these facets. Focusing on the shooter game genre, the paper explores how deep learning can help build a model which combines the game level structure and the game's character class parameters as input and the gameplay outcomes as output. The model is trained on a large corpus of game data from simulations with artificial agents in random sets of levels and class parameters. The model is then used to generate classes for specific levels and for a desired game outcome, such as balanced matches of short duration. Findings in this paper show that the system can be expressive and can generate classes for both computer generated and human authored levels.
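The generative step this abstract describes, searching for class parameters that a surrogate predicts will produce a desired outcome, can be illustrated with a toy hill climb. The surrogate below is a hand-made placeholder (win rate from a "power" product), not the paper's trained deep network, and the parameter names are invented.

```python
import random

# Toy sketch of surrogate-driven content generation: search over character
# class parameters so that a stand-in surrogate model predicts a balanced
# match (win rate near 0.5).

def surrogate_balance(params_a, params_b):
    """Placeholder surrogate: predicts team A's win rate from class 'power'."""
    power = lambda p: p["damage"] * p["speed"]
    pa, pb = power(params_a), power(params_b)
    return pa / (pa + pb)

def generate_balanced_class(fixed_class, steps=2000, seed=0):
    """Hill-climb a second class so the predicted win rate approaches 0.5."""
    rng = random.Random(seed)
    best = {"damage": 1.0, "speed": 1.0}
    best_err = abs(surrogate_balance(fixed_class, best) - 0.5)
    for _ in range(steps):
        trial = {k: max(0.1, v + rng.uniform(-0.1, 0.1)) for k, v in best.items()}
        err = abs(surrogate_balance(fixed_class, trial) - 0.5)
        if err < best_err:          # keep the mutation only if it improves balance
            best, best_err = trial, err
    return best, best_err

opponent = {"damage": 2.0, "speed": 1.5}   # fixed, pre-authored class
cls, err = generate_balanced_class(opponent)
print(round(err, 4))
```

The paper's system replaces both pieces: a deep network serves as the surrogate, and the search can target outcomes beyond balance, such as match duration.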
A framework for quantitative analysis of user-generated spatial data
This paper proposes a new framework for the automated analysis of gameplay metrics, aiding game designers in identifying critical aspects of the game caused by factors such as design modifications or changes in playing style. The core of the algorithm measures similarity between spatial distributions of user-generated in-game events and automatically ranks them in order of importance. The feasibility of the method is demonstrated on a data set collected from a modern, multiplayer First Person Shooter, together with examples of its application. The proposed framework can accompany traditional testing tools and make the game design process more efficient.
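The core measurement the abstract describes can be sketched as binning event positions into spatial histograms and ranking sessions by how far they deviate from a baseline. The grid size, distance metric (total variation), and sample coordinates below are invented for illustration and need not match the paper's choices.

```python
# Illustrative sketch: bin user-generated in-game events (e.g. death
# positions) into a spatial histogram per play session, then rank sessions
# by how much their distribution deviates from a baseline build.

def heatmap(events, size=4, extent=100.0):
    """Bin (x, y) event positions into a normalized size x size histogram."""
    grid = [[0.0] * size for _ in range(size)]
    for x, y in events:
        i = min(size - 1, int(x / extent * size))
        j = min(size - 1, int(y / extent * size))
        grid[i][j] += 1.0
    total = sum(sum(row) for row in grid) or 1.0
    return [[v / total for v in row] for row in grid]

def tv_distance(h1, h2):
    """Total-variation distance between two normalized heatmaps (0..1)."""
    return 0.5 * sum(abs(a - b) for r1, r2 in zip(h1, h2)
                     for a, b in zip(r1, r2))

def rank_by_deviation(baseline_events, sessions):
    """Rank named sessions by dissimilarity to the baseline distribution."""
    base = heatmap(baseline_events)
    scores = {name: tv_distance(base, heatmap(ev))
              for name, ev in sessions.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])

baseline = [(10, 10), (12, 11), (90, 90), (88, 92)]
sessions = {
    "patch_A": [(11, 9), (13, 12), (91, 89), (87, 93)],   # similar to baseline
    "patch_B": [(50, 50), (52, 48), (49, 51), (51, 49)],  # shifted hotspot
}
ranking = rank_by_deviation(baseline, sessions)
print(ranking[0][0])  # → patch_B
```

A designer reading this ranking would inspect "patch_B" first, since its event hotspot moved the most relative to the baseline build.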
The design-by-adaptation approach to universal access: learning from videogame technology
This paper proposes an alternative approach to the design of universally accessible interfaces to that provided by formal design frameworks applied ab initio to the development of new software. This approach, design-by-adaptation, involves the transfer of interface technology and/or design principles from one application domain to another, in situations where the recipient domain is similar to the host domain in terms of modelled systems, tasks and users. Using the example of interaction in 3D virtual environments, the paper explores how principles underlying the design of videogame interfaces may be applied to a broad family of visualization and analysis software which handles geographical data (virtual geographic environments, or VGEs). One of the motivations behind the current study is that VGE technology lags some way behind videogame technology in the modelling of 3D environments, and has a less-developed track record in providing the variety of interaction methods needed to undertake varied tasks in 3D virtual worlds by users with varied levels of experience. The current analysis extracted a set of interaction principles from videogames which were used to devise a set of 3D task interfaces that have been implemented in a prototype VGE for formal evaluation.
The Effects of Displayed Violence and Game Speed in First-Person Shooters on Physiological Arousal and Aggressive Behavior
Many studies have been conducted to examine the effects of displayed violence in digital games on outcomes like aggressive behavior and physiological arousal. However, they often lack a proper manipulation of the relevant factors and control of confounding variables.
In this study, the displayed violence and game speed of a recent first-person shooter game were varied systematically using the technique of modding, so that effects could be explained properly by the respective manipulations. Aggressive behavior was measured with the standardized version of the Competitive Reaction Time Task or CRTT (Ferguson et al., 2008). Physiological arousal was operationalized with four measurements: galvanic skin response (GSR), heart rate (HR), body movement, and force applied to mouse and keyboard.
A total of N = 87 participants played in one of four game conditions (low vs. high violence, normal vs. high speed) while physiological measurements were taken with finger clips, force sensors on input devices (mouse and keyboard), and a Nintendo Wii balance board on the chair they sat on. After play, their aggressive behavior was measured with the CRTT.
The results of the study do not support the hypothesis that playing digital games increases aggressive behavior. There were no significant differences in GSR and HR, but with a higher game speed, participants showed less overall body movement, most likely to meet the game's higher demands on cognitive and motor capacities. Also, higher game speed and displayed violence caused an increase in the force applied to mouse and keyboard. Previous experience with digital games did not moderate any of these findings. Moreover, the study provides further evidence that the CRTT should only be used in a standardized way as a measurement for aggression, if at all. Using all 7 different published (though not validated) ways to calculate levels of aggression from the raw data, "evidence" was found that playing a violent digital game increases, decreases, or does not change aggression at all.
Thus, the present study extends previous research. Firstly, it shows the methodological advantages of modding in digital game research for realizing the principles of psychological (laboratory) experiments by manipulating relevant variables and controlling all others. It also demonstrates the test-theoretical problems of the highly diverse use of the CRTT. Finally, it provides evidence that for a meaningful interpretation of the effects of displayed violence in digital games, other game characteristics should be controlled for, since they might have an effect on relevant outcome variables. Further research needs to identify more of those game features, and it should also improve the understanding of the different measures of physiological arousal and their interrelatedness.
Games and Brain-Computer Interfaces: The State of the Art
BCI gaming is a very young field; most games are proof-of-concepts. Work that compares BCIs in game environments with traditional BCIs indicates no negative effects, or even a positive effect of rich visual environments on performance. The low information transfer rate of current BCIs poses a problem for game control; this is often solved by changing the goal of the game. Multi-modal input combined with BCI forms a promising solution, as does assigning more meaningful functionality to BCI control.
Adaptive Agent Architecture for Real-time Human-Agent Teaming
Teamwork is a set of interrelated reasoning, actions and behaviors of team members that facilitate common objectives. Teamwork theory and experiments have resulted in a set of states and processes for team effectiveness in both human-human and agent-agent teams. However, human-agent teaming is less well studied because it is so new and involves asymmetry in policy and intent not present in human teams. To optimize team performance in human-agent teaming, it is critical that agents infer human intent and adapt their policies for smooth coordination. Most literature in human-agent teaming builds agents that reference a learned human model. Though these agents are guaranteed to perform well with the learned model, they place strong assumptions on human policy, such as optimality and consistency, which are unlikely to hold in many real-world scenarios. In this paper, we propose a novel adaptive agent architecture in a human-model-free setting for a two-player cooperative game, namely Team Space Fortress (TSF). Previous human-human team research has shown complementary policies in the TSF game and diversity in human players' skill, which encourages us to relax the assumptions on human policy. Therefore, we discard learning human models from human data, and instead use an adaptation strategy on a pre-trained library of exemplar policies composed of RL algorithms or rule-based methods with minimal assumptions about human behavior. The adaptation strategy relies on a novel similarity metric to infer human policy and then selects the most complementary policy in our library to maximize team performance. The adaptive agent architecture can be deployed in real-time and generalizes to any off-the-shelf static agents. We conducted human-agent experiments to evaluate the proposed adaptive agent framework, and demonstrated the suboptimality, diversity, and adaptability of human policies in human-agent teams.
Comment: The first three authors contributed equally. In AAAI 2021 Workshop on Plan, Activity, and Intent Recognition.
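The adaptation loop this abstract outlines can be sketched in a few lines: infer which exemplar policy the human most resembles via a similarity metric over observed action distributions, then deploy the teammate pre-matched to that exemplar. The policy names, distributions, pairing table, and the L1-based metric below are all invented for illustration; the paper's similarity metric and policy library differ.

```python
# Toy sketch of human-model-free adaptation: match observed human behavior
# to the closest exemplar policy, then select its complementary teammate.

def l1_similarity(p, q):
    """Similarity in [0, 1]: 1 minus half the L1 distance between distributions."""
    return 1.0 - 0.5 * sum(abs(p[a] - q[a]) for a in p)

# Exemplar human-like policies as action distributions (toy values).
exemplars = {
    "aggressive": {"attack": 0.7, "evade": 0.1, "orbit": 0.2},
    "cautious":   {"attack": 0.2, "evade": 0.6, "orbit": 0.2},
}
# Pre-computed best partner for each exemplar (stand-in for the policy library).
complement = {"aggressive": "support_agent", "cautious": "striker_agent"}

def select_teammate(observed_counts):
    """Infer the closest exemplar from observed human actions, return partner."""
    total = sum(observed_counts.values())
    observed = {a: c / total for a, c in observed_counts.items()}
    best = max(exemplars,
               key=lambda name: l1_similarity(observed, exemplars[name]))
    return complement[best]

print(select_teammate({"attack": 13, "evade": 2, "orbit": 5}))  # → support_agent
```

Because the inference runs on action counts gathered during play, this kind of selection can be re-evaluated online, which is what allows real-time deployment.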
From pixels to affect: a study on games and player experience
Is it possible to predict the affect of a user just by observing her behavioral interaction through a video? How can we, for instance, predict a user's arousal in games by merely looking at the screen during play? In this paper we address these questions by employing three dissimilar deep convolutional neural network architectures in our attempt to learn the underlying mapping between video streams of gameplay and the player's arousal. We test the algorithms on an annotated dataset of 50 gameplay videos of a survival shooter game and evaluate the deep learned models' capacity to classify high vs low arousal levels. Our key findings with the demanding leave-one-video-out validation method reveal accuracies of over 78% on average and 98% at best. While this study focuses on games and player experience as a test domain, the findings and methodology are directly relevant to any affective computing area, introducing a general and user-agnostic approach for modeling affect.
This paper is funded, in part, by the H2020 project Com N Play Science (project no: 787476).
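The leave-one-video-out protocol mentioned above is demanding because every frame of the test video is withheld from training, preventing within-video leakage. The sketch below illustrates the fold structure only: the 1-nearest-neighbor "model" and scalar frame features are invented stand-ins for the paper's deep convolutional networks.

```python
# Minimal sketch of leave-one-video-out (LOVO) validation: each fold holds
# out all frames of one video, trains on the rest, and accuracy is averaged
# over every held-out frame.

def nearest_label(train, x):
    """1-NN classifier over (feature, label) pairs; toy stand-in model."""
    return min(train, key=lambda fl: abs(fl[0] - x))[1]

def leave_one_video_out(videos):
    """videos: {video_id: [(frame_feature, arousal_label), ...]} -> accuracy."""
    correct = total = 0
    for held_out in videos:
        train = [fl for vid, frames in videos.items()
                 if vid != held_out for fl in frames]   # exclude the whole video
        for feature, label in videos[held_out]:
            correct += nearest_label(train, feature) == label
            total += 1
    return correct / total

videos = {  # toy data: brighter frames labeled high arousal
    "v1": [(0.1, "low"), (0.2, "low")],
    "v2": [(0.8, "high"), (0.9, "high")],
    "v3": [(0.15, "low"), (0.85, "high")],
}
print(leave_one_video_out(videos))  # → 1.0 on this toy data
```

A random train/test split over frames would let near-duplicate frames of the same video land on both sides, inflating accuracy; LOVO rules that out.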
- …