A human-like TORCS controller for the Simulated Car Racing Championship
Proceedings of: IEEE Conference on Computational Intelligence and Games (CIG'10), Copenhagen (Denmark), 18-21 August 2010.

This paper presents a controller for the 2010 Simulated Car Racing Championship. The goal is not to create the fastest controller but a human-like one. To achieve this, we first created a process that builds a model of the track while the car is running, and then used several neural networks to predict the trajectory the car should follow and the target speed. A scripted policy handles gear changes and follows the predicted trajectory at the predicted speed. The neural networks are trained on data recorded from a human player and evaluated on a new track. The results show acceptable performance on unknown tracks: the controller is more than 20% slower than the human on the same tracks because of the mistakes it makes when trying to follow the trajectory.

This work was supported in part by the University Carlos III of Madrid under grant PIF UC3M01-0809 and by the Ministry of Science and Innovation under project TRA2007-67374-C02-02
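The scripted gear-change policy mentioned above can be sketched as a simple RPM-threshold rule, a common pattern in TORCS controllers. This is a hypothetical illustration: the thresholds, gear limit, and function interface are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an RPM-threshold gear-change policy.
# All constants are illustrative, not the paper's actual values.

UP_RPM = 7000.0    # shift up when engine speed exceeds this
DOWN_RPM = 3000.0  # shift down when engine speed falls below this
MAX_GEAR = 6

def next_gear(gear: int, rpm: float) -> int:
    """Return the gear to use at the next control step."""
    if rpm > UP_RPM and gear < MAX_GEAR:
        return gear + 1
    if rpm < DOWN_RPM and gear > 1:
        return gear - 1
    return gear
```

Such a rule is typically evaluated every control tick alongside the learned steering and speed targets, since gear selection is easy to script and gains little from learning.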
Player Modeling
Player modeling is the study of computational models of players in games. This includes the detection, modeling, prediction and expression of human player characteristics which are manifested
through cognitive, affective and behavioral patterns. This chapter introduces a holistic view of player modeling and provides a high-level taxonomy and discussion of the key components of a player's model. The discussion focuses on a taxonomy of approaches for constructing a player model, the available types of data for the model's input and a proposed classification for the model's output. The chapter also provides a brief overview of some promising applications and a discussion of the key challenges player modeling currently faces, which are linked to the input, the output and the computational model.
Assessment in and of serious games: an overview
There is a consensus that serious games have significant potential as a tool for instruction. However, their effectiveness in terms of learning outcomes is still understudied, mainly due to the complexity involved in assessing intangible measures. A systematic approach—based on established principles and guidelines—is necessary to enhance the design of serious games, and many studies lack a rigorous assessment. An important aspect in the evaluation of serious games, as with other educational tools, is user performance assessment. This is an important area of exploration because serious games are intended to evaluate learning progress as well as outcomes, which also emphasizes the importance of providing appropriate feedback to the player. Moreover, performance assessment enables adaptivity and personalization to meet individual needs in various aspects, such as learning styles, information provision rates, feedback, and so forth. This paper first reviews related literature regarding the educational effectiveness of serious games. It then discusses how to assess the learning impact of serious games and methods for competence and skill assessment. Finally, it suggests two major directions for future research: characterization of the player's activity and better integration of assessment in games.
An Empirical Study on Collective Intelligence Algorithms for Video Games Problem-Solving
Computational intelligence (CI), which includes evolutionary computation and swarm intelligence methods, is a set of bio-inspired algorithms that have been widely used to solve problems in areas like planning, scheduling or constraint satisfaction. Constraint satisfaction problems (CSPs) have attracted considerable attention from the research community due to their applicability to real problems. A CSP is usually modelled as a constraint graph whose edges represent a set of restrictions that must be satisfied by the variables (represented as nodes in the graph) that define the solution of the problem. This paper studies the performance of two particular CI algorithms, ant colony optimization (ACO) and genetic algorithms (GA), when dealing with graph-constrained models in video game problems. As an application domain, the "Lemmings" video game has been selected, in which a set of lemmings must reach the exit point of each level. To that end, each level is represented as a graph whose edges store the allowed movements inside the world. The goal of the algorithms is to assign the best skills to each position on a particular level, guiding the lemmings to the exit. The paper describes how the ACO and GA algorithms have been modelled and applied to the selected video game. Finally, a complete experimental comparison between both algorithms, based on the number of solutions found and the levels solved, is analysed to study the behaviour of those algorithms in the proposed domain.
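The GA side of the encoding described above can be sketched as follows: a chromosome assigns a skill (or none) to each node of the level graph, and standard crossover and mutation operators produce new candidate assignments. The skill set and representation here are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch of a GA encoding for the Lemmings problem: one gene per
# graph node, each gene holding a skill to use at that position (or None).
# The skill list is an illustrative assumption.
import random

SKILLS = [None, "digger", "builder", "blocker"]

def random_chromosome(n_nodes: int) -> list:
    """One candidate solution: a skill assignment over all level nodes."""
    return [random.choice(SKILLS) for _ in range(n_nodes)]

def crossover(parent_a: list, parent_b: list) -> list:
    """Single-point crossover over the node positions."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(chromosome: list, rate: float = 0.05) -> list:
    """Re-draw each gene independently with the given probability."""
    return [random.choice(SKILLS) if random.random() < rate else gene
            for gene in chromosome]
```

Fitness would then be computed by simulating the level with the assigned skills and counting how many lemmings reach the exit, which is the quantity the paper's comparison is based on.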
Study of Computational Intelligence Algorithms to Detect Behaviour Patterns
In order to achieve game flow and increase player retention, it is important that game difficulty matches player skill. Evaluating how people play a game is therefore a crucial component: by detecting players' strategies in video games, it is possible to adjust the game difficulty. The main problem in detecting strategies is whether the attributes selected to define them correctly capture the actions of the player. To study player strategies, we use a Real-Time Strategy (RTS) game. In an RTS, players make use of units and structures to secure areas of a map and/or destroy the opponents' resources. In this work, we extract real-time information about the players' strategies using a platform based on an RTS game. After gathering the information, the attributes that define the player strategies are evaluated using unsupervised learning algorithms (K-Means and Spectral Clustering). Finally, we study the similarity among several gameplays where
players use different strategies.

This work was funded by Airbus Defence & Space (Savier Project: FUAM-076914) and partially by TIN2010-19872.
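The clustering step described above can be made concrete with a minimal K-Means implementation. This is a sketch under assumptions: each gameplay is summarized as a fixed-length feature vector, and the two features used here (units built, attacks launched) are invented for illustration, not the attributes the study actually evaluated.

```python
# Tiny K-Means (Lloyd's algorithm) sketch for grouping gameplays by strategy.
# Initialization uses the first k points; real use would prefer k-means++.

def kmeans(points, k, iters=20):
    """Cluster feature vectors; return (centroids, label per point)."""
    centroids = [list(p) for p in points[:k]]

    def nearest(p):
        # index of the closest centroid by squared Euclidean distance
        return min(range(k),
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(p, centroids[c])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p)].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # empty clusters keep their old centroid
                centroids[i] = [sum(xs) / len(cl) for xs in zip(*cl)]
    return centroids, [nearest(p) for p in points]
```

For example, "rush"-style gameplays such as `(40, 9)` attacks-heavy vectors and "turtle"-style ones such as `(5, 1)` end up in different clusters, which is the kind of strategy separation the abstract describes.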
ChatGPT and Other Large Language Models as Evolutionary Engines for Online Interactive Collaborative Game Design
Large language models (LLMs) have taken the scientific world by storm,
changing the landscape of natural language processing and human-computer
interaction. These powerful tools can answer complex questions and,
surprisingly, perform challenging creative tasks (e.g., generate code and
applications to solve problems, write stories, pieces of music, etc.). In this
paper, we present a collaborative game design framework that combines
interactive evolution and large language models to simulate the typical human
design process. We use the former to exploit users' feedback for selecting the
most promising ideas and large language models for a very complex creative task
- the recombination and variation of ideas. In our framework, the process
starts with a brief and a set of candidate designs, either generated using a
language model or proposed by the users. Next, users collaborate on the design
process by providing feedback to an interactive genetic algorithm that selects,
recombines, and mutates the most promising designs. We evaluated our framework
on three game design tasks with human designers who collaborated remotely.
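The loop described above (user feedback drives selection, an LLM performs recombination and variation) can be sketched as follows. The LLM call is a stubbed, hypothetical hook, not a real API: in the framework it would prompt a model such as ChatGPT with the two parent design descriptions.

```python
# Sketch of one generation of the interactive-evolution loop.
# `recombine_with_llm` is a hypothetical placeholder for an LLM call.
import random

def recombine_with_llm(design_a: str, design_b: str) -> str:
    # In the real framework, this would send a prompt like
    # "Combine and vary these two game design ideas: ..." to an LLM.
    return f"hybrid of ({design_a}) and ({design_b})"

def evolve(designs, user_scores, n_offspring=4, rng=random):
    """Select the two top-rated designs, then recombine them into offspring."""
    ranked = [d for _, d in sorted(zip(user_scores, designs), reverse=True)]
    parents = ranked[:2]
    return [recombine_with_llm(*rng.sample(parents, 2))
            for _ in range(n_offspring)]
```

The division of labour mirrors the paper's framing: the genetic algorithm only needs comparative user ratings, while all semantic "mutation" of design text is delegated to the language model.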
How Fast Can We Play Tetris Greedily With Rectangular Pieces?
Consider a variant of Tetris played on a board of width and infinite
height, where the pieces are axis-aligned rectangles of arbitrary integer
dimensions, the pieces can only be moved before letting them drop, and a row
does not disappear once it is full. Suppose we want to follow a greedy
strategy: let each rectangle fall where it will end up the lowest given the
current state of the board. To do so, we want a data structure which can always
suggest a greedy move. In other words, we want a data structure which maintains
a set of rectangles, supports queries which return where to drop the
rectangle, and updates which insert a rectangle dropped at a certain position
and return the height of the highest point in the updated set of rectangles. We
show via a reduction to the Multiphase problem [Pătrașcu, 2010] that on
a board of width , if the OMv conjecture [Henzinger et al., 2015]
is true, then both operations cannot be supported in time
simultaneously. The reduction also implies polynomial bounds from the 3-SUM
conjecture and the APSP conjecture. On the other hand, we show that there is a
data structure supporting both operations in time on
boards of width , matching the lower bound up to a factor.
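The two operations the data structure must support can be made concrete with a brute-force version that scans the whole skyline per operation. This is only an illustrative baseline under the abstract's setup (rows never disappear, pieces drop greedily); the paper's actual contribution is a much faster structure and matching conditional lower bounds.

```python
# Naive sketch of the greedy-Tetris data structure: maintain the skyline of
# column heights, answer "where does this rectangle rest lowest?" queries,
# and apply the corresponding drop as an update.

class GreedyTetrisBoard:
    def __init__(self, width: int):
        self.heights = [0] * width  # current skyline, one entry per column

    def query(self, rect_width: int) -> int:
        """Leftmost x where a rectangle of this width would rest lowest."""
        best_x = 0
        best_top = max(self.heights[:rect_width])
        for x in range(1, len(self.heights) - rect_width + 1):
            top = max(self.heights[x:x + rect_width])
            if top < best_top:
                best_x, best_top = x, top
        return best_x

    def update(self, x: int, rect_width: int, rect_height: int) -> int:
        """Drop the rectangle at x; return the new maximum board height."""
        top = max(self.heights[x:x + rect_width]) + rect_height
        for i in range(x, x + rect_width):
            self.heights[i] = top  # rows never clear, so the skyline flattens
        return max(self.heights)
```

Both operations here cost time linear in the board width; the point of the reductions in the paper is that, under the stated conjectures, no data structure can make both substantially cheap at once.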