    Coevolutionary Approaches to Generating Robust Build-Orders for Real-Time Strategy Games

    We aim to find winning build-orders for Real-Time Strategy games. Real-Time Strategy games provide a variety of challenges, from short-term control to longer-term planning. We focus on a longer-term planning problem: which units to build, and in what order, so that a player successfully defeats the opponent. Plans that address this unit-construction scheduling problem in Real-Time Strategy games are called build-orders. A robust build-order defeats many opponents, while a strong build-order defeats opponents quickly. However, no single build-order defeats all other build-orders, and build-orders that defeat many opponents may still lose against a specific opponent. Other researchers have investigated only the generation of build-orders that defeat a specific opponent, rather than finding robust, strong build-orders, and previous research has not applied coevolutionary algorithms to generating build-orders. In contrast, our research makes three main contributions to finding robust, strong build-orders. First, we apply a coevolutionary algorithm to finding robust build-orders: compared to exhaustive search, a genetic algorithm finds the strongest build-orders, while a coevolutionary algorithm finds more robust ones. Second, we show that case-injection enables coevolution to learn from specific opponents while maintaining robustness; build-orders produced with coevolution and case-injection learn to defeat, or play like, the injected build-orders. Third, we show that coevolved build-orders benefit from a representation that includes branches and loops: coevolution uses multiple branches and loops to create build-orders that are stronger than build-orders without them. We believe this work provides evidence that coevolutionary algorithms may be a viable approach to creating robust, strong build-orders for Real-Time Strategy games.
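
    To make the contributions concrete, below is a minimal Python sketch of the kind of coevolutionary loop described above; it is not the paper's implementation. Build-orders are flat action lists (no branches or loops), fitness is robustness against the current population, and simulate(a, b) is a hypothetical match function returning 1 if build-order a defeats b and 0 otherwise; the action set, operators, and parameters are all illustrative assumptions.

    import random

    ACTIONS = ["worker", "barracks", "soldier", "archer"]  # illustrative unit/building set

    def random_build_order(length=12):
        return [random.choice(ACTIONS) for _ in range(length)]

    def robustness(candidate, opponents, simulate):
        # A robust build-order is one that defeats many distinct opponents.
        return sum(simulate(candidate, opp) for opp in opponents)

    def coevolve(simulate, pop_size=20, generations=50, mutation_rate=0.1):
        population = [random_build_order() for _ in range(pop_size)]
        for _ in range(generations):
            # Evaluate each candidate against the rest of the population,
            # so the opponent pool adapts as the candidates evolve.
            scored = sorted(
                population,
                key=lambda c: robustness(c, [o for o in population if o is not c], simulate),
                reverse=True)
            parents = scored[:pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(a))   # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(len(child)):         # per-gene mutation
                    if random.random() < mutation_rate:
                        child[i] = random.choice(ACTIONS)
                children.append(child)
            population = parents + children
        return population[0]  # best candidate from the final evaluation

    In this sketch, case-injection would amount to seeding the initial population with build-orders captured from specific opponents, rather than generating every candidate randomly.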

    Complementary Companion Behavior in Video Games

    Companion characters are present in many video games across genres, serving as the player's partner. Their goal is to support the player's strategy and to immerse the player by providing a believable companion. These companions often perform only rigidly scripted actions and fail to adapt to an individual player's play-style, which detracts from their usefulness and can frustrate the player when the companions become more of a hindrance than a benefit. Other work, including this project's precursor, focused on building companions that mimic the player. Such strategies customize the companion's actions to each player, but they are limited: an ideal companion would further the player's strategy by finding complementary actions rather than blindly emulating the player. We propose a game-development framework that adds complementary (rather than mimicking) companions to a video game. For the purposes of this framework, a complementary action is any action that furthers the player's strategy both in the immediate future and in the long term. Complementarity is determined through a combination of player-action and game-state prediction, while allowing the companion to experiment with actions the player has not tried. We used a new method to determine the location of companion actions, based on a dynamic set of regions customized to the individual player. A user study of game-development students showed promising results, with seventeen out of twenty-five participants reacting positively to the companion behavior, and nineteen saying that they would consider using the framework in future games.
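
    The action-selection step such a framework implies might look like the following Python sketch. Here predict_player_action, predict_state, and utility are hypothetical stand-ins for the player-action prediction, game-state prediction, and strategy-scoring components the abstract mentions; this is not the framework's actual API.

    def choose_companion_action(state, candidate_actions,
                                predict_player_action, predict_state, utility,
                                horizon=3):
        # Score each candidate companion action by its predicted benefit to
        # the player's strategy, both immediately and over a short rollout.
        best_action, best_score = None, float("-inf")
        for action in candidate_actions:
            s = predict_state(state, predict_player_action(state), action)
            score = utility(s)                      # immediate benefit
            for _ in range(horizon - 1):            # approximate long-term benefit
                s = predict_state(s, predict_player_action(s), action)
                score += utility(s)
            if score > best_score:
                best_action, best_score = action, score
        return best_action

    Holding the companion action fixed across the rollout is a simplification; the point is that the companion ranks actions by predicted benefit to the player's strategy rather than by similarity to the player's own actions.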

    Learning by observation using Qualitative Spatial Relations

    We present an approach to the problem of learning by observation in spatially-situated tasks, whereby an agent learns to imitate the behaviour of an observed expert with no direct interaction and limited observations. The form of knowledge representation used for these observations is crucial, and we apply Qualitative Spatial-Relational representations to compress continuous, metric state-spaces into symbolic states, maximising the generalisability of learned models and minimising knowledge engineering. Our system self-configures these representations of the world to discover the configurations of features most relevant to the task, and thus builds good predictive models. We then show how these models can be employed by situated agents to control their behaviour, closing the loop from observation to practical implementation. We evaluate our approach in the simulated RoboCup Soccer domain and the Real-Time Strategy game Starcraft, and demonstrate that a system using our approach closely mimics the behaviour of both synthetic (AI-controlled) and human-controlled players through observation. We further evaluate our work on Reinforcement Learning tasks in these domains, and show that our approach improves the speed at which such models can be learned.
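
    The kind of abstraction involved can be sketched in Python as follows, assuming simple qualitative distance and direction calculi with illustrative thresholds; the paper's actual QSR calculi and its self-configuration of feature sets are more sophisticated.

    import math

    DISTANCE_BINS = [(5.0, "adjacent"), (20.0, "near"), (float("inf"), "far")]  # illustrative thresholds

    def qualitative_distance(a, b):
        d = math.dist(a, b)
        for threshold, label in DISTANCE_BINS:
            if d <= threshold:
                return label

    def qualitative_direction(a, b):
        # Bin the bearing from a to b into four qualitative directions.
        angle = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0])) % 360
        return ("east", "north", "west", "south")[int((angle + 45) % 360 // 90)]

    def symbolic_state(entities):
        # Compress a metric scene into pairwise symbolic relations, so models
        # learned from observation generalise across metric variation.
        relations = set()
        for name_a, pos_a in entities.items():
            for name_b, pos_b in entities.items():
                if name_a < name_b:
                    relations.add((name_a, name_b,
                                   qualitative_distance(pos_a, pos_b),
                                   qualitative_direction(pos_a, pos_b)))
        return frozenset(relations)

    # Example: three Starcraft entities reduced to a symbolic state.
    print(symbolic_state({"marine": (3.0, 4.0), "zergling": (10.0, 2.0), "base": (50.0, 40.0)}))

    Because symbolic_state discards exact coordinates, a behaviour model learned from one observed game can transfer to situations that differ metrically but are qualitatively the same.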

    Player Expectations of Strategy Game AI

    The behaviour of AI in modern strategy games is widely recognised as flawed. To compensate for this and successfully challenge humans, it must often be given significant advantages, such as luck bonuses, access to extra in-game resources, or knowledge of the entire game state. Players often proclaim their dislike of these flaws, discussing nonsensical moves AI opponents have made, or the fact that the AI ‘cheats’ out of necessity, since creating competent strategy game AI on consumer hardware is incredibly difficult even with state-of-the-art techniques. This thesis therefore asks: what frustrates players about the opponents, human and AI, that they play against? Answering this lets us establish the most efficient ways to improve player experience when facing AI opponents. To answer it, we explore the computer science that drives AI, the psychology that drives players, and the nature of game interactivity as a whole. Flaws in a range of popular strategy games were investigated, forming a grounded theory of how AI play typically annoys strategy game players: players expect their opponents to conform to a set of expectations. Two scenarios were then crafted for an existing strategy game, and a mix of qualitative and quantitative methods was used to evaluate how players’ experience of one of those expectations, tension, changes under different, controlled conditions. We find that tension can be observed, and that it is connected to both player uncertainty and perceptions of power. In addition, analysis of player experiences allowed the extraction of practical, concrete methods with which game developers can directly influence players’ experience of tension in-game. A further experiment clarifies that investment is also connected to tension, but that it is more effective to phrase it as ‘need’ when questioning players about their investment in a given objective; it also demonstrates that giving players too little information can sever the connection between perceived power and tension. Finally, we connect our findings to the current literature on player experience in games, and highlight where further work is needed.