
    Training Intelligent Agents in Unity Game Engine

    The goal of this work is to design applications that demonstrate the power of machine learning for creating artificial intelligence in video games. To achieve this, the work uses the ML-Agents toolkit, which allows the creation of intelligent agents in the Unity Game Engine. Each demonstration application targets a different usage scenario of the toolkit, and a series of experiments is presented showing the properties and flexibility of intelligent agents in several real-time scenarios. To train the agents, the toolkit uses reinforcement learning and imitation learning algorithms.
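
    The abstract does not say how the demonstration scenes are driven; purely as an illustrative sketch, the snippet below steps a Unity build through the ML-Agents low-level Python API (mlagents_envs). The build path "DemoScene" and the random policy are placeholder assumptions, not the thesis's setup.

        # Sketch: stepping a Unity ML-Agents environment from Python.
        # "DemoScene" is a placeholder build path; random actions stand in
        # for whatever trained or learning policy the thesis actually used.
        from mlagents_envs.environment import UnityEnvironment

        env = UnityEnvironment(file_name="DemoScene")  # file_name=None attaches to the Editor
        env.reset()

        behavior_name = list(env.behavior_specs)[0]   # first registered agent behaviour
        spec = env.behavior_specs[behavior_name]

        for _ in range(100):
            decision_steps, terminal_steps = env.get_steps(behavior_name)
            # One action per agent currently requesting a decision.
            action = spec.action_spec.random_action(len(decision_steps))
            env.set_actions(behavior_name, action)
            env.step()

        env.close()

    In practice, reinforcement or imitation learning with this toolkit is usually launched via the mlagents-learn command-line trainer rather than hand-rolled Python, but the loop above shows the environment interface the toolkit exposes.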

    Believability Assessment and Modelling in Video Games

    Artificial Intelligence remains one of the most sought-after subjects in computer science to this day. One of its applications, and the focus of this thesis, is the creation of believable agents for video games: agents that behave like us rather than simply attempting to win, whether that means cooperating or competing the way we do. Success in building more human-like characters can enhance immersion and enjoyment, potentially increasing a game's value and ultimately benefiting both industry and academia. Believability, however, is a hard concept to define: it depends on how and what one considers to be "believable", which is often very subjective. Developing believable agents therefore remains a much sought-after, albeit difficult, challenge. Approaches range from finite state machines and imitation learning to emotional models, with no single solution for creating a human-like agent.

    The same problem arises when attempting to assess these solutions. Assessing the believability of agents, characters, and simulated actors is itself a core challenge for human-like behaviour. While numerous approaches are suggested in the literature, no dominant evaluation method exists either, and assessment rarely receives as much attention as development or modelling. Mostly, it is treated as a necessary step in evaluating agents, with little regard for how the assessment process itself could affect the outcome of the evaluation.

    This thesis takes a different approach to developing believability and its assessment, exploring assessment first. In previous years, several researchers have tried to assess human-like behaviour in games through adaptations of the Turing Test applied to their agents. Given the limited diversity of the parameters explored in believability assessment, and the prior focus on programming the bots themselves, this thesis starts by exploring different parameters for evaluating believability in video games. The objective is to analyze the different ways believability can be assessed for both humans and non-player characters (NPCs), comparing how the results and scores of each are affected when the parameters change. The thesis also examines the concept of believability and its role in video games in general.

    Another aspect explored is the overall representation of believability. Past research has been limited to discrete, low-granularity representations of believable behaviour. This work is the first to view believability as a time-continuous phenomenon, exploring the suitability of two different affect annotation schemes for its assessment. These techniques are also compared to the previously used discrete methodologies, to understand how moment-to-moment assessment can contribute to them. In addition, the thesis studies the degree to which character believability can be predicted in a continuous fashion, by training random forest models to predict believability from annotations of the context extracted from a game.

    Only then does the thesis tackle development. Different solutions are combined into one, in a different order: the time-continuous data from people's assessments of believability is modelled and integrated into a game agent to affect its behaviour. This results in a final comparison between two agents, one using a believability-biased model and the other not, showing that biasing an agent's behaviour with assessment data can increase its overall believability.
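
    The abstract names random forests for continuous believability prediction but gives no implementation details; the following is a minimal sketch under assumed conditions, using scikit-learn with synthetic placeholder data in place of the thesis's game-context features and annotations.

        # Sketch: predicting moment-to-moment believability from game-context
        # features with a random forest. Features and targets below are
        # synthetic placeholders, not the thesis's data.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(seed=0)
        X = rng.random((1000, 8))   # per-frame context, e.g. speed, distances, animation state
        y = rng.random(1000)        # continuous believability annotation in [0, 1]

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        print(f"held-out R^2: {model.score(X_test, y_test):.3f}")

    A regressor is assumed here because the thesis treats believability as time-continuous; if the annotations were instead discrete labels, RandomForestClassifier would be the analogous choice.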