    Artificial Intelligence for Game Playing

    The focus of this work is the use of artificial intelligence methods for playing real-time strategy (RTS) games, in which all player interactions take place in real time (in parallel). The thesis deals mainly with the Q-learning machine-learning method, which is based on reinforcement learning and Markov decision processes. The practical part of the work is implemented for the game StarCraft: Brood War. The proposed solution, implemented within the rules of the SSCAIT tournament, learns an optimal build order of structures with respect to the opponent's playing style (strategy). Analysis and evaluation of the system are based on a comparison with other participants of the competition, as well as on a comparison of the system's behaviour before and after training on a set of played games.
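
    As an illustration of the approach the abstract describes, the following is a minimal Q-learning sketch for choosing a build order, in Python. All names (the toy state representation, the hypothetical build actions, and the simulate stand-in for a game step) are assumptions for illustration, not the thesis implementation.

    import random
    from collections import defaultdict

    ACTIONS = ["supply_depot", "barracks", "refinery", "factory"]  # hypothetical build actions

    def simulate(state, action):
        # Toy stand-in for one game step: rewards a fixed "good" opening.
        reward = 1.0 if state == "opening" and action == "supply_depot" else 0.0
        next_state = action  # summarise the state by the last structure built
        return next_state, reward

    def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
        # Q-table mapping (state, action) -> estimated return
        Q = defaultdict(float)
        for _ in range(episodes):
            state = "opening"  # hypothetical game-phase / opponent-style state
            for _ in range(20):  # bounded episode length
                # epsilon-greedy action selection
                if random.random() < epsilon:
                    action = random.choice(ACTIONS)
                else:
                    action = max(ACTIONS, key=lambda a: Q[(state, a)])
                next_state, reward = simulate(state, action)
                # standard one-step Q-learning update
                best_next = max(Q[(next_state, a)] for a in ACTIONS)
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q

    In the actual system the state would encode the opponent's observed strategy and the reward would come from game outcomes; here both are reduced to toy values so that the update rule itself is visible.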

    Learning Micro-Management Skills in RTS Games by Imitating Experts

    We investigate the problem of learning to control small groups of units in combat situations in Real Time Strategy (RTS) games. AI systems may acquire such skills by observing and learning from expert players, or from other AI systems performing those tasks. However, access to training data may be limited, and representations based on metric information (position, velocity, orientation, etc.) may be brittle, difficult for learning mechanisms to work with, and generalise poorly to new situations. In this work we apply qualitative spatial relations to compress such continuous, metric state-spaces into symbolic states, and show that this makes the learning problem easier and allows for more general models of behaviour. Models learnt from this representation are used to control situated agents and to imitate the observed behaviour of both synthetic (pre-programmed) agents and human-controlled agents on a number of canonical micro-management tasks. We show how a Monte-Carlo method can be used to decompress qualitative data back into quantitative data for practical use in our control system. We present our work applied to the popular RTS game StarCraft.
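
    The two key steps the abstract names, compressing metric state into qualitative spatial relations and Monte-Carlo decompression back to metric values, can be sketched as follows in Python. The three-symbol distance relation and its thresholds are assumptions chosen for illustration, not the paper's calculus.

    import math
    import random

    def qualitative_distance(p, q, near=3.0, far=10.0):
        # Compress a continuous distance into one of three symbols.
        d = math.dist(p, q)
        if d < near:
            return "near"
        if d < far:
            return "medium"
        return "far"

    def sample_metric(symbol, origin, near=3.0, far=10.0, max_r=20.0, budget=1000):
        # Monte-Carlo decompression: sample candidate points around origin
        # and keep the first whose qualitative relation matches the symbol.
        for _ in range(budget):
            r = random.uniform(0.0, max_r)
            theta = random.uniform(0.0, 2 * math.pi)
            candidate = (origin[0] + r * math.cos(theta),
                         origin[1] + r * math.sin(theta))
            if qualitative_distance(candidate, origin, near, far) == symbol:
                return candidate
        return None  # no matching sample within the budget

    # Abstract a unit's position relative to an enemy, then recover a
    # concrete position consistent with the same symbolic relation.
    unit, enemy = (1.0, 2.0), (2.0, 2.5)
    symbol = qualitative_distance(unit, enemy)   # -> "near"
    concrete = sample_metric(symbol, enemy)

    A learned controller can then operate entirely over symbols such as "near" and "far", with the sampler supplying concrete coordinates only when an action must be executed in the game.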