Online benchmark environment for multi-agent reinforcement learning

Abstract

The capability of acting (and winning) in games is often used in artificial intelligence as an indicator or measure of more general ability. However, as challenges escalate, notable efforts are forced to compromise due to technical limitations: interfaces of simulated environments can be inconsistently adapted for artificial agents, which introduces uncertainty into comparisons with humans. A review of selected works in the field of deep reinforcement learning in real-time strategy games highlights the need for a new benchmark environment, one which better emphasises the role of strategic elements by enabling more equivalent interfaces and is also suitable for experiments on distributed systems. The latter is realised as a team-based competitive game, in whose description specific technical and theoretical problems are examined through cases of imitation learning and reinforcement learning.
