
    CONCIENS: organizational awareness in real-time strategy games

    The implementation of AI in commercial games is usually based on low-level designs that make the control predictable, non-adaptive, and non-reusable. Recent algorithms such as HTN and GOAP show that higher levels of abstraction can be applied for better performance. We propose that approaches based on Organizational Theory can provide a sound alternative to these implementations. In this paper we present CONCIENS, an integration of the ALIVE organizational framework into commercial games. We introduce a proof-of-concept implementation based on the integration with Warcraft III.
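    The GOAP approach mentioned above plans at the level of goals rather than scripted actions. The following is a minimal illustrative sketch of goal-oriented action planning — not the CONCIENS/ALIVE implementation — where the world state is a dict of facts, every action name and precondition is hypothetical, and a breadth-first search finds an action sequence that satisfies the goal:

```python
from collections import deque

# Each action has preconditions that must hold and effects it applies.
# Action names and facts here are invented for illustration only.
ACTIONS = {
    "get_axe":     {"pre": {},                 "eff": {"has_axe": True}},
    "chop_wood":   {"pre": {"has_axe": True},  "eff": {"has_wood": True}},
    "build_house": {"pre": {"has_wood": True}, "eff": {"has_house": True}},
}

def satisfied(state, conds):
    return all(state.get(k) == v for k, v in conds.items())

def plan(start, goal):
    """Breadth-first search from the start state to any state meeting the goal."""
    queue = deque([(dict(start), [])])
    seen = {tuple(sorted(start.items()))}
    while queue:
        state, steps = queue.popleft()
        if satisfied(state, goal):
            return steps
        for name, act in ACTIONS.items():
            if satisfied(state, act["pre"]):
                nxt = {**state, **act["eff"]}
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append((nxt, steps + [name]))
    return None  # no plan reaches the goal

print(plan({}, {"has_house": True}))
# → ['get_axe', 'chop_wood', 'build_house']
```

    The point of the higher abstraction is that adding or removing an action changes the plans the agent can find without rewriting any control script.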

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Skilled Experience Catalogue: A Skill-Balancing Mechanism for Non-Player Characters using Reinforcement Learning

    In this paper, we introduce a skill-balancing mechanism for adversarial non-player characters (NPCs), called Skilled Experience Catalogue (SEC). The objective of this mechanism is to approximately match the skill level of an NPC to an opponent in real-time. We test the technique in the context of a First-Person Shooter (FPS) game. Specifically, the technique adjusts a reinforcement learning NPC's proficiency with a weapon based on its current performance against an opponent. Firstly, a catalogue of experience, in the form of stored learning policies, is built up by playing a series of training games. Once the NPC has been sufficiently trained, the catalogue acts as a timeline of experience with incremental knowledge milestones in the form of stored learning policies. If the NPC is performing poorly, it can jump to a later stage in the learning timeline to be equipped with more informed decision-making. Likewise, if it is performing significantly better than the opponent, it will jump to an earlier stage. The NPC continues to learn in real-time using reinforcement learning, but its policy is adjusted, as required, by loading the most suitable milestone for the current circumstances. Comment: IEEE Conference on Computational Intelligence and Games (CIG), August 201
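    The jump-forward/jump-backward mechanism described above can be sketched as follows. This is our own simplified reading of the idea, not the paper's implementation: the milestone names, the score-difference trigger, and the margin threshold are all assumptions.

```python
class SkillCatalogue:
    """Timeline of policy snapshots saved during training,
    ordered from least to most trained (illustrative sketch)."""

    def __init__(self, milestones):
        self.milestones = milestones
        self.index = len(milestones) // 2  # start at a mid-skill milestone

    def adjust(self, npc_score, opponent_score, margin=3):
        """Load a later (stronger) milestone when losing badly,
        an earlier (weaker) one when winning by a wide margin."""
        diff = npc_score - opponent_score
        if diff < -margin and self.index < len(self.milestones) - 1:
            self.index += 1   # jump forward in the learning timeline
        elif diff > margin and self.index > 0:
            self.index -= 1   # jump back to a less informed policy
        return self.milestones[self.index]

catalogue = SkillCatalogue(["novice", "intermediate", "expert"])
policy = catalogue.adjust(npc_score=2, opponent_score=10)
print(policy)  # → "expert": the NPC is losing badly, so load a stronger policy
```

    In the real system each milestone would be a full learning policy that the NPC continues to refine online, rather than a label.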

    Artificial intelligence approaches for the generation and assessment of believable human-like behaviour in virtual characters

    Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem nowadays. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell apart humans and non-human agents. Different environments, testing protocols, scopes, and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, set in a First-Person Shooter video game where both human players and non-player characters interact in complex virtual environments. Based on our past experience in both the BotPrize competition and other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA-CRANIUM and SOAR cognitive architectures, and another based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition [1], and have shown a significant improvement in the humanness ratio. Additionally, we have confronted all these bots with both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.

    Building quests for online games with virtual institutions

    This document describes how to re-purpose an existing agent technology called Virtual Institutions as a mechanism for defining new "quest" elements in Massively Multiplayer Online Games based on MultiAgent Systems. Quests are a very important part of most massive online games, as they guide the flow and narrative of the game in a linear or nonlinear manner.

    Evolving Agents using NEAT to Achieve Human-Like Play in FPS Games

    Artificial agents are commonly used in games to simulate human opponents. This allows players to enjoy games without requiring them to play online or with other players locally. Basic approaches tend to suffer from being unable to adapt strategies and often perform tasks in ways very few human players could ever achieve, which detracts from the immersion or realism of the gameplay. To achieve more human-like play, more advanced approaches are employed, either adapting to the player's ability level or making the agent play more as a human player can or would. Utilizing artificial neural networks evolved using the NEAT methodology, we attempt to produce agents to play an FPS-style game. The goal is to see if the approach produces well-playing agents with potentially human-like behaviors. We provide a large number of sensors and motors to the neural networks of a small population learning through co-evolution. Ultimately we find that the approach has limitations and is generally too slow for practical application, but it holds promise for future developments. Many extensions are presented which could improve the results and reduce training times. The agents learned to perform some basic tasks at a very rough level of skill, but were not competitive at even a beginner level.
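    The sensor-to-motor pipeline the evolved networks occupy can be sketched as below. This is a deliberate simplification under stated assumptions: the sensor and motor names are hypothetical, and the fixed single-layer network stands in for what NEAT actually evolves (NEAT also mutates topology, adding nodes and connections over generations).

```python
import math
import random

# Hypothetical game inputs and outputs; a real FPS agent would have many more.
SENSORS = ["enemy_visible", "enemy_distance", "health", "ammo"]
MOTORS = ["move_forward", "turn_left", "turn_right", "fire"]

def make_genome(rng):
    # One weight per sensor/motor pair. NEAT would evolve these weights
    # (and the network structure itself) by mutation and crossover.
    return [[rng.uniform(-1, 1) for _ in SENSORS] for _ in MOTORS]

def act(genome, readings):
    """Map sensor readings to motor commands, once per game tick."""
    inputs = [readings[s] for s in SENSORS]
    commands = []
    for motor, weights in zip(MOTORS, genome):
        total = sum(w * x for w, x in zip(weights, inputs))
        activation = 1.0 / (1.0 + math.exp(-total))  # sigmoid
        if activation > 0.5:  # trigger motors above a threshold
            commands.append(motor)
    return commands

rng = random.Random(0)
genome = make_genome(rng)
print(act(genome, {"enemy_visible": 1.0, "enemy_distance": 0.2,
                   "health": 0.9, "ammo": 0.5}))
```

    In co-evolution, fitness would come from matches against other members of the population rather than a fixed scripted opponent, which is part of why training is slow in practice.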