
    Affect and believability in game characters: a review of the use of affective computing in games

    Get PDF
    Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions
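
    To make the suggestion concrete, the sketch below shows one way an affect module could be layered on top of a basic competence layer in a modular NPC; the class names, emotion dimensions, and appraisal rules are illustrative assumptions, not part of MARPO or any reviewed system.

```python
from dataclasses import dataclass, field

@dataclass
class AffectState:
    """Hypothetical affect module: two scalar emotion dimensions."""
    valence: float = 0.0   # negative .. positive
    arousal: float = 0.0   # calm .. excited

    def update(self, event: str) -> None:
        # Toy appraisal: events nudge the emotional state.
        if event == "player_helped":
            self.valence = min(1.0, self.valence + 0.2)
        elif event == "player_attacked":
            self.valence = max(-1.0, self.valence - 0.3)
            self.arousal = min(1.0, self.arousal + 0.4)

@dataclass
class ModularNPC:
    """Competence layer picks a task; affect layer biases its expression."""
    affect: AffectState = field(default_factory=AffectState)

    def select_task(self, world: dict) -> str:
        # Basic competence: a fixed priority list (stand-in for a MARPO-style task stack).
        if world.get("under_attack"):
            return "defend"
        if world.get("player_nearby"):
            return "greet"
        return "patrol"

    def act(self, world: dict) -> str:
        task = self.select_task(world)
        # Affect only modulates *how* the task is performed, not which task is chosen.
        tone = "warm" if self.affect.valence > 0 else "curt"
        return f"{task} ({tone})"

npc = ModularNPC()
npc.affect.update("player_helped")
print(npc.act({"player_nearby": True}))  # -> "greet (warm)"
```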

    Agents for educational games and simulations

    Get PDF
    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications

    The Challenge of Believability in Video Games: Definitions, Agents Models and Imitation Learning

    Full text link
    In this paper, we address the problem of creating believable agents (virtual characters) in video games. We consider only one meaning of believability, "giving the feeling of being controlled by a player", and outline the problem of its evaluation. We present several models for agents in games which can produce believable behaviours, both from industry and research. For a high level of believability, learning, and especially imitation learning, seems to be the way to go. We give a quick overview of different approaches to making video game agents learn from players. To conclude, we propose a two-step method to develop new models for believable agents. First, we must find the criteria for believability for our application and define an evaluation method. Then the model and the learning algorithm can be designed
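
    As a small illustration of the imitation-learning direction advocated above, the sketch below drives an agent by looking up the action a player took in the most similar recorded state; the state features and nearest-neighbour choice are assumptions made for the example, not the authors' model.

```python
import math

# Recorded player traces: (state features, action) pairs.
# Features here are illustrative: (distance to enemy, own health).
traces = [
    ((2.0, 0.9), "attack"),
    ((1.5, 0.2), "flee"),
    ((8.0, 0.8), "explore"),
    ((7.5, 0.3), "find_health"),
]

def imitate(state, k=1):
    """Pick the action the player took in the most similar recorded state(s)."""
    ranked = sorted(traces, key=lambda t: math.dist(state, t[0]))
    votes = [action for _, action in ranked[:k]]
    return max(set(votes), key=votes.count)

print(imitate((1.7, 0.25)))  # -> "flee": mimics the player's choice when weak and close to an enemy
```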

    Improving Behavior of Computer Game Bots Using Fictitious Play

    Get PDF
    In modern computer games, "bots" (intelligent, realistic agents) play a prominent role in the popularity of a game in the market. Typically, bots are modeled using finite-state machines and then programmed via simple conditional statements which are hard-coded in the bots' logic. Since these bots have become quite predictable to an experienced game player, a player might lose interest in the game. We propose the use of a game-theoretic learning rule called fictitious play for improving the behavior of these computer game bots, which will make them less predictable and, hence, the game more enjoyable
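
    A minimal sketch of the fictitious-play idea: the bot keeps empirical counts of the opponent's past actions and best-responds to that empirical mixed strategy. The payoff matrix and action names are invented for illustration.

```python
from collections import Counter

# Illustrative payoff for the bot: PAYOFF[bot_action][opponent_action]
PAYOFF = {
    "rush":  {"rush": 0, "camp": 2, "snipe": -1},
    "camp":  {"rush": 1, "camp": 0, "snipe": -2},
    "snipe": {"rush": -1, "camp": 3, "snipe": 0},
}

opponent_counts = Counter()

def observe(opponent_action: str) -> None:
    """Record one observed opponent action."""
    opponent_counts[opponent_action] += 1

def best_response() -> str:
    """Best-respond to the empirical distribution of the opponent's play."""
    total = sum(opponent_counts.values()) or 1
    def expected(a):
        return sum(PAYOFF[a][o] * n / total for o, n in opponent_counts.items())
    return max(PAYOFF, key=expected)

print(best_response())            # arbitrary before any observations (ties broken by dict order)
observe("camp"); observe("camp")
print(best_response())            # -> "snipe": the action that exploits a camping opponent
```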

    Improving Computer Game Bots' behavior using Q-Learning

    Get PDF
    In modern computer video games, the quality of artificial characters plays a prominent role in the success of the game in the market. The aim of the intelligent techniques, termed game AI, used in these games is to provide an interesting and challenging game play to a game player. Being highly sophisticated, these games present game developers with similar kinds of requirements and challenges as those faced by the academic AI community. The game companies claim to use sophisticated game AI to model artificial characters such as computer game bots, intelligent realistic AI agents. However, these bots work via simple routines pre-programmed to suit the game map, game rules, game type, and other parameters unique to each game. Mostly, illusory intelligent behaviors are programmed using simple conditional statements and are hard-coded in the bots' logic. Moreover, a game programmer has to spend considerable time configuring crisp inputs for these conditional statements. Therefore, we realize a need for machine learning techniques to dynamically improve bots' behavior and save precious computer programmers' man-hours. So, we selected Q-learning, a reinforcement learning technique, to evolve dynamic intelligent bots, as it is a simple, efficient, and online learning algorithm. Machine learning techniques such as reinforcement learning are known to be intractable if they use a detailed model of the world, and also require tuning of various parameters to give satisfactory performance. Therefore, for this research we opt to examine Q-learning for evolving a few basic behaviors, viz. learning to fight and planting the bomb, for computer game bots. Furthermore, we experimented on how bots would use knowledge learned from abstract models to evolve their behavior in a more detailed model of the world. Bots evolved using these techniques would become more pragmatic, believable, and capable of showing human-like behavior. This will provide a more realistic feel to the game and provide game programmers with an efficient learning technique for programming these bots
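
    A minimal tabular Q-learning sketch in the spirit of the behaviours described (learning to fight, planting the bomb); the states, actions, reward values, and hyperparameters are illustrative assumptions, not the thesis's actual game model.

```python
import random
from collections import defaultdict

ACTIONS = ["move_to_site", "plant_bomb", "fight", "retreat"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = defaultdict(float)   # Q[(state, action)] -> estimated value, defaults to 0.0

def choose(state: str) -> str:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative transition: the bot plants the bomb and receives a positive reward.
s, a = "at_bomb_site", "plant_bomb"
learn(s, a, reward=1.0, next_state="bomb_planted")
print(Q[(s, a)])   # 0.1 after a single update
```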

    COMBINED ARTIFICIAL INTELLIGENCE BEHAVIOUR SYSTEMS IN SERIOUS GAMING

    Get PDF
    This thesis proposes a novel methodology for creating Artificial Agents with semi-realistic behaviour, with such behaviour defined as overcoming common limitations of mainstream behaviour systems: rapidly switching between actions, ignoring “obvious” event priorities, etc. Behaviour in these Agents is not fully realistic as some limitations remain; Agents have “perfect” knowledge of the surrounding environment, and an inability to transfer knowledge to other Agents (no communication). The novel methodology is achieved by hybridising existing Artificial Intelligence (AI) behaviour systems. In most artificial agents (Agents), behaviour is created using a single behaviour system, whereas this work combines several systems in a novel way to overcome the limitations of each. A further proposal is the separation of behavioural concerns into the behaviour systems best suited to their needs, as well as a biologically inspired memory system that further aids in the production of semi-realistic behaviour. Current behaviour systems are often inherently limited, and in this work it is shown that by combining systems that are complementary to each other, these limitations can be overcome without the need for a workaround. This work examines in detail Belief-Desire-Intention systems as well as Finite State Machines, and explores how these methodologies can complement each other when combined appropriately. By combining these systems, a hybrid system is proposed that is both fast to react and simple to maintain, separating behaviours into fast-reaction (instinctual) and slow-reaction (behavioural) concerns and assigning each to the most appropriate system. Computational intelligence learning techniques such as Artificial Neural Networks have been intentionally avoided, as these techniques commonly present their data in a “black box” system, whereas this work aims to make knowledge explicitly available to the user. A biologically inspired memory system has further been proposed in order to generate additional behaviours in Artificial Agents, such as behaviour related to forgetfulness. This work explores how humans can quickly recall information while still being able to store millions of pieces of information, and how this can be achieved in an artificial system
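
    A compressed sketch of the hybrid idea: a finite-state “instinctual” layer runs first and can pre-empt a slower BDI-flavoured deliberative layer. The beliefs, desires, and thresholds are assumptions made for the example, not the thesis's design.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HybridAgent:
    beliefs: dict = field(default_factory=dict)                # shared world knowledge
    desires: list = field(default_factory=lambda: ["gather_food", "build_shelter"])
    intention: Optional[str] = None                            # current committed goal

    # --- instinctual layer: a tiny finite-state check that always runs first ---
    def instinct(self) -> Optional[str]:
        if self.beliefs.get("threat_nearby"):
            return "flee"
        if self.beliefs.get("health", 1.0) < 0.2:
            return "seek_cover"
        return None

    # --- behavioural layer: slower, goal-driven (BDI-style) selection ---
    def deliberate(self) -> str:
        if self.intention is None and self.desires:
            self.intention = self.desires.pop(0)   # commit to the next desire
        return self.intention or "idle"

    def step(self) -> str:
        # Instinctual reactions take priority over deliberative behaviour.
        return self.instinct() or self.deliberate()

agent = HybridAgent()
agent.beliefs["health"] = 0.9
print(agent.step())                       # -> "gather_food" (deliberative)
agent.beliefs["threat_nearby"] = True
print(agent.step())                       # -> "flee" (instinct overrides the intention)
```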

    Curious Negotiator

    Full text link
    In negotiation, the exchange of information is as important as the exchange of offers. The curious negotiator is a multiagent system with three types of agents. Two negotiation agents, each representing an individual, develop consecutive offers, supported by information, whilst requesting information from their opponent. A mediator agent, with experience of prior negotiations, suggests how the negotiation may develop. A failed negotiation is a missed opportunity. An observer agent analyses failures looking for new opportunities. The integration of negotiation theory and data mining enables the curious negotiator to discover and exploit negotiation opportunities. Trials will be conducted in electronic business
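
    A toy rendering of the offer-plus-information exchange described above, with a stand-in for the mediator's suggestion and a record kept for the observer agent; the concession strategy and numbers are illustrative assumptions, not the system's actual design.

```python
def negotiate(buyer_limit: float, seller_limit: float, rounds: int = 10):
    """Two agents exchange consecutive offers, each supported by a piece of information.
    A mediator-style heuristic suggests a midpoint once the remaining gap is small."""
    buyer_offer, seller_offer = buyer_limit * 0.5, seller_limit * 1.5
    history = []
    for r in range(rounds):
        # Each side concedes a little and attaches supporting information.
        buyer_offer = min(buyer_limit, buyer_offer * 1.05)
        seller_offer = max(seller_limit, seller_offer * 0.95)
        history.append({"round": r, "gap": seller_offer - buyer_offer})
        if seller_offer <= buyer_offer:
            return "deal", round((buyer_offer + seller_offer) / 2, 2)
        if history[-1]["gap"] < 0.1 * buyer_limit:
            # Mediator suggestion (stand-in for "experience of prior negotiations").
            return "mediated deal", round((buyer_offer + seller_offer) / 2, 2)
    # A failed negotiation is kept so an observer agent could mine it for missed opportunities.
    return "failure", history

print(negotiate(buyer_limit=100.0, seller_limit=70.0))  # -> ('mediated deal', 71.84)
```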

    Strategic negotiation and trust in diplomacy - the DipBlue approach

    Get PDF
    The study of games in Artificial Intelligence has a long tradition. Game playing has been a fertile environment for the development of novel approaches to build intelligent programs. Multi-agent systems (MAS), in particular, are a very useful paradigm in this regard, not only because multi-player games can be addressed using this technology, but most importantly because social aspects of agenthood that have been studied for years by MAS researchers can be applied in the attractive and controlled scenarios that games convey. Diplomacy is a multi-player strategic zero-sum board game, including as main research challenges an enormous search tree, the difficulty of determining the real strength of a position, and the accommodation of negotiation among players. Negotiation abilities bring along other social aspects, such as the need to perform trust reasoning in order to win the game. The majority of existing artificial players (bots) for Diplomacy do not exploit the strategic opportunities enabled by negotiation, focusing instead on search and heuristic approaches. This paper describes the development of DipBlue, an artificial player that uses negotiation in order to gain advantage over its opponents, through the use of peace treaties, formation of alliances and suggestion of actions to allies. A simple trust assessment approach is used as a means to detect and react to potential betrayals by allied players. DipBlue was built to work with DipGame, a MAS testbed for Diplomacy, and has been tested with other players of the same platform and variations of itself. Experimental results show that the use of negotiation increases the performance of bots involved in alliances, when full trust is assumed. In the presence of betrayals, being able to perform trust reasoning is an effective approach to reduce their impact. © Springer-Verlag Berlin Heidelberg 2015
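
    A minimal sketch of the kind of trust bookkeeping the paper describes, where trust in an ally rises slowly with cooperation, drops sharply on betrayal, and weights the value of that ally's suggestions; the constants and API are illustrative assumptions, not DipBlue's implementation.

```python
class TrustModel:
    """Per-player trust value, nudged by observed behaviour."""
    def __init__(self, initial: float = 1.0):
        self.trust = {}          # player name -> trust value
        self.initial = initial

    def get(self, player: str) -> float:
        return self.trust.setdefault(player, self.initial)

    def record(self, player: str, betrayed: bool) -> None:
        t = self.get(player)
        # Betrayals are punished much harder than cooperation is rewarded.
        self.trust[player] = max(0.0, t * 0.5) if betrayed else min(2.0, t + 0.1)

    def weight_suggestion(self, player: str, base_value: float) -> float:
        """Scale the value of an ally's suggested action by how much we trust them."""
        return base_value * self.get(player)

trust = TrustModel()
trust.record("France", betrayed=False)
trust.record("Germany", betrayed=True)
print(trust.weight_suggestion("France", 10.0))   # 11.0: a trusted ally's advice counts more
print(trust.weight_suggestion("Germany", 10.0))  # 5.0: a betrayer's advice is discounted
```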

    Agent-Oriented Methodology for Designing 3D Animated Characters

    Get PDF
    Agent Oriented Methodology (AOM) has been used as an alternative tool for modelling the production of 3D animated characters. Besides allowing strong engagement between production team members, the agent models also drive effective communication among them. This paper explores the adoption of AOM to model the cognitive capability of 3D animated characters. We extend AOM and demonstrate how it can be used to model a BDI (Belief/Desire/Intention) cognitive architecture for 3D animated characters in a fire fighting and evacuation scenario. The contribution of this work is that it turns AOM into a detailed design tool for a 3D production team. Although AOM can serve as an engagement tool among various stakeholders, we further showcase the use of AOM as a tool for production design and development
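
    To make the mapping from an agent-oriented model to an executable character concrete, here is a small BDI-flavoured perceive/deliberate/act loop for an evacuation scene; the beliefs, desires, and plan names are illustrative, not taken from the paper's models.

```python
from typing import Optional

class EvacuationCharacter:
    """BDI-style character for a fire-evacuation scene."""
    def __init__(self):
        self.beliefs = {"fire_visible": False, "exit_known": True, "location": "office"}
        self.desires = ["stay_safe", "help_others"]
        self.intention: Optional[str] = None

    def perceive(self, percept: dict) -> None:
        self.beliefs.update(percept)            # revise beliefs from the scene

    def deliberate(self) -> None:
        # Pick an intention consistent with current beliefs and desires.
        if self.beliefs["fire_visible"] and "stay_safe" in self.desires:
            self.intention = "evacuate_via_exit" if self.beliefs["exit_known"] else "search_for_exit"
        elif "help_others" in self.desires:
            self.intention = "guide_colleagues"

    def act(self) -> str:
        # In a 3D production this would trigger an animation or locomotion plan.
        return self.intention or "idle"

c = EvacuationCharacter()
c.perceive({"fire_visible": True})
c.deliberate()
print(c.act())   # -> "evacuate_via_exit"
```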