
    Game Development using Panda 3D Game Engine

    This paper explores the features of the Panda3D game engine and the AI algorithms used in creating games. We propose the A* algorithm for use in game development and compare its merits and demerits with those of other pathfinding algorithms. We describe the importance of AI in games, explain how the A* algorithm works, and show how to implement it in Python. DOI: 10.17762/ijritcc2321-8169.15022
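As a sketch of the algorithm this abstract describes, the following is a minimal A* implementation in Python over a 4-connected grid with a Manhattan-distance heuristic. The function name and grid encoding are illustrative assumptions, not the paper's code:

```python
import heapq

def astar(grid, start, goal):
    """A* search over a 4-connected grid; grid[y][x] == 1 marks an obstacle.
    Returns the shortest path as a list of (x, y) cells, or None."""
    def h(p):
        # Manhattan distance: admissible (never overestimates) on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f = g + h, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None  # goal unreachable
```

The heuristic is what distinguishes A* from uniform-cost search: it steers expansion toward the goal, which is why the abstract favors it over blind pathfinding alternatives.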

    Natively Implementing Deep Reinforcement Learning into a Game Engine

    Artificial intelligence (AI) increases the immersion that players can experience while playing games. Modern game engines, the middleware used to create games, implement simple AI behaviors that developers can use. Advanced AI behaviors must be implemented manually by game developers, which decreases the likelihood of their adoption due to development overhead. A custom game engine and a custom AI architecture handling deep reinforcement learning were designed and implemented. Snake was created using the custom game engine to test the feasibility of natively implementing an AI architecture into a game engine. A snake agent was successfully trained using the AI architecture, although the learned behavior was suboptimal. Even so, the fact that a behavior was learned at all shows that the AI architecture was successfully integrated into the custom game engine.
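The paper's engine and network architecture are custom, but at the core of deep reinforcement learning is the temporal-difference (Q-learning) update, which a deep Q-network approximates with a neural network instead of a table. A minimal tabular sketch of that update, with all names illustrative:

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step on a table Q mapping (state, action) -> value.
    alpha is the learning rate, gamma the discount factor."""
    # Value of the best action available from the next state
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    # Move the current estimate toward the bootstrapped target
    td_target = r + gamma * best_next
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q[(s, a)]
```

In a Snake agent, `s` would encode features such as food direction and nearby obstacles, and `actions` the four movement directions; a deep variant replaces the dictionary lookup with a network forward pass.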

    Dynamic Difficulty Adjustment

    One of the challenges that a computer game developer faces when creating a new game is getting the difficulty right. Providing a game with the ability to automatically scale the difficulty to the current player would make games more engaging over a longer time. In this work we aim for a dynamic difficulty adjustment algorithm that can be used as a black box: universal, nonintrusive, and with guarantees on its performance. While there are a few commercial games that boast about having such a system, as well as a few published results on this topic, to the best of our knowledge none of them satisfy all three of these properties. On the way to our destination we first consider a game as an interaction between a player and her opponent. In this context, assuming their goals are mutually exclusive, difficulty adjustment consists of tuning the skill of the opponent to match the skill of the player. We propose a way to estimate the latter and adjust the former based on ranking the moves available to each player. Two sets of empirical experiments demonstrate the power, but also the limitations, of this approach. Most importantly, the assumptions we make restrict the class of games it can be applied to. Looking for universality, we drop the constraints on the types of games we consider. We rely on the power of supervised learning and use the data collected from game testers to learn models of difficulty adjustment, as well as a mapping from game traces to models. Given a short game trace, the corresponding model tells the game what difficulty adjustment should be used. Using a self-developed game, we show that the predicted adjustments match players' preferences. The quality of the difficulty models depends on the quality of existing training data. The desire to dispense with the need for it leads us to the last approach.
We propose a formalization of dynamic difficulty adjustment as a novel learning problem in the context of online learning and provide an algorithm to solve it, together with an upper bound on its performance. We show empirical results obtained in simulation and in two qualitatively different games with human participants. Due to its general nature, this algorithm can indeed be used as a black box for dynamic difficulty adjustment: it is applicable to any game with various difficulty states; it does not interfere with the player's experience; and it has a theoretical guarantee on how many mistakes it can possibly make.
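The authors' black-box algorithm comes with a provable mistake bound; the following is only a naive illustration of the general setting it formalizes, stepping through discrete difficulty states based on observed game outcomes. It is not the paper's algorithm, and the state names are illustrative:

```python
def adjust_difficulty(level, player_won, levels):
    """Naive outcome-driven adjustment: step one difficulty state up
    after a player win, one state down after a loss, clamped to the
    available range. Real online-learning DDA bounds its mistakes;
    this sketch does not."""
    i = levels.index(level)
    i = min(i + 1, len(levels) - 1) if player_won else max(i - 1, 0)
    return levels[i]
```

The key property the abstract claims, and this sketch lacks, is a guarantee on how many mismatched difficulty settings the algorithm can serve before converging on the player's skill.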

    Qualitative and Quantitative Solution Diversity in Heuristic-Search and Case-Based Planning

    Planning is a branch of Artificial Intelligence (AI) concerned with projecting courses of action for executing tasks and reaching goals. AI Planning helps increase the autonomy of artificially intelligent agents and decrease the cognitive load burdening human planners working in challenging domains, such as the Mars exploration projects. Approaches to AI planning include first-principles heuristic search planning and case-based planning. The former conducts a heuristic-guided search in the solution space, while the latter generates new solutions by adapting solutions to previously-solved problems. The ability to generate not just one solution, but a set of meaningfully diverse solutions to each planning problem helps cater to a wider variety of user preferences and needs (which it may be difficult or even infeasible to acquire and/or represent in their entirety), produce viable alternative courses of action to fall back on in case of failure, counter varied threats in intrusion detection, render computer games more compelling, and provide representative samples of the vast search spaces of planning problems. This work describes a general framework for generating diverse sets of solutions (i.e. courses of action) to planning problems. The general diversity-aware planning algorithm consists of iteratively generating solutions using a composite candidate-solution evaluation criterion taking into account both how promising the candidate solutions appear in their own right and how likely they are to increase the overall diversity of the final set of solutions. This estimate of diversity is based on distance metrics, i.e. measures of the dissimilarity between two solutions. Distance metrics can be quantitative or qualitative. Quantitative distance measures are domain-independent. They require minimum knowledge engineering, but may not reflect dissimilarities that are truly meaningful.
Qualitative distance metrics are domain-specific and reflect, based on the domain knowledge encoded within them, the kind of meaningful dissimilarities that might be identified by a person familiar with the domain. Based on the general framework for diversity-aware planning, three domain-independent planning algorithms have been implemented and are described and evaluated herein. DivFF is a diverse heuristic search planner for deterministic planning domains (i.e. domains for which the assumption is made that any action can only have one possible outcome). DivCBP is a diverse case-based planner, also for deterministic planning domains. DivNDP is a heuristic search planner for nondeterministic planning domains (i.e. domains whose descriptions include actions with multiple possible outcomes). The experimental evaluation of the three algorithms is conducted on a computer game domain, chosen for its challenging characteristics, which include nondeterminism and dynamism. The generated courses of action are run in the game in order to ascertain whether they affect the game environment in diverse ways. This constitutes the test of their genuine diversity, which cannot be evaluated accurately based solely on their low-level structure. It is shown that all proposed planning systems successfully generate sets of diverse solutions using varied criteria for assessing solution dissimilarity. Qualitatively-diverse solution sets are demonstrated to consistently produce more diverse effects in the game environment than quantitatively-diverse solution sets. A comparison between the two planning systems for deterministic domains, DivCBP and DivFF, reveals the former to be more successful at consistently generating diverse sets of solutions. The reasons for this are investigated, thus contributing to the literature of comparative studies of first-principles and case-based planning approaches.
Finally, an application of diversity in planning is showcased: simulating personality-trait variation in computer game characters. Sets of diverse solutions to both deterministic and nondeterministic planning problems are shown to successfully create diverse character behavior in the evaluation environment.
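The composite candidate-evaluation criterion the abstract describes (a solution's own promise plus its contribution to set diversity under a distance metric) can be sketched as a greedy selection loop. Here `quality` stands in for the planner's heuristic value and `distance` for either a quantitative or qualitative metric; both are caller-supplied assumptions, not the DivFF/DivCBP/DivNDP internals:

```python
def select_diverse(candidates, quality, distance, k, lam=1.0):
    """Greedily build a set of k solutions. Each step scores a candidate
    by its own quality plus lam times its minimum distance to the
    solutions already chosen, so later picks trade quality for novelty."""
    chosen = [max(candidates, key=quality)]  # seed with the best solution
    while len(chosen) < k:
        rest = [c for c in candidates if c not in chosen]
        chosen.append(max(
            rest,
            key=lambda c: quality(c) + lam * min(distance(c, s) for s in chosen),
        ))
    return chosen
```

Raising `lam` emphasizes diversity over individual quality; a qualitative metric simply plugs domain knowledge into `distance` without changing the loop.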

    Effective Team Strategies using Dynamic Scripting

    Forming effective team strategies using heterogeneous agents to accomplish a task can be a challenging problem. The number of combinations of actions to look through can be enormous, and having an agent that is very good at a particular sub-task is no guarantee that agent will perform well on a team with members with different abilities. Dynamic Scripting has been shown to be an effective way of improving behaviours with adaptive game AI. We present an approach that modifies the scripting process to account for the other agents in a game. By analyzing an agent's allies and opponents we can create better starting scripts for the agents to use. Creating better starting points for the Dynamic Scripting process will minimize the number of iterations needed to learn effective strategies, creating a better overall gaming experience.
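Dynamic Scripting itself maintains a weight per behavior rule, raises the weights of rules used in successful encounters, lowers them after failures, and redistributes the difference so the total stays constant. A minimal sketch of that credit-assignment step, where the fitness scale and update constants are illustrative assumptions (the paper's contribution, seeding better starting weights from ally/opponent analysis, would set the initial contents of `weights`):

```python
def update_weights(weights, used, fitness, baseline=0.5,
                   min_w=0.0, max_w=100.0):
    """One Dynamic Scripting update. weights maps rule -> weight;
    used lists the rules fired this encounter; fitness in [0, 1]
    scores the outcome relative to a neutral baseline."""
    delta = (fitness - baseline) * 10.0  # reward above baseline, punish below
    total_before = sum(weights.values())
    for r in used:
        weights[r] = min(max_w, max(min_w, weights[r] + delta))
    # Redistribute the surplus/deficit over unused rules to keep the
    # total weight (and hence rule-selection pressure) constant
    unused = [r for r in weights if r not in used]
    if unused:
        share = (total_before - sum(weights.values())) / len(unused)
        for r in unused:
            weights[r] = min(max_w, max(min_w, weights[r] + share))
    return weights
```

Scripts are then assembled by sampling rules with probability proportional to their weights, so better seeded weights directly shorten the adaptation loop the abstract targets.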

    Adaptive Agent Architectures in Modern Virtual Games

    Get PDF
    Ph.D. thesis (Doctor of Philosophy)

    Effective and Diverse Adaptive Game AI

    No full text available