
    Optimization of Units Movement in Turn-Based Strategy Game

    Each game includes an artificial intelligence opponent that fights the player and provides additional challenge, but in some strategy games unit movement is decided by simple considerations such as remaining unit health, unit strength, and so forth. In this study, a turn-based strategy game is designed in which a genetic algorithm controls the movement of the enemy armies. In each turn, the enemy moves based on the potential damage dealt to and received from the opponent, the distance between the units, and the distance to the opponent's building. The genetic algorithm's chromosome for each unit contains the position to which the unit will move, the unit's target, and the distance to the armies' centroid; the distance to the centroid (midpoint) is used to keep the units grouped. The genetic algorithm decides when and where the units move or attack. The test results show that the genetic algorithm creates a stronger enemy than a randomly moving one: it gives the enemy units a higher chance of winning and acts more efficiently in terms of money spent, damage dealt to the opponent, and damage received.
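
    The chromosome and fitness described above map naturally onto a small genetic algorithm. The sketch below is purely illustrative and assumes details the abstract does not give: a 10x10 grid, hand-picked unit and enemy positions, and invented weights for damage dealt, exposure, distance to the opponent's building, and cohesion around the centroid.

```python
import random

# Illustrative values only; grid size, positions and weights are not from the paper.
GRID = 10
UNITS = [(1, 1), (2, 3), (0, 2)]        # our army's current cells (hypothetical)
ENEMIES = [(8, 8), (9, 5)]              # opposing units (hypothetical)
ENEMY_BASE = (9, 9)                     # opponent's building (hypothetical)

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def random_gene():
    # One gene per unit: the cell to move to and the index of the enemy to target.
    return ((random.randrange(GRID), random.randrange(GRID)),
            random.randrange(len(ENEMIES)))

def fitness(chrom):
    # Balance potential damage dealt, exposure to damage, distance to the
    # opponent's building, and distance to the army centroid (cohesion).
    cx = sum(p[0] for p, _ in chrom) / len(chrom)
    cy = sum(p[1] for p, _ in chrom) / len(chrom)
    score = 0.0
    for pos, target in chrom:
        score += 3.0 / (1 + dist(pos, ENEMIES[target]))          # damage out
        score -= 1.0 / (1 + min(dist(pos, e) for e in ENEMIES))  # damage in
        score -= 0.1 * dist(pos, ENEMY_BASE)                     # push forward
        score -= 0.2 * dist(pos, (cx, cy))                       # stay grouped
    return score

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.2):
    return [random_gene() if random.random() < rate else g for g in chrom]

def evolve(pop_size=30, generations=40):
    pop = [[random_gene() for _ in UNITS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:pop_size // 5]                        # keep the best fifth
        while len(next_gen) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)      # parents from top half
            next_gen.append(mutate(crossover(a, b)))
        pop = next_gen
    return max(pop, key=fitness)

print(evolve())   # per-unit orders: (cell to move to, enemy index to attack)
```

    In a full game the fitness terms would be recomputed each turn from the actual unit statistics rather than from fixed positions as in this toy.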

    Developing Artificial Intelligence Agents for a Turn-Based Imperfect Information Game

    Artificial intelligence (AI) is often employed to play games, whether to entertain human opponents, devise and test strategies, or obtain other analytical data. Games with hidden information require specific approaches by the player. As a result, the AI must be equipped with methods of operating without certain important pieces of information while being aware of the resulting potential dangers. The computer game GNaT was designed as a testbed for AI strategies dealing specifically with imperfect information. Its development and functionality are described, and the results of testing several strategies through AI agents are discussed.

    Traditional Wisdom and Monte Carlo Tree Search Face-to-Face in the Card Game Scopone

    We present the design of a competitive artificial intelligence for Scopone, a popular Italian card game. We compare rule-based players using the most established strategies (one for beginners and two for advanced players) against players using Monte Carlo Tree Search (MCTS) and Information Set Monte Carlo Tree Search (ISMCTS) with different reward functions and simulation strategies. MCTS requires complete information about the game state and thus implements a cheating player, while ISMCTS can deal with incomplete information and thus implements a fair player. Our results show that, as expected, the cheating MCTS outperforms all the other strategies; ISMCTS is stronger than all the rule-based players implementing the well-known and most advanced strategies, and it also turns out to be a challenging opponent for human players. Comment: Preprint. Accepted for publication in the IEEE Transactions on Games.
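
    ISMCTS handles hidden information by sampling, on every iteration, a determinization of the game state consistent with the searching player's information set, and by aggregating statistics on nodes that stand for information sets rather than fully observed states. The sketch below follows the general single-observer ISMCTS structure, not the authors' code, and assumes a game-state interface (clone_and_randomize, legal_moves, do_move, is_terminal, result, player_to_move) that is not part of the paper.

```python
import math
import random

class Node:
    """One information-set node; statistics are shared across determinizations."""
    def __init__(self, move=None, parent=None, just_moved=None):
        self.move, self.parent, self.just_moved = move, parent, just_moved
        self.children, self.wins, self.visits, self.avail = [], 0.0, 0, 1

    def untried(self, legal):
        tried = {c.move for c in self.children}
        return [m for m in legal if m not in tried]

    def select(self, legal, c=0.7):
        # UCB1 restricted to children whose moves are legal in this determinization.
        usable = [ch for ch in self.children if ch.move in legal]
        for ch in usable:
            ch.avail += 1
        return max(usable, key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(ch.avail) / ch.visits))

def ismcts(root_state, iterations=1000):
    root = Node()
    for _ in range(iterations):
        node = root
        # 1. Determinize: resample hidden information (e.g. the unseen hands)
        #    consistently with what the root player has observed.
        state = root_state.clone_and_randomize(root_state.player_to_move)
        # 2. Selection down the tree of information sets.
        while not state.is_terminal() and not node.untried(state.legal_moves()):
            node = node.select(state.legal_moves())
            state.do_move(node.move)
        # 3. Expansion of one untried move.
        untried = [] if state.is_terminal() else node.untried(state.legal_moves())
        if untried:
            mover, move = state.player_to_move, random.choice(untried)
            state.do_move(move)
            node.children.append(Node(move, node, mover))
            node = node.children[-1]
        # 4. Simulation: random playout to the end of the hand.
        while not state.is_terminal():
            state.do_move(random.choice(state.legal_moves()))
        # 5. Backpropagation of the result (e.g. 1 for a win, 0 otherwise)
        #    from the perspective of the player who moved into each node.
        while node is not None:
            node.visits += 1
            if node.just_moved is not None:
                node.wins += state.result(node.just_moved)
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

    A cheating MCTS player of the kind compared in the abstract would simply skip the determinization step and search the true, fully observed state.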

    ANN for Tic-Tac-Toe Learning

    This research trains an Artificial Neural Network (ANN) to play the tic-tac-toe board game. The ANN learns the game's logic from the set of mathematical combinations of sequences that can be played, using the Gradient Descent Algorithm explicitly and Elimination theory rules implicitly. The resulting system should be able to produce suitable combinations for every state in the course of a game, improving its results in terms of winning or reaching a draw.
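
    As a rough illustration of the kind of gradient-descent training loop alluded to (not the paper's network or data), the sketch below fits a one-hidden-layer network to a hand-made board-scoring heuristic; the architecture, the heuristic target, and the hyperparameters are all invented for the example.

```python
import numpy as np

# Boards are 9-vectors: +1 for X, -1 for O, 0 for empty.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def heuristic(board):
    # Toy target: completed X lines minus completed O lines.
    score = 0
    for a, b, c in LINES:
        s = board[a] + board[b] + board[c]
        score += (s == 3) - (s == -3)
    return float(score)

rng = np.random.default_rng(0)
X = rng.choice([-1, 0, 1], size=(500, 9)).astype(float)   # random (not necessarily legal) boards
y = np.array([heuristic(b) for b in X])

# One hidden layer of 16 tanh units, trained by plain batch gradient descent.
W1 = rng.normal(0, 0.1, (9, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.01

for epoch in range(200):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()             # predicted board score
    err = pred - y
    # Backpropagate the squared-error loss and take one gradient step.
    gW2 = h.T @ err[:, None] / len(X); gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```

    A move-selection variant would instead output nine values and pick the highest-scoring legal cell.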

    Approximating n-player behavioural strategy Nash equilibria using coevolution

    Get PDF
    Coevolutionary algorithms are plagued by a set of problems related to intransitivity that make it questionable what the end product of a coevolutionary run can achieve. The introduction of solution concepts into coevolution alleviated part of the issue; however, efficiently representing and achieving game-theoretic solution concepts is still not a trivial task. In this paper we propose a coevolutionary algorithm that approximates behavioural-strategy Nash equilibria in n-player zero-sum games by exploiting the minimax solution concept. To support our case we provide a set of experiments on games with both known and unknown equilibria. In the case of known equilibria, we confirm that our algorithm converges to the known solution, while in the case of unknown equilibria we see steady progress towards the Nash equilibrium. Copyright 2011 ACM
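
    A minimal, purely illustrative sketch of the underlying idea (not the paper's algorithm, which targets behavioural strategies in extensive-form games): two populations of mixed strategies for rock-paper-scissors coevolve, and each individual's fitness is its worst-case payoff against the opposing population, i.e. the minimax solution concept. Population size, mutation noise and generation count are arbitrary.

```python
import numpy as np

PAYOFF = np.array([[0., -1., 1.],
                   [1., 0., -1.],
                   [-1., 1., 0.]])   # payoff to the row player (rock, paper, scissors)

rng = np.random.default_rng(42)

def random_strategy():
    p = rng.random(3)
    return p / p.sum()

def mutate(p, sigma=0.1):
    q = np.clip(p + rng.normal(0.0, sigma, 3), 1e-6, None)
    return q / q.sum()

def worst_case(p, opponents, row_player):
    # Minimax fitness: the lowest expected payoff p can guarantee
    # against any member of the opposing population.
    if row_player:
        return min(float(p @ PAYOFF @ q) for q in opponents)
    return min(float(-(q @ PAYOFF @ p)) for q in opponents)

rows = [random_strategy() for _ in range(20)]
cols = [random_strategy() for _ in range(20)]

for _ in range(300):
    rows.sort(key=lambda p: worst_case(p, cols, True), reverse=True)
    cols.sort(key=lambda p: worst_case(p, rows, False), reverse=True)
    # Keep the best half of each population, refill with mutated copies of survivors.
    rows = rows[:10] + [mutate(rows[rng.integers(10)]) for _ in range(10)]
    cols = cols[:10] + [mutate(cols[rng.integers(10)]) for _ in range(10)]

# The surviving strategies should drift toward the uniform equilibrium (1/3, 1/3, 1/3).
print(np.round(rows[0], 3), np.round(cols[0], 3))
```

    Because the game is zero-sum, maximising one's worst-case payoff is exactly the minimax criterion, which is what ties the coevolutionary fitness to the Nash solution concept.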