
    Automated Playtesting in Collectible Card Games using Evolutionary Algorithms: a Case Study in HearthStone

    Collectible card games have been among the most popular and profitable products of the entertainment industry since the early days of Magic: The Gathering™ in the nineties. Digital versions have also appeared, with HearthStone: Heroes of WarCraft™ being one of the most popular. In Hearthstone, every player plays as one of nine heroes and builds a deck before the game from a large pool of available cards, including both neutral and hero-specific cards. This kind of game offers several challenges for researchers in artificial intelligence, since it involves hidden information, unpredictable behaviour, and a large and rugged search space. Moreover, an important part of player engagement in such games is the periodic introduction of new cards into the system, which opens the door to new strategies for the players. Playtesting is the method used to check new card sets for possible design flaws, and it is usually performed manually or via exhaustive search; in the case of Hearthstone, such test plays must take into account the chosen hero, with its specific set of cards. In this paper, we present a novel approach to improve and accelerate the playtesting process by systematically exploring the space of possible decks with an Evolutionary Algorithm (EA). This EA creates HearthStone decks, which are then played by an AI against established human-designed decks. Since the space of possible card combinations to play-test is huge, the search through the space of possible decks is shortened via a new heuristic mutation operator based on the behaviour of human players modifying their decks. Results show the viability of our method for exploring the space of possible decks and automating the play-testing phase of game design. The resulting decks, which have been examined for balance by an expert player, outperform human-made ones when played by the AI; the introduction of the new heuristic operator helps to improve the obtained solutions, and basing the study on the whole set of heroes demonstrates its validity across the whole range of decks.
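
    As a rough illustration of the approach the abstract describes, the following Python sketch shows the overall shape of such an EA: a population of decks, a toy fitness stub standing in for the AI-vs-AI matches, and a card-swap mutation loosely mirroring the paper's heuristic operator. The card pool, deck encoding, parameters and scoring are all placeholder assumptions for illustration, not the authors' implementation.

    import random

    # Placeholder card pool and deck size; the real Hearthstone pool is
    # hero-specific and far richer (these numbers are assumptions).
    CARD_POOL = list(range(500))
    DECK_SIZE = 30
    POP_SIZE = 20
    GENERATIONS = 50

    def random_deck():
        return random.sample(CARD_POOL, DECK_SIZE)

    def fitness(deck):
        # Stand-in for the paper's evaluation, where an AI plays the deck
        # against established human-designed decks to estimate a win rate.
        return sum(deck) / (DECK_SIZE * max(CARD_POOL))  # toy score only

    def heuristic_mutation(deck, n_swaps=2):
        # Sketch of a card-swap mutation inspired by how human players tune
        # decks: replace a few cards rather than rebuilding from scratch.
        deck = deck[:]
        for _ in range(n_swaps):
            replacement = random.choice([c for c in CARD_POOL if c not in deck])
            deck[random.randrange(DECK_SIZE)] = replacement
        return deck

    def evolve():
        population = [random_deck() for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            elite = population[:POP_SIZE // 2]  # truncation selection
            offspring = [heuristic_mutation(random.choice(elite)) for _ in elite]
            population = elite + offspring
        return max(population, key=fitness)

    if __name__ == "__main__":
        print("best deck:", sorted(evolve()))

    The key design choice mirrored here is that mutation perturbs an existing deck by a few cards, the way human players iterate on a list, rather than resampling whole decks, which keeps the search local in a huge combinatorial space.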

    A benchmark for cooperative coevolution

    Cooperative co-evolution algorithms (CCEAs) are a thriving sub-field of evolutionary computation. This class of algorithms makes it possible to exploit the artificial Darwinist scheme more efficiently whenever an optimisation problem can be turned into a co-evolution of interdependent sub-parts of the searched solution. Testing the efficiency of new CCEA concepts, however, is not straightforward: while there is a rich literature of benchmarks for more traditional evolutionary techniques, the same does not hold true for this relatively new paradigm. We present a benchmark problem designed to study the behavior and performance of CCEAs, modeling a search for the optimal placement of a set of lamps inside a room. The relative complexity of the problem can be adjusted by operating on a single parameter. The fitness function is a trade-off between conflicting objectives, so the performance of an algorithm can be examined using different metrics. We show how three different cooperative strategies, Parisian Evolution, Group Evolution and Allopatric Group Evolution, can be applied to the problem. Using a Classical Evolution approach as a comparison, we analyse the behavior of each algorithm in detail with respect to the size of the problem.
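
    To make the benchmark concrete, here is a minimal Python sketch of a plausible lamp-placement fitness under assumed details: lamps light circular discs in a unit-square room, coverage is estimated on a sampling grid, and a per-lamp penalty supplies the conflicting objective, with the lamp radius playing the role of the single complexity parameter. The paper's exact formulation may differ; every constant below is illustrative.

    import random

    # Assumed setup: unit-square room sampled on a GRID x GRID lattice.
    RADIUS = 0.2   # illumination radius of each lamp (complexity knob)
    GRID = 50      # sampling resolution for estimating coverage
    WEIGHT = 0.05  # penalty per lamp used (the conflicting objective)

    def coverage(lamps):
        # Fraction of sampled room points lit by at least one lamp.
        lit = 0
        for i in range(GRID):
            for j in range(GRID):
                x, y = (i + 0.5) / GRID, (j + 0.5) / GRID
                if any((x - lx) ** 2 + (y - ly) ** 2 <= RADIUS ** 2
                       for lx, ly in lamps):
                    lit += 1
        return lit / GRID ** 2

    def fitness(lamps):
        # Trade-off: maximise lit area while penalising the lamp count.
        return coverage(lamps) - WEIGHT * len(lamps)

    if __name__ == "__main__":
        lamps = [(random.random(), random.random()) for _ in range(10)]
        print(f"coverage={coverage(lamps):.3f}  fitness={fitness(lamps):.3f}")

    A decomposition like this suits cooperative co-evolution naturally: each lamp (or group of lamps) can evolve as a sub-part whose worth only emerges in combination with the others, which is exactly the interdependence the benchmark is meant to stress.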