Critical Analysis of Decision Making Experience with a Machine Learning Approach in Playing Ayo Game
The major goal in defining and examining game scenarios is to find good strategies as solutions to the game. A plausible solution is a recommendation to the players on how to play the game, represented as strategies guided by the various choices available to them. These choices invariably compel the players (decision makers) to execute an action following some conscious tactic. In this paper, we propose a refinement-based heuristic as a machine learning technique for human-like decision making in playing the Ayo game. The results show that our machine learning technique is more adaptable and more responsive in decision making than human intelligence. The technique has the advantage that the search is conducted astutely in a shallow-horizon game tree. Our simulation was tested against the Awale shareware, and an appealing result was obtained.
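The abstract does not spell out the refinement-based heuristic itself. As background, a "shallow horizon game tree" search is conventionally a depth-limited alpha-beta search with a heuristic evaluation at the cut-off frontier; a minimal sketch follows, using a toy subtraction game (take 1-3 stones, last stone wins) as a stand-in, not Ayo and not the paper's heuristic:

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Depth-limited alpha-beta: search a shallow horizon, then fall back
    to a heuristic evaluation of the frontier position."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state, maximizing)
    if maximizing:
        value = float("-inf")
        for m in ms:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                # prune: the minimizer avoids this branch
        return value
    value = float("inf")
    for m in ms:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break                    # prune: the maximizer avoids this branch
    return value

# Toy stand-in game: take 1-3 stones; taking the last stone wins.
nim_moves = lambda s: [m for m in (1, 2, 3) if m <= s]
nim_apply = lambda s, m: s - m

def nim_eval(state, maximizing):
    if state == 0:
        return -1 if maximizing else 1   # the player to move has just lost
    return 0                             # unknown at the cut-off horizon
```

With a deep enough horizon this solves the toy game exactly; with a shallow depth the frontier heuristic stands in for the unexplored subtree, which is the trade-off the abstract alludes to.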
A Survey of Monte Carlo Tree Search Methods
Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
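The core loop the survey organizes its material around has four steps: selection, expansion, simulation (a random rollout), and backpropagation. A minimal UCT sketch is below; the toy Nim game (take 1-3 stones, last stone wins) and the exploration constant are illustrative stand-ins, not from the survey:

```python
import math
import random

class Node:
    """One node of the UCT search tree for a toy Nim game."""
    def __init__(self, state, player, parent=None, move=None):
        self.state = state            # stones remaining
        self.player = player          # player to move at this node (0 or 1)
        self.parent = parent
        self.move = move              # move that led here
        self.children = []
        self.untried = [m for m in (1, 2, 3) if m <= state]
        self.visits = 0
        self.wins = 0.0               # wins for the player who moved into this node

    def uct_child(self, c=1.4):
        # Selection: trade off exploitation (win rate) and exploration.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def rollout(state, player):
    """Simulation: play uniformly random moves to the end; return the winner."""
    while True:
        state -= random.choice([m for m in (1, 2, 3) if m <= state])
        if state == 0:
            return player
        player = 1 - player

def mcts(root_state, root_player, iterations=3000):
    root = Node(root_state, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.state - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        # 3. Simulation (terminal nodes already know their winner).
        winner = (1 - node.player) if node.state == 0 else rollout(node.state, node.player)
        # 4. Backpropagation: credit wins to the player who moved into each node.
        while node is not None:
            node.visits += 1
            if winner != node.player:   # the mover into `node` is 1 - node.player
                node.wins += 1
            node = node.parent
    # Recommend the most-visited root move.
    return max(root.children, key=lambda ch: ch.visits).move
```

From a pile of 5 the winning move is to take 1 (leaving a multiple of 4), which the loop converges to without any game-specific evaluation function, the property that made MCTS attractive for Go.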
Improving Search with Supervised Learning in Trick-Based Card Games
In trick-taking card games, a two-step process of state sampling and evaluation is widely used to approximate move values. While the evaluation component is vital, the accuracy of move value estimates is also fundamentally linked to how well the sampling distribution corresponds to the true distribution. Despite this, recent work in trick-taking card game AI has mainly focused on improving evaluation algorithms, with limited work on improving sampling. In this paper, we focus on the effect of sampling on the strength of a player and propose a novel method of sampling more realistic states given the move history. In particular, we use predictions about the locations of individual cards, made by a deep neural network trained on data from human gameplay, in order to sample likely worlds for evaluation. This technique, used in conjunction with Perfect Information Monte Carlo (PIMC) search, provides a substantial increase in cardplay strength in the popular trick-taking card game of Skat.
Comment: Accepted for publication at AAAI-1
Developing Artificial Intelligence Agents for a Turn-Based Imperfect Information Game
Artificial intelligence (AI) is often employed to play games, whether to entertain human opponents, devise and test strategies, or obtain other analytical data. Games with hidden information require specific approaches by the player. As a result, the AI must be equipped with methods of operating without certain important pieces of information while being aware of the resulting potential dangers. The computer game GNaT was designed as a testbed for AI strategies dealing specifically with imperfect information. Its development and functionality are described, and the results of testing several strategies through AI agents are discussed.