13 research outputs found

    Intelligent strategy for two-person non-random perfect information zero-sum game.

    Tong Kwong-Bun. Thesis submitted in December 2002. Thesis (M.Phil.) --- Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 77-[80]). Abstracts in English and Chinese.
    Contents:
    Chapter 1 --- Introduction: An Overview; Tree Search (Minimax Algorithm, The Alpha-Beta Algorithm, Alpha-Beta Enhancements, Selective Search); Construction of Evaluation Function; Contribution of the Thesis; Structure of the Thesis
    Chapter 2 --- The Probabilistic Forward Pruning Framework: Introduction; The Generalized Probabilistic Forward Cuts Heuristic; The GPC Framework (The Alpha-Beta Algorithm, The NegaScout Algorithm, The Memory-enhanced Test Algorithm); Summary
    Chapter 3 --- The Fast Probabilistic Forward Pruning Framework: Introduction; The Fast GPC Heuristic (The Alpha-Beta Algorithm, The NegaScout Algorithm, The Memory-enhanced Test Algorithm); Performance Evaluation (Determination of the Parameters, Result of Experiments); Summary
    Chapter 4 --- The Node-Cutting Heuristic: Introduction; Move Ordering (Quality of Move Ordering); Node-Cutting Heuristic; Performance Evaluation (Determination of the Parameters, Result of Experiments); Summary
    Chapter 5 --- The Integrated Strategy: Introduction; Combination of GPC, FGPC and Node-Cutting Heuristic; Performance Evaluation; Summary
    Chapter 6 --- Conclusions and Future Works: Conclusions; Future Works
    Appendix A --- Examples; Appendix B --- The Rules of Chinese Checkers; Appendix C --- Application to Chinese Checkers; Bibliography
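
    The thesis layers probabilistic forward pruning (GPC/FGPC) and a node-cutting heuristic on top of standard alpha-beta search. As a rough illustration of the general idea only (the cut-probability model, depth condition and threshold below are invented for this sketch and are not the thesis's actual GPC formulation), a speculative forward cut can be grafted onto negamax alpha-beta roughly like this:

        import math

        def prob_cut(child, alpha, evaluate, scale=50.0):
            """Toy cut-probability model: a logistic function of how far the child's
            static evaluation (viewed from the parent) falls below alpha.
            Purely illustrative; not the thesis's GPC model."""
            margin = alpha - (-evaluate(child))      # child's value from the parent's point of view
            x = max(-60.0, min(60.0, -margin / scale))
            return 1.0 / (1.0 + math.exp(x))

        def alpha_beta_pfc(state, depth, alpha, beta, evaluate, children, threshold=0.95):
            """Negamax alpha-beta with a speculative probabilistic forward cut.
            evaluate(state) scores a position for the side to move and children(state)
            generates successors; both are assumed helpers, not defined here."""
            succ = children(state)
            if depth == 0 or not succ:
                return evaluate(state)
            best = -math.inf
            for child in succ:
                # Forward cut: skip deep search of children that a cheap estimate
                # says are very unlikely to raise alpha.
                if depth >= 3 and prob_cut(child, alpha, evaluate) > threshold:
                    continue
                score = -alpha_beta_pfc(child, depth - 1, -beta, -alpha,
                                        evaluate, children, threshold)
                best = max(best, score)
                alpha = max(alpha, score)
                if alpha >= beta:                    # ordinary beta cutoff
                    break
            return best if best > -math.inf else evaluate(state)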

    On forward pruning in game-tree search

    Doctor of Philosophy (Ph.D.) thesis.

    Learning to Play Othello with N-Tuple Systems

    This paper investigates the use of n-tuple systems as position value functions for the game of Othello. The architecture is described, and then evaluated for use with temporal difference learning. Performance is compared with previously developed weighted piece counters and multi-layer perceptrons. The n-tuple system is able to defeat the best performing of these after just five hundred games of self-play learning. The conclusion is that n-tuple networks learn faster and better than the other, more conventional approaches.
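
    As a loose sketch of how an n-tuple position value function can be represented and trained with temporal difference learning (the board encoding, tuple layout, table sizes and learning rate below are illustrative assumptions, not the paper's exact setup):

        import random

        N_TUPLES, TUPLE_LEN, N_STATES = 12, 6, 3   # tuple count/length are assumptions; squares are empty/black/white

        # Each n-tuple is a fixed set of board squares with its own lookup table of weights.
        tuples = [random.sample(range(64), TUPLE_LEN) for _ in range(N_TUPLES)]
        weights = [[0.0] * (N_STATES ** TUPLE_LEN) for _ in range(N_TUPLES)]

        def index(board, squares):
            """Map the contents of a tuple's squares (board is a list of 64 ints in {0,1,2})
            to an index into that tuple's weight table."""
            idx = 0
            for sq in squares:
                idx = idx * N_STATES + board[sq]
            return idx

        def value(board):
            """Position value: the sum of the table entries selected by each tuple."""
            return sum(weights[t][index(board, tuples[t])] for t in range(N_TUPLES))

        def td_update(board, next_board, reward, lr=0.01):
            """TD(0): move the active table entries toward the one-step bootstrapped target."""
            error = (reward + value(next_board)) - value(board)
            for t in range(N_TUPLES):
                weights[t][index(board, tuples[t])] += lr * error

    In practice, n-tuple systems for Othello typically also share each tuple's lookup table across the board's eight symmetries; that detail is omitted here to keep the sketch short.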

    Selective search in games of different complexity


    Learning search decisions


    Preference Learning for Move Prediction and Evaluation Function Approximation in Othello

    This paper investigates the use of preference learning as an approach to move prediction and evaluation function approximation, using the game of Othello as a test domain. Using the same sets of features, we compare our approach with least-squares temporal difference learning, direct classification, and the Bradley-Terry model fitted using minorization-maximization (MM). The results show that the exact way in which preference learning is applied is critical to achieving high performance. The best results were obtained using a combination of board inversion and pair-wise preference learning. This combination significantly outperformed the others under test, both in terms of move prediction accuracy and in the level of play achieved when using the learned evaluation function as a move selector during game play.
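
    As a hedged sketch of the pair-wise preference-learning component (a logistic loss on score differences over a linear move evaluator; the data layout, feature representation and hyper-parameters are assumptions for illustration, not the paper's exact method):

        import math

        def train_pairwise(preference_data, n_features, lr=0.05, epochs=10):
            """Learn a linear move-scoring weight vector from pair-wise preferences.
            preference_data is a list of (preferred_features, [other_features, ...]) pairs,
            each feature vector a list of length n_features; all names here are illustrative."""
            w = [0.0] * n_features

            def score(f):
                return sum(wi * fi for wi, fi in zip(w, f))

            for _ in range(epochs):
                for preferred, others in preference_data:
                    for other in others:
                        # Logistic loss on the score difference: the move the expert chose
                        # should out-score each alternative move from the same position.
                        diff = score(preferred) - score(other)
                        diff = max(-30.0, min(30.0, diff))   # clamp for numerical safety
                        p = 1.0 / (1.0 + math.exp(-diff))
                        step = 1.0 - p                       # descent step that decreases -log(p)
                        for i in range(n_features):
                            w[i] += lr * step * (preferred[i] - other[i])
            return w

    For move prediction, each training pair would typically oppose the move actually played in an expert game to every other legal move in the same position, and the learned weights then rank candidate moves or serve as an evaluation function over the resulting positions.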