12 research outputs found

    Evaluating Go Game Records for Prediction of Player Attributes

    We propose a way of extracting and aggregating per-move evaluations from sets of Go game records. The evaluations capture different aspects of the games, such as played patterns or statistics of sente/gote sequences. Using machine learning algorithms, the evaluations can be utilized to predict relevant target variables. We apply this methodology to predict the strength and playing style of the player (e.g. territoriality or aggressiveness) with good accuracy. We propose a number of possible applications, including aiding Go study, seeding real-world ranks of internet players, and tuning Go-playing programs.
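    The aggregate-then-predict pipeline described above can be sketched in miniature. This is a hedged illustration, not the paper's method: the features (mean and spread of per-move evaluations) and the rank centroids are hypothetical stand-ins for its richer feature set.

```python
from statistics import mean, pstdev

def aggregate(per_move_evals):
    """Collapse a set of per-move evaluations into a fixed-length
    feature vector (here just mean and spread; hypothetical features)."""
    return (mean(per_move_evals), pstdev(per_move_evals))

def nearest_centroid(features, centroids):
    """Predict the label whose centroid is closest in feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy assumption: stronger players make fewer and smaller errors per move.
centroids = {"dan": (0.1, 0.05), "kyu": (0.4, 0.2)}
f = aggregate([0.08, 0.12, 0.11, 0.09])
print(nearest_centroid(f, centroids))  # → dan
```

    Any per-game statistic (pattern frequencies, sente/gote counts) could be appended to the feature tuple in the same way before handing the vectors to a standard classifier.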

    A Dynamical Systems Approach for Static Evaluation in Go

    In this paper, arguments are given for why the concept of static evaluation has the potential to be a useful extension to Monte Carlo tree search. A new concept of modeling static evaluation through a dynamical system is introduced, and its strengths and weaknesses are discussed. The general suitability of this approach is demonstrated.
    Comment: IEEE Transactions on Computational Intelligence and AI in Games, vol 3 (2011), no

    Strategic Features for General Games

    This short paper describes an ongoing research project that requires the automated self-play learning and evaluation of a large number of board games in digital form. We describe the approach we are taking to determine relevant features for biasing MCTS playouts in arbitrary games played on arbitrary geometries. Benefits of our approach include efficient implementation, the potential to transfer learnt knowledge to new contexts, and the potential to explain strategic knowledge embedded in features in human-comprehensible terms.
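    A common way features bias MCTS playouts is to sum each legal move's learnt feature weights and sample from the resulting softmax. The sketch below assumes that scheme; the feature names and weights are invented for illustration.

```python
import math
import random

def biased_playout_move(legal_moves, weights, temperature=1.0):
    """Sample one playout move with probability proportional to
    exp(score / T), where score sums the learnt weights of the move's
    features (a softmax-biased playout policy)."""
    scores = [sum(weights.get(f, 0.0) for f in feats) / temperature
              for _, feats in legal_moves]
    m = max(scores)  # subtract the max for numerical stability
    probs = [math.exp(s - m) for s in scores]
    r = random.random() * sum(probs)
    for (move, _), p in zip(legal_moves, probs):
        r -= p
        if r <= 0:
            return move
    return legal_moves[-1][0]

# Hypothetical features: self-atari is penalised, connecting is favoured.
weights = {"connects": 1.5, "self_atari": -3.0}
moves = [("A1", ["self_atari"]), ("C3", ["connects"])]
random.seed(0)
print(biased_playout_move(moves, weights))
```

    Because the features are only a bias, the playout remains stochastic: low-scoring moves are still played occasionally, which keeps playouts diverse.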

    Computing Elo Ratings of Move Patterns in the Game of Go

    Move patterns are an essential means of incorporating domain knowledge into Go-playing programs. This paper presents a new Bayesian technique for supervised learning of such patterns from game records, based on a generalization of Elo ratings. Each sample move in the training data is considered as a victory of a team of pattern features. Elo ratings of individual pattern features are computed from these victories, and can be used in previously unseen positions to compute a probability distribution over legal moves. In this approach, several pattern features may be combined without an exponential cost in the number of features. Despite a very small number of training games (652), this algorithm outperforms most previous pattern-learning algorithms, both in terms of mean log-evidence (−2.69) and prediction rate (34.9%). A 19x19 Monte-Carlo program improved with these patterns reached the level of the strongest classical programs.
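    The team-of-features idea maps onto the generalized Bradley-Terry model that underlies Elo ratings: a team's strength is the product of its members' gammas, with gamma = 10^(Elo/400), and a move's probability is its team strength over the sum across all legal moves. A minimal sketch, with hypothetical pattern-feature names and ratings:

```python
def team_gamma(features, elo):
    """Strength of a 'team' of pattern features: the product of
    per-feature gammas, where gamma = 10 ** (Elo / 400)."""
    g = 1.0
    for f in features:
        g *= 10 ** (elo[f] / 400)
    return g

def move_distribution(legal_moves, elo):
    """Probability of each legal move: its team strength divided by
    the total strength of all competing teams (Bradley-Terry)."""
    gammas = {m: team_gamma(feats, elo) for m, feats in legal_moves.items()}
    z = sum(gammas.values())
    return {m: g / z for m, g in gammas.items()}

# Hypothetical ratings for illustrative pattern features.
elo = {"hane": 120, "atari": 80, "edge": -60}
moves = {"A": ["hane", "atari"], "B": ["edge"], "C": ["atari"]}
probs = move_distribution(moves, elo)
assert abs(sum(probs.values()) - 1.0) < 1e-9
print(max(probs, key=probs.get))  # → A
```

    Because team strength is a product of per-feature terms, combining several features never enumerates feature combinations, which is the source of the non-exponential cost noted in the abstract.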

    Extracting tactics learned from self-play in general games

    Local, spatial state-action features can be used to effectively train linear policies from self-play in a wide variety of board games. Such policies can play games directly, or be used to bias tree search agents. However, the resulting feature sets can be large, with significant overlap and redundancy between features. This is a problem for two reasons. Firstly, large feature sets can be computationally expensive, which reduces the playing strength of agents based on them. Secondly, redundancies and correlations between features impair the ability of humans to analyse, interpret, or understand tactics learned by the policies. We look to decision trees for their ability to perform feature selection and to serve as interpretable models. Previous work on distilling policies into decision trees uses states as inputs, and distributions over the complete action space as outputs. In contrast, we propose and evaluate a variety of decision tree types, which take state-action pairs as inputs and provide various types of outputs on a per-action basis. An empirical evaluation over 43 different board games is presented, and two of those games are used as case studies where we attempt to interpret the discovered features.
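    The state-action-pair formulation can be sketched as follows: one small tree scores each candidate (state, action) pair independently, and the policy plays the best-scoring action. The tree, its binary features, and the leaf scores below are invented for illustration, not the paper's learned trees.

```python
# A node is (feature_name, subtree_if_false, subtree_if_true);
# a leaf is a score for the state-action pair being evaluated.
TREE = ("creates_atari",
        ("on_edge", 0.1, 0.6),      # branch taken when creates_atari is False
        ("is_capture", 0.7, 0.9))   # branch taken when creates_atari is True

def evaluate(tree, features):
    """Walk the tree using one state-action pair's binary features
    and return the leaf score reached."""
    if not isinstance(tree, tuple):
        return tree
    name, if_false, if_true = tree
    return evaluate(if_true if features[name] else if_false, features)

def greedy_action(candidates):
    """Score each (action, features) pair independently; play the best."""
    return max(candidates, key=lambda af: evaluate(TREE, af[1]))[0]

moves = [
    ("D4", {"creates_atari": False, "on_edge": False, "is_capture": False}),
    ("Q16", {"creates_atari": True, "on_edge": False, "is_capture": True}),
]
print(greedy_action(moves))  # → Q16
```

    Scoring per action, rather than emitting a distribution over the whole action space, is what lets the same compact tree generalise across games with different move sets.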

    Mind over machine : what Deep Blue taught us about chess, artificial intelligence, and the human spirit

    Thesis (S.M. in Science Writing)--Massachusetts Institute of Technology, Dept. of Humanities, Graduate Program in Science Writing, 2007. "September 2007." Includes bibliographical references (leaves 44-49).
    On May 11th, 1997, the world watched as IBM's chess-playing computer Deep Blue defeated world chess champion Garry Kasparov in a six-game match. The reverberations of that contest touched people, and computers, around the world. At the time, it was difficult to assess the historical significance of the moment, but ten years after the fact, we can take a fresh look at the meaning of the computer's victory. With hindsight, we can see how Deep Blue impacted the chess community and influenced the fields of philosophy, artificial intelligence, and computer science in the long run. For the average person, Deep Blue embodied many of our misgivings about computers becoming our new partners in the information age. For researchers in the field, it was emblematic of the growing pains experienced by the evolving field of AI over the previous half century. In the end, what might have seemed like a definitive, earth-shattering event was really the next step in our ongoing journey toward understanding mind and machine. While Deep Blue was a milestone - the end of a long struggle to build a masterful chess machine - it was also a jumping-off point for other lines of inquiry, from new supercomputing projects to the further development of programs that play other games, such as Go. Ultimately, the lesson of Deep Blue's victory is that we will continue to accomplish technological feats we thought impossible just a few decades before. And as we reach each new goalpost, we will acclimate to our new position, recognize the next set of challenges before us, and push on toward the next target.
    by Barbara Christine Hoekenga. S.M. in Science Writing.