1,143 research outputs found

    How I won the "Chess Ratings - Elo vs the Rest of the World" Competition

    Full text link
    This article discusses in detail the rating system that won the Kaggle competition "Chess Ratings: Elo vs the Rest of the World". The competition provided a historical dataset of chess game outcomes and aimed to discover whether novel approaches could predict the outcomes of future games more accurately than the well-known Elo rating system. The winning rating system, called Elo++ in the rest of the article, builds upon the Elo rating system. Like Elo, Elo++ uses a single rating per player and predicts the outcome of a game by applying a logistic curve to the difference in the players' ratings. The major component of Elo++ is a regularization technique that avoids overfitting these ratings. The dataset of chess games and outcomes is relatively small, and one has to be careful not to draw "too many conclusions" from the limited data. Many approaches tested in the competition showed signs of such overfitting. The leaderboard was dominated by entries that did very well on a small test dataset but could not generalize to the private hold-out dataset. The Elo++ regularization takes into account the number of games per player, the recency of these games, and the ratings of the opponents. Finally, Elo++ employs a stochastic gradient descent scheme for training the ratings and uses only two global parameters (white's advantage and the regularization constant), which are optimized using cross-validation.
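
    A minimal sketch of the kind of model the abstract describes: a logistic prediction over the rating difference with a white-advantage term, trained by stochastic gradient descent with a regularizer that pulls each rating toward its opponents' mean. The function names and the exact form of the regularizer are illustrative assumptions, not the precise Elo++ formulation.

    ```python
    import math
    import random

    def predict(r_white, r_black, white_adv):
        """Probability that white wins: logistic curve over the rating difference."""
        return 1.0 / (1.0 + math.exp(-(r_white + white_adv - r_black)))

    def sgd_epoch(games, ratings, white_adv, lr=0.01, lam=0.1):
        """One SGD pass. `games` holds (white, black, score) with score in {0, 0.5, 1}.
        The regularizer (illustrative, not the exact Elo++ term) shrinks each
        player's rating toward the mean rating of their opponents."""
        opp_ratings = {}
        for w, b, _ in games:
            opp_ratings.setdefault(w, []).append(ratings[b])
            opp_ratings.setdefault(b, []).append(ratings[w])
        opp_mean = {p: sum(v) / len(v) for p, v in opp_ratings.items()}
        random.shuffle(games)
        for w, b, score in games:
            p = predict(ratings[w], ratings[b], white_adv)
            grad = p - score  # gradient of the log-loss w.r.t. the rating difference
            ratings[w] -= lr * (grad + lam * (ratings[w] - opp_mean[w]))
            ratings[b] -= lr * (-grad + lam * (ratings[b] - opp_mean[b]))
        return ratings
    ```

    In the competition setting described above, the white-advantage term and the regularization constant would be the two global parameters tuned by cross-validation.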

    When Are We Done with Games?

    Get PDF

    Phoenix-Chess strategy or revisiting the algorithm for playing in Chess with incomplete information

    Full text link
    We present a new insight into, or a revisiting of, the algorithm for playing chess with incomplete information (referred to by its short name, the Phoenix-Chess strategy). The only difference with respect to the classical variant of chess is that a rook, after being captured by an enemy piece during play, is not eliminated from the current game; instead, it is assumed to be under virtual repair for the next N moves (the required value of N is discussed in this research). Afterwards, the rook is reintroduced into the game within at most N moves, provided the square on which it was previously captured has not been occupied on the preceding move. Phoenix-Chess can therefore be classified as a game without a predictable planning horizon, and so it should be considered a chess-like game with incomplete information.
    Comment: 14 pages, 5 figures; Keywords: chess-like games, Phoenix-Chess strategy, incomplete information. arXiv admin note: text overlap with arXiv:2009.04374 by another author
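
    A minimal sketch of the rule as described above: a captured rook enters a repair queue for N moves and is returned to its capture square once the timer expires, provided that square is free. The data structures and the exact reintroduction timing are illustrative assumptions based on the abstract, not the authors' formal specification.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class RepairingRook:
        square: str        # square where the rook was captured, e.g. "a1"
        owner: str         # "white" or "black"
        moves_left: int    # remaining repair time, in moves

    @dataclass
    class PhoenixState:
        occupied: set = field(default_factory=set)    # squares currently occupied
        repairing: list = field(default_factory=list)

        def capture_rook(self, square: str, owner: str, n: int) -> None:
            """Instead of removing the rook, schedule it for repair for n moves."""
            self.repairing.append(RepairingRook(square, owner, n))

        def advance_move(self) -> list:
            """Tick repair timers; reintroduce rooks whose timer has expired and
            whose capture square is free (assumption: otherwise they keep waiting)."""
            returned, still_repairing = [], []
            for rook in self.repairing:
                rook.moves_left -= 1
                if rook.moves_left <= 0 and rook.square not in self.occupied:
                    self.occupied.add(rook.square)
                    returned.append(rook)
                else:
                    still_repairing.append(rook)
            self.repairing = still_repairing
            return returned
    ```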

    Recent Advances in General Game Playing

    Get PDF
    The goal of General Game Playing (GGP) has been to develop computer programs that can perform well across various game types. It is natural for human game players to transfer knowledge from games they already know how to play to other, similar games. GGP research attempts to design systems that work well across different game types, including previously unknown games. In this review, we present a survey of recent advances (2011 to 2014) in GGP for both traditional games and video games. It is notable that research on GGP has been expanding into modern video games. Monte-Carlo Tree Search and its enhancements have been the most influential techniques in GGP across both research domains. Additionally, international competitions have become important events that promote and stimulate GGP research. Recently, a video GGP competition was launched. In this survey, we review recent progress in the most challenging areas of Artificial Intelligence (AI) research related to universal game playing.
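
    For context, the core of the Monte-Carlo Tree Search technique named above is a selection rule that balances exploitation and exploration; in the widely used UCT variant, each child node is scored with the UCB1 formula. The sketch below shows only that selection step under generic assumptions (the node fields and exploration constant are illustrative), not any particular GGP system.

    ```python
    import math

    def uct_select(children, exploration=1.41):
        """Pick the child maximizing the UCB1 score:
        mean reward + exploration * sqrt(ln(parent visits) / child visits).
        `children` is a list of objects with `visits` and `total_reward` fields
        (an illustrative node layout, not tied to any specific GGP framework)."""
        parent_visits = sum(c.visits for c in children)

        def ucb1(c):
            if c.visits == 0:
                return float("inf")  # always try unvisited children first
            mean = c.total_reward / c.visits
            return mean + exploration * math.sqrt(math.log(parent_visits) / c.visits)

        return max(children, key=ucb1)
    ```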

    Poker as a Domain of Expertise

    Get PDF
    Poker is a game of skill and chance involving economic decision-making under uncertainty. It is also a complex but well-defined real-world environment with a clear rule structure. As such, poker has strong potential as a model system for studying high-stakes, high-risk expert performance. Poker has been increasingly used as a tool to study decision-making and learning, as well as emotion self-regulation. In this review, we discuss how these studies have begun to inform us about the interaction between emotions and technical skill, and how expertise develops and depends on these two factors. Expertise in poker critically requires both mastery of the technical aspects of the game and proficiency in emotion regulation; poker thus offers a good environment for studying these skills in controlled experimental settings of high external validity. We conclude by suggesting ideas for future research on expertise, with new insights provided by poker. Peer reviewed

    PokerKit: A Comprehensive Python Library for Fine-Grained Multi-Variant Poker Game Simulations

    Full text link
    PokerKit is an open-source Python library designed to overcome the restrictions of existing poker game simulation and hand evaluation tools, which typically support only a handful of poker variants and lack flexibility in game state control. In contrast, PokerKit significantly expands this scope by supporting an extensive array of poker variants and providing a flexible architecture for users to define their own custom games. This paper details the design and implementation of PokerKit, including its intuitive programmatic API, multi-variant game support, and a unified hand evaluation suite across different hand types. The flexibility of PokerKit allows for applications in diverse areas, such as poker AI development, tool creation, and online poker casino implementation. PokerKit's reliability has been established through static type checking, extensive doctests, and unit tests, achieving 97% code coverage. The introduction of PokerKit represents a significant contribution to the field of computer poker, fostering future research and advanced AI development for a wide variety of poker games.
    Comment: 6 pages, 1 figure, submission to IEEE Transactions on Games
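
    To illustrate what a unified, multi-variant hand evaluation interface of the kind described above can look like, here is a small self-contained sketch. It deliberately does not reproduce PokerKit's actual API; every class and function name below is a hypothetical stand-in used only to show the design idea of a single evaluation entry point dispatching over hand types.

    ```python
    from enum import Enum, auto

    class HandType(Enum):
        """Illustrative hand-type tags; the library's own identifiers differ."""
        STANDARD_HIGH = auto()    # e.g. Texas hold'em high hands
        ACE_TO_FIVE_LOW = auto()  # e.g. razz / lowball variants

    def evaluate_hand(cards, hand_type):
        """Single entry point that dispatches to a variant-specific evaluator.
        `cards` is a list of (rank, suit) tuples; returns a comparable score
        (higher is better). The scoring here is a toy placeholder."""
        if hand_type is HandType.STANDARD_HIGH:
            return sum(rank for rank, _ in cards)   # placeholder strength
        if hand_type is HandType.ACE_TO_FIVE_LOW:
            return -sum(rank for rank, _ in cards)  # lower ranks win
        raise ValueError(f"unsupported hand type: {hand_type}")

    # Usage: compare two hands under the same variant's rules.
    hand_a = [(14, "s"), (13, "s")]   # ranks 2..14, single-letter suits
    hand_b = [(7, "h"), (2, "c")]
    best = max([hand_a, hand_b], key=lambda h: evaluate_hand(h, HandType.STANDARD_HIGH))
    ```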

    Tailoring a psychophysiologically driven rating system

    Get PDF
    Humans have always been interested in ways to measure and compare their performances in order to establish who is best at a particular activity. The first Olympic Games, for instance, were held in 776 BC, a defining moment in history when ranking-based competitive activities reached the general populace. Every competition must face the issue of how to evaluate and rank competitors, and rules are often required to account for many different aspects, such as variations in conditions, the ability to cheat, and, of course, the value of entertainment. Nowadays, measurement is performed through various rating systems, which consider the outcomes of the activity to rate the participants. However, these systems do not seem to address the psychological aspects of an individual in a competition. This dissertation employs several psychophysiological assessment instruments with the aim of facilitating the acquisition of skill-level ratings in competitive gaming. To do so, an exergame was developed that uses non-conventional inputs, such as body tracking, to prevent input biases. The sample size of this study is ten, and the participants played a round-robin tournament to provide equal intervals between games for each player. Analysis of the competition's outcome revealed some critical insights into the psychophysiological instruments, especially the significance of Flow in relation to a player's productivity. Although the findings do not provide an alternative to traditional rating systems, they show the importance of considering other aspects of the competition, such as psychophysiological metrics, to fine-tune the rating. These could potentially reveal deeper insight into the competition than the binary outcome alone.

    A Match in Time Saves Nine: Deterministic Online Matching With Delays

    Full text link
    We consider the problem of online Min-cost Perfect Matching with Delays (MPMD) introduced by Emek et al. (STOC 2016). In this problem, an even number of requests appear in a metric space at different times, and the goal of an online algorithm is to match them in pairs. In contrast to traditional online matching problems, in MPMD all requests appear online and an algorithm can match any pair of requests, but such a decision may be delayed (e.g., to find a better match). The cost is the sum of matching distances and the introduced delays. We present the first deterministic online algorithm for this problem. Its competitive ratio is $O(m^{\log_2 5.5}) = O(m^{2.46})$, where $2m$ is the number of requests. This is polynomial in the number of metric space points if all requests are given at different points. In particular, the bound does not depend on other parameters of the metric, such as its aspect ratio. Unlike previous (randomized) solutions for the MPMD problem, our algorithm does not need to know the metric space in advance.
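
    To make the MPMD cost model concrete, the sketch below simulates requests arriving on a line metric and scores a naive strategy that immediately matches each new request to the oldest waiting one. This only illustrates the problem's cost accounting (connection distance plus accumulated delay); it is not the deterministic algorithm from the paper, and all names and the delay-cost convention are assumptions.

    ```python
    def mpmd_cost(requests, pairs):
        """requests: list of (arrival_time, point) on the real line.
        pairs: list of (i, j, match_time) with the time at which each match is made.
        Cost = matching distance + delay accumulated by both requests while waiting."""
        cost = 0.0
        for i, j, t in pairs:
            ti, xi = requests[i]
            tj, xj = requests[j]
            cost += abs(xi - xj)          # connection (distance) cost
            cost += (t - ti) + (t - tj)   # delay cost for both endpoints
        return cost

    def greedy_immediate(requests):
        """Naive online strategy: match each arriving request to the oldest
        unmatched one right away (zero extra delay, possibly long distances)."""
        waiting, pairs = [], []
        for idx, (t, _) in enumerate(requests):
            if waiting:
                pairs.append((waiting.pop(0), idx, t))
            else:
                waiting.append(idx)
        return pairs

    # Example: four requests (arrival time, position); cost of the naive strategy.
    reqs = [(0, 0.0), (1, 10.0), (2, 10.5), (5, 0.2)]
    print(mpmd_cost(reqs, greedy_immediate(reqs)))
    ```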