Deeper model endgame analysis
A reference model of Fallible Endgame Play has been implemented and exercised with the chess engine WILHELM. Past experiments have demonstrated the value of the model and the robustness of decisions based on it: the results agree well with Markov-model theory. Here, the reference model is exercised on the well-known endgame KBBKN
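As a rough illustration of the Markov-model view (a sketch under assumed dynamics, not the paper's actual model), a fallible player's endgame can be treated as a chain over depth-to-mate states in which a correct move reduces the depth by one and an error increases it; expected move counts then follow from value iteration:

```python
def expected_moves_to_mate(max_depth, p_error, iters=20000):
    """Expected number of moves to mate from each depth-to-mate state.

    Illustrative dynamics: with probability 1 - p_error the player makes
    the optimal move (depth falls by 1); with probability p_error the
    position worsens by one ply of depth (capped at max_depth).
    Requires p_error < 0.5 for the expectations to stay finite.
    """
    E = [0.0] * (max_depth + 1)          # E[0] = 0: mate already delivered
    for _ in range(iters):
        E = [0.0] + [
            1 + (1 - p_error) * E[d - 1] + p_error * E[min(d + 1, max_depth)]
            for d in range(1, max_depth + 1)
        ]
    return E
```

An error-free player needs exactly d moves from depth d; any positive error rate inflates that expectation, which is the kind of quantity such a model lets one compute and compare against engine experiments.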
Using the UM dynamical cores to reproduce idealised 3D flows
We demonstrate that both the current (New Dynamics) and next-generation (ENDGame) dynamical cores of the UK Met Office global circulation model, the UM, consistently reproduce the long-term, large-scale flows found in several published idealised tests. The cases presented are the Held-Suarez test, a simplified model of Earth (including a stratosphere), and a hypothetical tidally locked Earth. Furthermore, we show that using simplifications to the dynamical equations, which are expected to be justified for the physical domains and flow regimes we have studied, and which are supported by the ENDGame dynamical core, also produces matching long-term, large-scale flows. Finally, we present evidence for differences in the detail of the planetary flows and circulations resulting from improvements in the ENDGame formulation over New Dynamics.
Comment: 34 pages, 23 figures. Accepted for publication in Geoscientific Model Development (pre-proof version)
Winding Down the Atlantic Philanthropies: 2009-2010: Beginning the End Game
Reviews late-term program planning, including envisioning the end of the foundation and translating that vision into concrete plans. Examines challenges and opportunities for final grantmaking in the Population Health and Children and Youth programs
Search versus Knowledge: An Empirical Study of Minimax on KRK
This article presents the results of an empirical experiment designed to gain insight into the effect of the minimax algorithm on the evaluation function. The experiment's simulations were performed on the KRK chess endgame. Our results show that dependencies between evaluations of sibling nodes in a game tree, and the abundance of opportunities to commit blunders present in the KRK endgame, are not sufficient to explain the success of the minimax principle in practical game-playing, as was previously believed. The article shows that minimax in combination with a noisy evaluation function introduces a bias into the backed-up evaluations, and argues that this bias is what masked the effectiveness of minimax in previous studies
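The bias mechanism can be illustrated in a few lines (a hypothetical simulation, not the article's experiment): when a MAX node backs up the maximum of independently noisy child evaluations, the backed-up value systematically overestimates the true value.

```python
import random
import statistics

def backed_up_value(true_value, branching, noise_sd, rng):
    """One-ply max backup over children whose true value is identical
    but whose static evaluations carry independent Gaussian noise."""
    return max(true_value + rng.gauss(0.0, noise_sd) for _ in range(branching))

rng = random.Random(42)
samples = [backed_up_value(0.0, 5, 1.0, rng) for _ in range(10_000)]
bias = statistics.mean(samples)   # positive: the backup overestimates
```

For five children the expected maximum of standard-normal noise is about 1.16, so the backed-up evaluation is biased upward even though every child is objectively worth 0.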
Data assurance in opaque computations
The chess endgame is increasingly being seen through the lens of, and therefore effectively defined by, a data ‘model’ of itself. It is vital that such models are clearly faithful to the reality they purport to represent. This paper examines that issue and systems engineering responses to it, using the chess endgame as the exemplar scenario. A structured survey has been carried out of the intrinsic challenges and complexity of creating endgame data by reviewing the past pattern of errors during work in progress, surfacing in publications and occurring after the data was generated. Specific measures are proposed to counter observed classes of error-risk, including a preliminary survey of techniques for using state-of-the-art verification tools to generate EGTs that are correct by construction. The approach may be applied generically beyond the game domain
Performance and prediction: Bayesian modelling of fallible choice in chess
Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks
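The core inference step can be sketched as a discrete Bayes update over a set of reference agents (the agent names and likelihoods below are purely illustrative, not the paper's benchmark space):

```python
def update_posterior(prior, likelihoods):
    """One Bayes step: P(agent | move) is proportional to P(agent) * P(move | agent).

    prior       -- dict mapping agent name to prior probability
    likelihoods -- dict mapping agent name to P(observed move | agent)
    """
    unnormalised = {agent: prior[agent] * likelihoods[agent] for agent in prior}
    total = sum(unnormalised.values())
    return {agent: mass / total for agent, mass in unnormalised.items()}

# Hypothetical benchmark space: two reference agents, equally likely a priori.
posterior = update_posterior(
    prior={"near-optimal": 0.5, "error-prone": 0.5},
    likelihoods={"near-optimal": 0.9, "error-prone": 0.3},
)
```

Observing a move that the near-optimal agent plays far more often shifts the posterior towards that agent; repeating the step over a whole game's moves yields an assessment of the player's skill.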
Reference fallible endgame play
A reference model of fallible endgame play is defined in terms of a spectrum of endgame players whose play ranges in competence from the optimal to the anti-optimal choice of move. They may be used as suitably skilled practice partners, to assess a player, to differentiate between otherwise equi-optimal moves, to promote or expedite a game result, to run Monte-Carlo simulations, and to identify the difficulty of a position or a whole endgame
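One simple way to realise such a spectrum (an illustrative parameterisation, not the paper's definition) is a single competence parameter that interpolates from anti-optimal through uniformly random to optimal play:

```python
import math
import random

def choose_move(move_values, competence, rng):
    """Sample a move index with probability proportional to exp(competence * value).

    competence -> +infinity approaches the optimal player, -infinity the
    anti-optimal player, and 0 chooses uniformly at random.
    """
    weights = [math.exp(competence * value) for value in move_values]
    threshold = rng.random() * sum(weights)
    for index, weight in enumerate(weights):
        threshold -= weight
        if threshold <= 0:
            return index
    return len(weights) - 1
```

Sweeping the competence parameter yields the whole family of practice partners at once, and fitting it to a player's observed choices gives one way to assess that player.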
Stalemate and 'DTS' depth to stalemate endgame tables
Stalemating the opponent in chess has given rise to various opinions as to the nature of that result and the reward it should properly receive. Here, following Lasker and Reti, we propose that ‘stalemate’ is a secondary goal, superior to a draw by agreement or rule – but inferior to mate. We report the work of ‘Aloril’ who has created endgame tables holding both ‘DTM’ depth to mate and ‘DTS’ depth to stalemate data, and who should be regarded as the prime author of this paper. Further, we look at the classification of ‘Chess Stalemate Studies’ in the context of a ‘Lasker Chess’ which recognises the stalemate goal
Gentlemen, stop your engines!
For fifty years, computer chess has pursued an original goal of Artificial Intelligence: to produce a chess engine that competes at the highest level. The goal has arguably been achieved, but that success has made it harder to answer questions about the relative playing strengths of man and machine. The proposal here is to approach such questions in a counter-intuitive way, handicapping or stopping-down chess engines so that they play less well. The intrinsic lack of man-machine games may be side-stepped by analysing existing games to place computer engines as accurately as possible on the FIDE Elo scale of human play. Move-sequences may also be assessed for likelihood if computer-assisted cheating is suspected
Data-mining chess databases
This is a report on the data-mining of two chess databases, the objective being to compare their sub-7-man content with perfect play as documented in Nalimov endgame tables. Van der Heijden’s ENDGAME STUDY DATABASE IV is a definitive collection of 76,132 studies in which White should have an essentially unique route to the stipulated goal. Chessbase’s BIG DATABASE 2010 holds some 4.5 million games. Insight gained into both database content and data-mining has led to some delightful surprises and created a further agenda