13,241 research outputs found

    A pattern-recognition theory of search in expert problem solving

    Understanding how look-ahead search and pattern recognition interact is one of the important research questions in the study of expert problem solving. This paper examines the implications of the template theory (Gobet & Simon, 1996a), a recent theory of expert memory, for the theory of problem solving in chess. Templates are "chunks" (Chase & Simon, 1973) that have evolved into more complex data structures and that possess slots allowing values to be encoded rapidly. Templates may facilitate search in three ways: (a) by allowing information to be stored in LTM rapidly; (b) by allowing a search in the template space in addition to a search in the move space; and (c) by compensating for losses in the "mind's eye" due to interference and decay. A computer model implementing the main ideas of the theory is presented, and simulations of its search behaviour are discussed. The template theory accounts for the slight skill difference in average depth of search found among chess players, as well as for other empirical data.

    Search versus Knowledge: An Empirical Study of Minimax on KRK

    This article presents the results of an empirical experiment designed to gain insight into the effect of the minimax algorithm on the evaluation function. The experiment's simulations were performed on the KRK chess endgame. Our results show that dependencies between evaluations of sibling nodes in a game tree, together with the abundance of opportunities to commit blunders in the KRK endgame, are not sufficient to explain the success of the minimax principle in practical game playing, as was previously believed. The article shows that minimax in combination with a noisy evaluation function introduces a bias into the backed-up evaluations, and argues that this bias is what masked the effectiveness of minimax in previous studies.
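    The bias described above can be reproduced in a toy model. The sketch below is not the paper's KRK setup; it assumes a uniform tree with branching factor 3, every leaf sharing the same true value, and a hypothetical evaluation function that adds zero-mean Gaussian noise. Even though the noise is unbiased, backing values up through a max node shifts the root estimate away from the true value:

```python
import random
import statistics

def noisy_eval(true_value, sigma):
    # Hypothetical evaluation function: true value plus zero-mean Gaussian noise.
    return true_value + random.gauss(0.0, sigma)

def minimax(depth, maximizing, true_value, sigma, branching=3):
    # Back up noisy leaf evaluations through a uniform toy game tree in which
    # every leaf has the same true value (an assumption made for the demo).
    if depth == 0:
        return noisy_eval(true_value, sigma)
    child_values = [
        minimax(depth - 1, not maximizing, true_value, sigma, branching)
        for _ in range(branching)
    ]
    return max(child_values) if maximizing else min(child_values)

random.seed(0)
# Every position's true value is 0.0, so an unbiased search would return
# 0.0 on average.  A one-ply max over noisy evaluations is already biased
# upward: it tends to select the child whose noise happens to be largest.
one_ply = statistics.mean(minimax(1, True, 0.0, 1.0) for _ in range(2000))
print(f"mean backed-up value at a max node: {one_ply:+.3f}")  # noticeably above 0
```

At deeper depths the min and max levels push the bias in alternating directions, which is why the interaction between tree shape and noise, rather than noise alone, determines the backed-up error.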

    Assessing Human Error Against a Benchmark of Perfection

    An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time. (Comment: KDD 2016; 10 pages)
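    A minimal sketch of the kind of framework the abstract describes, using synthetic data and hypothetical feature names (skill, time remaining, difficulty) rather than the paper's actual features: fit a plain logistic regression, in pure Python, to predict whether a decision is an error. The toy data is generated so that difficulty dominates, echoing the abstract's finding:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(rows, labels, lr=0.1, epochs=500):
    # Stochastic-gradient-descent logistic regression, no external libraries.
    w = [0.0] * (len(rows[0]) + 1)  # w[0] is the bias term
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi
    return w

random.seed(1)
# Synthetic decisions: [skill, time_left, difficulty], each in [0, 1].
# The error probability is driven mostly by difficulty (an assumption
# built into this toy generator, not a result from the paper's data).
data, labels = [], []
for _ in range(400):
    skill, time_left, difficulty = (random.random() for _ in range(3))
    p_error = sigmoid(4.0 * difficulty - 1.0 * skill - 0.5 * time_left - 1.0)
    data.append([skill, time_left, difficulty])
    labels.append(1 if random.random() < p_error else 0)

w = train_logistic(data, labels)
# With this generator, the learned difficulty weight w[3] should have the
# largest magnitude of the three feature weights.
print(f"skill={w[1]:+.2f}  time={w[2]:+.2f}  difficulty={w[3]:+.2f}")
```

In the paper's setting the error label comes from tablebase ground truth (a move is an error if it worsens the game-theoretic value of the position); here the label is simulated.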