
    Decision-making and strategic thinking through analogies

    When faced with a complex scenario, how does understanding arise in one's mind? How does one integrate disparate cues into a global, meaningful whole? Consider the game of chess: how do humans avoid the combinatorial explosion? How are abstract ideas represented? The purpose of this paper is to propose a new computational model of human chess intuition and intelligence. We suggest that analogies and abstract roles are crucial to solving these landmark problems. We present a proof-of-concept model, in the form of a computational architecture, which may be able to account for many crucial aspects of human intuition, such as (i) concentration of attention on relevant aspects, (ii) how humans may avoid the combinatorial explosion, (iii) perception of similarity at a strategic level, and (iv) a state of meaningful anticipation of how a global scenario may evolve.
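
    To make the notion of abstract roles concrete, the sketch below (our own illustration, not the paper's architecture; the role vocabulary and the use of the python-chess library are assumptions) abstracts a position into piece-type-level attack relations and scores the strategic similarity of two positions as the overlap of their role sets, ignoring the exact squares involved.

        # A hypothetical role-based similarity measure: positions are reduced
        # to bags of (attacker type, victim type, attacker colour) relations.
        import chess

        def attack_roles(board: chess.Board) -> set:
            """Abstract a position into piece-type-level attack relations."""
            roles = set()
            for square, piece in board.piece_map().items():
                for target in board.attacks(square):
                    victim = board.piece_at(target)
                    if victim and victim.color != piece.color:
                        roles.add((piece.symbol().upper(),
                                   victim.symbol().upper(),
                                   piece.color == chess.WHITE))
            return roles

        def strategic_similarity(a: chess.Board, b: chess.Board) -> float:
            """Jaccard overlap of abstract attack roles, ignoring squares."""
            ra, rb = attack_roles(a), attack_roles(b)
            return len(ra & rb) / len(ra | rb) if ra | rb else 1.0

        if __name__ == "__main__":
            italian, spanish = chess.Board(), chess.Board()
            for san in "e4 e5 Nf3 Nc6 Bc4".split():
                italian.push_san(san)
            for san in "e4 e5 Nf3 Nc6 Bb5".split():
                spanish.push_san(san)
            print(f"Italian vs Spanish: {strategic_similarity(italian, spanish):.2f}")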

    Data assurance in opaque computations

    The chess endgame is increasingly being seen through the lens of, and therefore effectively defined by, a data 'model' of itself. It is vital that such models are clearly faithful to the reality they purport to represent. This paper examines that issue and systems-engineering responses to it, using the chess endgame as the exemplar scenario. A structured survey has been carried out of the intrinsic challenges and complexity of creating endgame data by reviewing the past pattern of errors during work in progress, surfacing in publications, and occurring after the data was generated. Specific measures are proposed to counter observed classes of error-risk, including a preliminary survey of techniques for using state-of-the-art verification tools to generate endgame tables (EGTs) that are correct by construction. The approach may be applied generically beyond the game domain.
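
    As a concrete illustration of one class of countermeasure, the sketch below (a toy under our own assumptions, not the paper's method) audits a depth-to-mate table for local consistency: a position claiming mate in n plies must have a move that confirms the claim, either directly by the rules of chess (n = 1) or through a successor entry claiming loss in n - 1 plies.

        # Hypothetical table format: FEN -> signed DTM in plies, positive =
        # side to move mates in that many plies; checkmated positions are not
        # stored. Real EGTs use compact binary encodings, but the recurrence
        # being checked is the same.
        from typing import Optional

        import chess

        def audit_win(fen: str, table: dict) -> Optional[str]:
            """Check the DTM recurrence at one claimed winning position."""
            board = chess.Board(fen)
            dtm = table[fen]
            assert dtm > 0, "this sketch audits claimed wins only"
            for move in board.legal_moves:
                board.push(move)
                if dtm == 1 and board.is_checkmate():
                    board.pop()
                    return None          # mate in one, confirmed by the rules
                child = table.get(board.fen())
                board.pop()
                if dtm > 1 and child == -(dtm - 1):
                    return None          # an optimal reply exists in the table
            return f"{fen}: claims mate in {dtm} plies, but no move confirms it"

        toy = {"7k/5Q2/6K1/8/8/8/8/8 w - - 0 1": 1}   # made-up mate-in-one entry
        for fen in toy:
            print(audit_win(fen, toy) or f"{fen}: consistent")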

    Multimodal Observation and Interpretation of Subjects Engaged in Problem Solving

    In this paper we present the first results of a pilot experiment in the capture and interpretation of multimodal signals from human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye gaze, posture, emotion, and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detecting human displays of awareness and emotion. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict their ability to respond effectively to challenging situations. Results show that a multimodal approach is more accurate than a unimodal one: by combining body posture, visual attention, and emotion, the multimodal approach reaches up to 93% accuracy in determining a player's chess expertise, whereas a unimodal approach reaches 86%. Finally, this experiment validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
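
    The gain from combining modalities can be illustrated with a late-fusion sketch: one classifier per modality, with class probabilities averaged at decision time. Everything below is a synthetic placeholder (feature streams, labels, and models), not the paper's sensors or pipeline.

        # Late fusion over three stand-in modality streams.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        labels = rng.integers(0, 2, n)   # 0 = novice, 1 = expert (synthetic)
        modalities = {name: labels[:, None] * 0.8 + rng.normal(size=(n, 4))
                      for name in ("posture", "gaze", "emotion")}

        models = {name: LogisticRegression().fit(X[:150], labels[:150])
                  for name, X in modalities.items()}

        for name, model in models.items():
            acc = model.score(modalities[name][150:], labels[150:])
            print(f"{name:8s} alone: {acc:.2f}")
        # Fused decision: average the per-modality class probabilities.
        fused = np.mean([m.predict_proba(modalities[k][150:])
                         for k, m in models.items()], axis=0)
        print(f"fused   : {np.mean(fused.argmax(1) == labels[150:]):.2f}")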

    Chess Endgames and Neural Networks

    The existence of endgame databases challenges us to extract higher-grade information and knowledge from their basic data content. Chess players, for example, would like simple and usable endgame theories, if such a holy grail exists; endgame experts would like to provide such insights and to be inspired by computers to do so. Here, we investigate the use of artificial neural networks (NNs) to mine these databases, and we report on a first use of NNs on the KPK (king and pawn versus king) endgame. The results encourage us to suggest further work on chess applications of neural networks and other data-mining techniques.
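
    One way to set up such an experiment, sketched under our own assumptions: encode each KPK position as a fixed-length vector (one-hot squares for the two kings and the pawn, plus the side to move) and fit a small network. The labels below are random placeholders; a real run would read game-theoretic values from the KPK endgame table, which is not bundled here.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def encode(wk: int, wp: int, bk: int, white_to_move: bool) -> np.ndarray:
            """64-dim one-hot per piece (wK, wP, bK) plus a side-to-move flag."""
            x = np.zeros(3 * 64 + 1)
            for i, sq in enumerate((wk, wp, bk)):
                x[i * 64 + sq] = 1.0
            x[-1] = float(white_to_move)
            return x

        rng = np.random.default_rng(1)
        squares = rng.choice(64, size=(2000, 3))   # random, unvalidated placements
        X = np.array([encode(a, b, c, bool(i % 2))
                      for i, (a, b, c) in enumerate(squares)])
        y = rng.integers(0, 2, 2000)               # placeholder win/draw labels
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
        print(f"training accuracy: {net.score(X, y):.2f}")  # meaningless here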

    Assessing Human Error Against a Benchmark of Perfection

    An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of those decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging even for the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.
    Comment: KDD 2016; 10 pages
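
    The three feature categories suggest a simple baseline: a logistic model of error probability in skill, time, and difficulty. The data below is synthetic, constructed so that difficulty dominates, mirroring (not reproducing) the reported finding; coefficients and scales are illustrative only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 10_000
        skill = rng.normal(size=n)        # e.g. standardized player rating
        time_left = rng.normal(size=n)    # e.g. standardized clock time
        difficulty = rng.normal(size=n)   # e.g. tablebase-derived hardness
        logit = -1.0 + 1.5 * difficulty - 0.3 * skill - 0.2 * time_left
        error = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = np.column_stack([skill, time_left, difficulty])
        model = LogisticRegression().fit(X, error)
        for name, coef in zip(("skill", "time", "difficulty"), model.coef_[0]):
            print(f"{name:10s} weight: {coef:+.2f}")   # difficulty dominates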

    Simulating What?

    Any attempt to simulate science has first to say what science is. This involves asking three questions: 1) The Scope Question: what bit of science is the target? It is immensely confusing (as the history of these debates shows) if one simulates some little aspect of science, as in the case of BACON, and then claims that one has built a machine that can 'do science'. 2) The Micro-World Question: is the criterion of success the reproduction of human science, with all the same findings turning up, or the simulation of something that is believed to be a scientific process, with results that pertain only to the world of the simulation and do not correspond to the outcome of human science as we know it? If the latter, it will be important to be sure that one is not merely developing a 'micro-world', a world so tidied up for the purposes of simulation that it does not bear on human science. 3) The Chess Question: even if the idea is to reach the same results as those reached by human science, does it have to be by 'the same' means in order to count as a simulation of human science? I call it the 'chess question' because Deep Blue does not play in the same way as human grandmasters but is still better at winning.
    Keywords: Science, Language, Demarcation, Micro-World, BACON, Chess