
    Implicit learning of expert chess knowledge

    This article discusses how CHREST's mechanisms lead to the implicit learning of a large number of chunks, which underpin (expert) behaviour in a number of domains. Results from chess research are discussed.

    Chunk hierarchies and retrieval structures: Comments on Saariluoma and Laine

    The empirical results of Saariluoma and Laine (in press) are discussed and their computer simulations are compared with CHREST, a computational model of perception, memory and learning in chess. Mathematical functions such as power functions and logarithmic functions account for Saariluoma and Laine's (in press) correlation heuristic and for CHREST very well. However, these functions fit human data well only with game positions, not with random positions. As CHREST, which learns using spatial proximity, accounts for the human data as well as Saariluoma and Laine's (in press) correlation heuristic, their conclusion that frequency-based heuristics match the data better than proximity-based heuristics is questioned. The idea of flat chunk organisation and its relation to retrieval structures is discussed. In the conclusion, emphasis is given to the need for detailed empirical data, including information about chunk structure and types of errors, for discriminating between various learning algorithms.
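    The fitting of power and logarithmic functions mentioned in the abstract can be sketched as follows. This is a minimal illustration with entirely hypothetical data, not the paper's actual dataset or fitting procedure: both function families are linear under a suitable transformation, so an ordinary least-squares fit suffices.

    ```python
    import numpy as np

    # Hypothetical data: chunks recalled as a function of hours of practice.
    hours = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
    recall = np.array([5.0, 8.0, 13.0, 21.0, 34.0, 55.0])

    # Power function y = a * x^b is linear in log-log space:
    # ln(y) = ln(a) + b * ln(x)
    b_pow, log_a = np.polyfit(np.log(hours), np.log(recall), 1)
    a_pow = np.exp(log_a)

    # Logarithmic function y = a + b * ln(x) is linear in semi-log space.
    b_log, a_log = np.polyfit(np.log(hours), recall, 1)

    # Compare goodness of fit via residual sum of squares.
    pred_pow = a_pow * hours ** b_pow
    pred_log = a_log + b_log * np.log(hours)
    rss_pow = float(np.sum((recall - pred_pow) ** 2))
    rss_log = float(np.sum((recall - pred_log) ** 2))
    ```

    With data of this (roughly geometric) shape the power function fits far better than the logarithmic one; which family wins on real recall data is exactly the kind of empirical question the abstract addresses.
    
    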

    Discrimination nets, production systems and semantic networks: Elements of a unified framework

    A number of formalisms have been used in cognitive science to account for cognition in general and learning in particular. While this variety denotes a healthy state of theoretical development, it somewhat hampers communication between researchers championing different approaches and makes comparison between theories difficult. In addition, it has the consequence that researchers tend to study cognitive phenomena best suited to their favorite formalism. It is therefore desirable to propose frameworks which span traditional formalisms. In this paper, we pursue two goals: first, to show how three (symbolic) formalisms widely used in theorizing about and in simulating human cognition—discrimination nets, semantic networks and production systems—may be used in a single, conceptually unified framework; and second, to show how this framework can be used to develop a comprehensive theory of learning. Within this theory, learning is construed as (a) developing perceptual and conceptual discrimination nets, (b) adding semantic links, and (c) creating productions. We start by giving a brief description of each of these formalisms; we then describe a theoretical framework that incorporates the three formalisms, and show how these may coexist. Throughout this description, examples from chess, a highly studied field of expertise and a classical object of study in cognitive science, will be provided. These examples will illustrate how the framework can be worked out into a more detailed cognitive theory. Finally, we draw some theoretical consequences of the framework proposed here.
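    Of the three formalisms named in the abstract, the discrimination net is the most data-structure-like: a tree in which patterns are sorted feature by feature, and learning grows new branches where sorting stops short. The following is a minimal sketch of that idea; the node layout, feature tuples, and chess-flavoured labels are illustrative, not the paper's actual representation.

    ```python
    class Node:
        """One node in a discrimination net; its image is the stored chunk."""
        def __init__(self, image=None):
            self.image = image
            self.children = {}  # feature -> child node

    def sort_pattern(root, pattern):
        """Sort a pattern (a tuple of features) down the net,
        stopping at the deepest matching node."""
        node = root
        for feature in pattern:
            if feature not in node.children:
                break
            node = node.children[feature]
        return node

    def discriminate(root, pattern):
        """Learn: grow a new branch wherever sorting stops short."""
        node = root
        for feature in pattern:
            if feature not in node.children:
                node.children[feature] = Node(image=feature)
            node = node.children[feature]
        return node

    root = Node()
    discriminate(root, ("white-pawn", "e4"))
    discriminate(root, ("white-pawn", "d4"))  # shares the first test
    leaf = sort_pattern(root, ("white-pawn", "e4"))
    ```

    Note how the two learned patterns share the initial "white-pawn" test: the net discriminates only where the patterns differ, which is what keeps such structures compact as the chunk count grows.
    
    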

    Individual data analysis and Unified Theories of Cognition: A methodological proposal

    Unified theories regularly appear in psychology. They also regularly fail to fulfil all of their goals. Newell (1990) called for their revival, using computer modelling as a way to avoid the pitfalls of previous attempts. His call, embodied in the Soar project, has so far, however, failed to produce the breakthrough it promised. One of the reasons for the lack of success of Newell’s approach is that the methodology commonly used in psychology, based on controlling potentially confounding variables by using group data, is not the best way forward for developing unified theories of cognition. Instead, we propose an approach where (a) the problems related to group averages are alleviated by analysing subjects individually; (b) there is a close interaction between theory building and experimentation; and (c) computer technology is used to routinely test versions of the theory on a wide range of data. The advantages of this approach heavily outweigh the disadvantages.

    What do connectionist simulations tell us?

    In his review, Rispoli’s main concern is that Elman et al.’s book will aggravate the degree of polarisation in developmental psycholinguistics. I cannot really comment on this worry, as developmental psycholinguistics is not my field. Instead, I will discuss some questions more related to my background: the role of computational modelling in Elman et al.’s approach.

    Herbert Simon (1916-2001). The scientist of the artificial

    With the death of Herbert A. Simon, we have lost one of the most original thinkers of the 20th century. Highly influential in a number of scientific fields—some of which he actually helped create, such as artificial intelligence or information-processing psychology—Simon was a true polymath. His research started in management science and political science, later encompassed operations research, statistics and economics, and finally included computer science, artificial intelligence, psychology, education, philosophy of science, biology, and the sciences of design. His often controversial ideas earned him wide scientific recognition and essentially all the top awards of the fields in which he researched, including the Turing Award from the Association for Computing Machinery, with Allen Newell, in 1975, the Nobel prize in economics, in 1978, and the Gold Medal Award for Psychological Science from the American Psychological Foundation, in 1988.

    Attention mechanisms in the CHREST cognitive architecture

    In this paper, we describe the attention mechanisms in CHREST, a computational architecture of human visual expertise. CHREST organises information acquired by direct experience from the world in the form of chunks. These chunks are searched for, and verified, by a unique set of heuristics, comprising the attention mechanism. We explain how the attention mechanism combines bottom-up and top-down heuristics from internal and external sources of information. We describe some experimental evidence demonstrating the correspondence of CHREST’s perceptual mechanisms with those of human subjects. Finally, we discuss how visual attention can play an important role in actions carried out by human experts in domains such as chess.
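    The combination of bottom-up and top-down heuristics described in the abstract can be illustrated with a toy fixation-selection scheme. This is a hedged sketch, not CHREST's actual mechanism: the scoring functions, weights, and board representation are all hypothetical.

    ```python
    def bottom_up_salience(square, board):
        """Stimulus-driven heuristic: prefer squares that contain a piece."""
        return 1.0 if board.get(square) else 0.0

    def top_down_relevance(square, known_chunks):
        """Knowledge-driven heuristic: prefer squares occurring in
        previously learned chunks."""
        return sum(1.0 for chunk in known_chunks if square in chunk)

    def next_fixation(squares, board, known_chunks, w_bu=0.5, w_td=0.5):
        """Weight and combine both heuristics, then fixate the
        highest-scoring square."""
        scores = {s: w_bu * bottom_up_salience(s, board)
                     + w_td * top_down_relevance(s, known_chunks)
                  for s in squares}
        return max(scores, key=scores.get)

    board = {"e4": "P", "d4": "N"}   # hypothetical piece placement
    known_chunks = [{"d4", "d5"}]    # one previously learned chunk
    target = next_fixation(["e4", "d4", "a1"], board, known_chunks)
    ```

    Here the occupied square that also appears in a learned chunk wins out over a square that is merely occupied, which is the basic point of combining internal (chunk-based) and external (stimulus-based) sources of information.
    
    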

    The role of constraints in expert memory

    A great deal of research has been devoted to developing process models of expert memory. However, K. J. Vicente and J. H. Wang (1998) proposed (a) that process theories do not provide an adequate account of expert recall in domains in which memory recall is a contrived task and (b) that a product theory, the constraint attunement hypothesis (CAH), has received a significant amount of empirical support. We compared 1 process theory (the template theory; TT; F. Gobet & H. A. Simon, 1996c) with the CAH in chess. Chess players (N = 36) differing widely in skill levels were required to recall briefly presented chess positions that were randomized in various ways. Consistent with TT, but inconsistent with the CAH, there was a significant skill effect in a condition in which both the location and distribution of the pieces were randomized. These and other results suggest that process models such as TT can provide a viable account of expert memory in chess.

    Applying multi-criteria optimisation to develop cognitive models

    A scientific theory is developed by modelling empirical data in a range of domains. The goal of developing a theory is to optimise the fit of the theory to as many experimental settings as possible, whilst retaining some qualitative properties such as 'parsimony' or 'comprehensibility'. We formalise the task of developing theories of human cognition as a problem in multi-criteria optimisation. There are many challenges in this task, including the representation of competing theories, coordinating the fit with multiple experiments, and bringing together competing results to provide suitable theories. Experiments demonstrate the development of a theory of categorisation, using multiple optimisation criteria in genetic algorithms to locate Pareto-optimal sets.
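    The Pareto-optimal sets mentioned in the abstract can be illustrated in a few lines. This is a minimal sketch of extracting the non-dominated front from a pool of candidate models, each scored on two criteria to be minimised (say, misfit to data and a complexity penalty); the candidate scores below are hypothetical, and a genetic algorithm as used in the paper would generate such pools iteratively rather than enumerate them.

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse on every criterion
        and strictly better on at least one (minimisation)."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def pareto_front(candidates):
        """Keep only candidates not dominated by any other candidate."""
        return [c for c in candidates
                if not any(dominates(other, c)
                           for other in candidates if other != c)]

    # (misfit, complexity) scores for hypothetical candidate theories.
    models = [(0.9, 1), (0.5, 3), (0.3, 7), (0.6, 4), (0.3, 9)]
    front = pareto_front(models)
    ```

    The front here trades fit against parsimony: no member can be improved on one criterion without worsening the other, which is why multi-criteria optimisation returns a set of theories rather than a single winner.
    
    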