CHREST tutorial: Simulations of human learning
CHREST (Chunk Hierarchy and REtrieval STructures) is a comprehensive, computational model of human learning and perception. It has been used successfully to simulate data in a variety of domains, including the acquisition of syntactic categories, expert behaviour, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. The aim of this tutorial is to provide participants with an introduction to CHREST, to show how it can be used to model various phenomena, and to give them the knowledge to carry out their own modelling experiments.
EPAM/CHREST tutorial: Fifty years of simulating human learning
Generating quantitative predictions for complex cognitive phenomena requires precise implementations of the underlying cognitive theory. This tutorial focuses on the EPAM/CHREST tradition, which has been providing significant models of human behaviour for 50 years.
The CHREST model of active perception and its role in problem solving
We discuss the relation of TEC to a computational model of expert perception, CHREST, which is based on the chunking theory. TEC's status as a verbal theory leaves several questions unanswerable, such as the precise nature of the internal representations used, or the degree of learning required to reach a particular level of competence; CHREST may help answer such questions.
Discrimination nets, production systems and semantic networks: Elements of a unified framework
A number of formalisms have been used in cognitive science to account for cognition in general and learning in particular. While this variety denotes a healthy state of theoretical development, it somewhat hampers communication between researchers championing different approaches and makes comparison between theories difficult. In addition, it has the consequence that researchers tend to study cognitive phenomena best suited to their favorite formalism. It is therefore desirable to propose frameworks which span traditional formalisms.
In this paper, we pursue two goals: first, to show how three (symbolic) formalisms widely used in theorizing about and in simulating human cognition (discrimination nets, semantic networks and production systems) may be used in a single, conceptually unified framework; and second, to show how this framework can be used to develop a comprehensive theory of learning. Within this theory, learning is construed as (a) developing perceptual and conceptual discrimination nets, (b) adding semantic links, and (c) creating productions.
We start by giving a brief description of each of these formalisms; we then describe a theoretical framework that incorporates the three formalisms, and show how they may coexist. Throughout this description, examples from chess, a highly studied field of expertise and a classical object of study in cognitive science, will be provided. These examples will illustrate how the framework can be worked out into a more detailed cognitive theory. Finally, we draw some theoretical consequences of the framework proposed here.
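As an illustration of how the three formalisms can coexist in one data structure, here is a minimal Python sketch. The class and method names are my own, not the paper's: a pattern is sorted through a discrimination net; the node it reaches can carry lateral semantic links; and attaching an action to a node turns it into the condition side of a production.

```python
# Hypothetical sketch of the unified framework: discrimination net,
# semantic links, and productions on a single node type.

class Node:
    def __init__(self, image):
        self.image = image          # the pattern (chunk) stored at this node
        self.children = {}          # test feature -> child node (discrimination net)
        self.semantic_links = set() # lateral links to related nodes
        self.action = None          # a non-None action makes this node a production

class Net:
    def __init__(self):
        self.root = Node(())

    def sort(self, pattern):
        """Walk the discrimination net as far as the pattern's features allow."""
        node = self.root
        for feature in pattern:
            if feature not in node.children:
                break
            node = node.children[feature]
        return node

    def discriminate(self, pattern):
        """Learning (a): grow the net by one test below the sorted node."""
        node = self.sort(pattern)
        depth = len(node.image)
        if depth < len(pattern):
            feature = pattern[depth]
            child = Node(node.image + (feature,))
            node.children[feature] = child
            return child
        return node

net = Net()
for _ in range(3):                          # repeated exposure grows the chunk
    net.discriminate(("pawn", "e4"))
a = net.sort(("pawn", "e4"))
b = net.discriminate(("pawn", "d4"))
a.semantic_links.add(b)                     # learning (b): add a semantic link
a.action = "consider King's Pawn openings"  # learning (c): create a production
```

The point of the sketch is that nothing forces the three formalisms apart: the same node participates in the net's test hierarchy, the semantic network, and the production system.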
Modelling optional infinitive phenomena: A computational account of tense optionality in children's speech
The Optional Infinitive hypothesis proposed by Wexler (1994) is a theory of children's early grammatical development that can be used to explain a variety of phenomena in children's early multi-word speech. However, Wexler's theory attributes a great deal of abstract knowledge to the child on the basis of rather weak empirical evidence. In this paper we present a computational model of early grammatical development which simulates Optional Infinitive phenomena as a function of the interaction between a performance-limited distributional analyser and the statistical properties of the input. Our findings undermine the claim that Optional Infinitive phenomena require an abstract grammatical analysis.
Simple environments fail as illustrations of intelligence: A review of R. Pfeifer and C. Scheier
The field of cognitive science has always supported a variety of modes of research, often polarised into those seeking high-level explanations of intelligence and those seeking low-level, perhaps even neuro-physiological, explanations. Each of these research directions permits, at least in part, a similar methodology based around the construction of detailed computational models, which justify their explanatory claims by matching behavioural data. We are fortunate at this time to witness the culmination of several decades of work from each of these research directions, and hopefully to find within them the basic ideas behind a complete theory of human intelligence. It is in this spirit that Rolf Pfeifer and Christian Scheier have written their book Understanding Intelligence. However, their aim is manifestly not to present an overview of all prior work in this field, but instead to argue forcefully for one particular interpretation: a synthetic approach, based around the explicit construction of autonomous agents. This approach is characterised by the Embodiment Hypothesis, which is presented as a complete framework for investigating intelligence, and exemplified by a number of computational models and robots to illustrate just how the field of cognitive science might develop in the future. We first provide an overview of their book, before describing some of our reservations about its contribution towards an understanding of intelligence.
Pattern recognition makes search possible: Comments on Holding (1992)
Chase and Simon's (1973) chunking theory of expert memory, which emphasizes the role of pattern recognition in problem solving, has attracted much attention in cognitive psychology. Holding (1992) advanced a series of criticisms that, taken together, purported to refute the theory. Two valid criticisms (that chunk size and LTM encoding were underestimated) are dealt with by a simple extension of the theory (Gobet & Simon, 1996a). The remainder of Holding's criticisms either are not empirically founded or are based on a misunderstanding of the chunking theory and its role in a comprehensive theory of skill. Holding's alternative SEEK theory, which emphasizes the role of search, lacks key mechanisms that could be implemented by the type of pattern recognition proposed by Chase and Simon (1973).
Modeling children's case marking errors with MOSAIC
We present a computational model of early grammatical development which simulates case-marking errors in children's early multi-word speech as a function of the interaction between a performance-limited distributional analyser and the statistical properties of the input. The model is presented with a corpus of maternal speech, from which it constructs a network of nodes representing the words or sequences of words present in the input. It is sensitive to the distributional properties of items occurring in the input and is able to create "generative" links between words which occur frequently in similar contexts, building pseudo-categories. The only information the model receives is that present in the input corpus. After training, the model is able to produce child-like utterances, including case-marking errors; a proportion of these are rote-learned, but the majority are not present in the maternal corpus. The latter are generated by traversing the generative links formed between items in the network.
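The linking mechanism described above can be sketched in a few lines of Python. This is my own toy construction, not MOSAIC's implementation: two words that occur in many of the same left/right contexts receive a generative link, forming a pseudo-category.

```python
# Toy sketch of distributional "generative" links: the corpus, threshold,
# and overlap measure are illustrative assumptions, not the model's own.
from collections import defaultdict

corpus = [
    "the cat sees a ball",
    "the dog sees a ball",
    "the cat wants a toy",
    "the dog wants a toy",
]

# Record each word's set of (left neighbour, right neighbour) contexts.
contexts = defaultdict(set)
for utterance in corpus:
    words = utterance.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts[w].add((left, right))

def overlap(w1, w2):
    """Proportion of shared contexts between two words (0..1)."""
    shared = contexts[w1] & contexts[w2]
    union = contexts[w1] | contexts[w2]
    return len(shared) / len(union)

# Create a generative link when context overlap exceeds a threshold.
generative_links = {
    (w1, w2)
    for w1 in contexts for w2 in contexts
    if w1 < w2 and overlap(w1, w2) >= 0.5
}
```

On this tiny corpus, "cat"/"dog" and "sees"/"wants" become linked; traversing such a link is what lets a model substitute one linked word for another, producing novel utterances (including errors) that were never in the input.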
Modelling the development of Dutch Optional Infinitives in MOSAIC.
This paper describes a computational model which simulates the change in the use of optional infinitives that is evident in children learning Dutch as their first language. The model, developed within the framework of MOSAIC, takes naturalistic, child-directed speech as its input, and analyses the distributional regularities present in that input. It gradually learns to generate longer utterances as it sees more input. We show that the developmental characteristics of Dutch children's speech (with respect to optional infinitives) are a natural consequence of MOSAIC's learning mechanisms and the gradual increase in the length of the utterances it produces. In contrast with nativist approaches to syntax acquisition, the present model does not assume large amounts of innate knowledge in the child, and provides a quantitative process account of the development of optional infinitives.
A computer model of chess memory
Chess research provides rich data for testing computational models of human memory. This paper presents a model which shares several concepts with an earlier attempt (Simon & Gilmartin, 1973), but features several new attributes: dynamic short-term memory, recursive chunking, more sophisticated perceptual mechanisms, and use of a retrieval structure (Chase & Ericsson, 1982). Simulations of data from three experiments are presented: (1) differential recall of random and game positions; (2) recall of several boards presented in short succession; (3) recall of positions modified by mirror-image reflection about various axes. The model fits the data reasonably well, although some empirical phenomena are not captured by it. At a theoretical level, the conceptualization of the internal representation and its relation with the retrieval structure needs further refinement.
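The first of those experimental effects, better recall of game positions than of random ones, follows directly from chunk-based recall, which the following toy Python sketch illustrates. The chunks, capacity, and representation are my own illustrative assumptions, not the model's: recall is limited to pieces covered by familiar chunks held in a capacity-limited short-term memory.

```python
# Toy illustration (my own construction) of chunk-based recall.
# A chunk is a familiar configuration of (piece, square) pairs.
learned_chunks = [
    frozenset({("pawn", "e4"), ("pawn", "d4")}),
    frozenset({("knight", "f3"), ("bishop", "c4")}),
    frozenset({("king", "g1"), ("rook", "f1")}),
]

STM_CAPACITY = 2  # number of chunks that fit in short-term memory

def recall(position):
    """Recall the pieces covered by recognised chunks, up to STM capacity."""
    recognised = [c for c in learned_chunks if c <= position][:STM_CAPACITY]
    return set().union(*recognised) if recognised else set()

# A game-like position is covered by familiar chunks...
game_pos = {("pawn", "e4"), ("pawn", "d4"), ("knight", "f3"),
            ("bishop", "c4"), ("king", "g1"), ("rook", "f1")}
# ...while a random shuffle of the same pieces matches no chunk.
random_pos = {("pawn", "e4"), ("knight", "c4"), ("king", "d4"),
              ("rook", "f3"), ("bishop", "g1"), ("pawn", "f1")}
```

Here `recall(game_pos)` recovers four pieces while `recall(random_pos)` recovers none: the same pieces, differently arranged, fail to activate any stored chunk.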
- …