
    Applying multi-criteria optimisation to develop cognitive models

    A scientific theory is developed by modelling empirical data in a range of domains. The goal of developing a theory is to optimise the fit of the theory to as many experimental settings as possible, whilst retaining some qualitative properties such as 'parsimony' or 'comprehensibility'. We formalise the task of developing theories of human cognition as a problem in multi-criteria optimisation. There are many challenges in this task, including the representation of competing theories, coordinating the fit with multiple experiments, and bringing together competing results to provide suitable theories. Experiments demonstrate the development of a theory of categorisation, using multiple optimisation criteria in genetic algorithms to locate Pareto-optimal sets.
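
    The Pareto-optimal (non-dominated) set mentioned above can be illustrated with a short sketch. The Python fragment below uses hypothetical criteria and scores, not the paper's data: the genetic algorithm would generate candidate theories, and a dominance filter like this one keeps only those not outperformed on every criterion at once.

```python
# A minimal sketch (hypothetical scores, not the paper's data): keep only the
# candidate theories that are not dominated on every optimisation criterion.

def dominates(a, b):
    """True if score vector a is at least as good as b on every criterion
    and strictly better on at least one (all criteria are maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated candidates; each candidate is (name, scores)."""
    return [(name, scores) for name, scores in candidates
            if not any(dominates(other, scores)
                       for _, other in candidates if other is not scores)]

# Hypothetical criteria: fit to experiment 1, fit to experiment 2, parsimony
# (negated parameter count), all to be maximised.
candidates = [
    ("theory_A", (0.90, 0.70, -12)),
    ("theory_B", (0.85, 0.80, -8)),
    ("theory_C", (0.70, 0.60, -20)),  # dominated by both A and B
]
print(pareto_front(candidates))  # -> theory_A and theory_B
```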

    Towards a model of expectation-driven perception

    Human perception is an active process by which meaningful information is gathered from the external environment. Application areas such as human-computer interaction (HCI) or the role of human experts in image analysis highlight the need to understand how humans, especially experts, use prior information when interpreting what they see. Here, we describe how CHREST, a model of expert perception, is currently being extended to support expectation-driven perception of bitmap-level image data, focusing particularly on its ability to learn semantic interpretations.

    CHREST tutorial: Simulations of human learning

    CHREST (Chunk Hierarchy and REtrieval STructures) is a comprehensive computational model of human learning and perception. It has been used to successfully simulate data in a variety of domains, including the acquisition of syntactic categories, expert behaviour, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. The aim of this tutorial is to provide participants with an introduction to CHREST, show how it can be used to model various phenomena, and give them the knowledge to carry out their own modelling experiments.

    Developing reproducible and comprehensible computational models

    Quantitative predictions for complex scientific theories are often obtained by running simulations on computational models. In order for a theory to meet with widespread acceptance, it is important that the model be reproducible and comprehensible by independent researchers. However, the complexity of computational models can make the task of replication all but impossible. Previous authors have suggested that computer models should be developed using high-level specification languages or large amounts of documentation. We argue that neither suggestion is sufficient, as each deals with the prescriptive definition of the model and does not aid in generalising the use of the model to new contexts. Instead, we argue that a computational model should be released as three components: (a) a well-documented implementation; (b) a set of tests illustrating each of the key processes within the model; and (c) a set of canonical results, for reproducing the model’s predictions in important experiments. The included tests and experiments would provide the concrete exemplars required for easier comprehension of the model, as well as a confirmation that independent implementations and later versions reproduce the theory’s canonical results.
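
    To make the three-component proposal concrete, here is a minimal sketch of components (b) and (c) using Python's unittest module. The tiny stand-in model, the tests, and the "canonical" value are all illustrative assumptions, not taken from the paper; they only show how process-level tests and canonical-result checks might be packaged alongside an implementation.

```python
# A minimal sketch of components (b) and (c) described above. TinyModel is a
# stand-in defined here only to keep the example self-contained; it is not any
# real cognitive model, and the "canonical" value is illustrative.
import unittest

class TinyModel:
    """Stand-in for component (a), the documented implementation."""
    def __init__(self):
        self.chunks = []

    def learn(self, item):
        if item not in self.chunks:
            self.chunks.append(item)

    def chunk_count(self):
        return len(self.chunks)

# Component (c): canonical results the model must keep reproducing.
CANONICAL_RESULTS = {"chunks_after_two_distinct_items": 2}

class TestKeyProcesses(unittest.TestCase):
    """Component (b): one test per key process within the model."""
    def test_learning_adds_a_chunk(self):
        m = TinyModel()
        m.learn(("red", "square"))
        self.assertEqual(m.chunk_count(), 1)

    def test_repeated_items_are_not_duplicated(self):
        m = TinyModel()
        m.learn(("red", "square"))
        m.learn(("red", "square"))
        self.assertEqual(m.chunk_count(), 1)

class TestCanonicalResults(unittest.TestCase):
    """Component (c): reproduce the published predictions."""
    def test_canonical_chunk_count(self):
        m = TinyModel()
        m.learn(("red", "square"))
        m.learn(("blue", "circle"))
        self.assertEqual(m.chunk_count(),
                         CANONICAL_RESULTS["chunks_after_two_distinct_items"])

if __name__ == "__main__":
    unittest.main()
```

    Running such a suite against an independent re-implementation, or a later version of the model, would then confirm that both the key processes and the canonical results are preserved.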

    A distributed framework for semi-automatically developing architectures of brain and mind

    Developing comprehensive theories of low-level neuronal brain processes and high-level cognitive behaviours, as well as integrating them, is an ambitious challenge that requires new conceptual, computational, and empirical tools. Given the complexities of these theories, they will almost certainly be expressed as computational systems. Here, we propose to use recent developments in grid technology to develop a system of evolutionary scientific discovery, which will (a) enable empirical researchers to make their data widely available for use in developing and testing theories, and (b) enable theorists to semi-automatically develop computational theories. We illustrate these ideas with a case study taken from the domain of categorisation.

    Multi-task learning and transfer: The effect of algorithm representation

    Searching across multiple classes of learning algorithms for those which perform best on multiple tasks is a complex problem of multi-criteria optimisation. We use a genetic algorithm to locate sets of models which are not outperformed on all of the tasks. The genetic algorithm develops a population containing multiple types of learning algorithm, with competition between individuals of different types. We find that inherent differences in the convergence time and performance levels of the different algorithms lead to misleading population effects. We explore the effect that the algorithm representation and the initial population have on task performance. Our findings suggest that separating the representations of the different algorithms is beneficial in enhancing performance, and that initial seeding is required to avoid premature convergence to non-optimal classes of algorithms.
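
    The seeding point can be illustrated with a short sketch. The fragment below uses hypothetical algorithm classes and parameters: each class carries its own representation, and the initial population is seeded with equal numbers of each class so that differences in convergence speed do not eliminate one class before the search has properly explored it.

```python
# A minimal sketch of the seeding idea, with hypothetical algorithm classes and
# class-specific genomes: the initial population is forced to contain every
# class in equal measure, so a slowly converging class is not lost early.
import random

ALGORITHM_CLASSES = ["decision_tree", "neural_net"]  # hypothetical labels

def random_genome(algo_class):
    """Each class keeps its own representation (its own parameter set)."""
    if algo_class == "decision_tree":
        return {"class": algo_class, "max_depth": random.randint(1, 10)}
    return {"class": algo_class,
            "hidden_units": random.randint(2, 64),
            "learning_rate": random.uniform(0.001, 0.1)}

def seeded_population(size):
    """Seed equal numbers of each class rather than sampling classes at random."""
    per_class = size // len(ALGORITHM_CLASSES)
    population = [random_genome(c)
                  for c in ALGORITHM_CLASSES
                  for _ in range(per_class)]
    while len(population) < size:  # top up if size is not an exact multiple
        population.append(random_genome(random.choice(ALGORITHM_CLASSES)))
    return population

if __name__ == "__main__":
    pop = seeded_population(20)
    counts = {c: sum(g["class"] == c for g in pop) for c in ALGORITHM_CLASSES}
    print(counts)  # roughly equal representation of both classes
```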

    Simple environments fail as illustrations of intelligence: A review of R. Pfeifer and C. Scheier

    The field of cognitive science has always supported a variety of modes of research, often polarised into those seeking high-level explanations of intelligence and those seeking low-level, perhaps even neuro-physiological, explanations. Each of these research directions permits, at least in part, a similar methodology based around the construction of detailed computational models, which justify their explanatory claims by matching behavioural data. We are fortunate at this time to witness the culmination of several decades of work from each of these research directions, and hopefully to find within them the basic ideas behind a complete theory of human intelligence. It is in this spirit that Rolf Pfeifer and Christian Scheier have written their book Understanding Intelligence. However, their aim is manifestly not to present an overview of all prior work in this field, but instead to argue forcefully for one particular interpretation: a synthetic approach, based around the explicit construction of autonomous agents. This approach is characterised by the Embodiment Hypothesis, which is presented as a complete framework for investigating intelligence, and exemplified by a number of computational models and robots that illustrate just how the field of cognitive science might develop in the future. We first provide an overview of their book, before describing some of our reservations about its contribution towards an understanding of intelligence.

    EPAM/CHREST tutorial: Fifty years of simulating human learning

    Generating quantitative predictions for complex cognitive phenomena requires precise implementations of the underlying cognitive theory. This tutorial focuses on the EPAM/CHREST tradition, which has been producing significant models of human behaviour for 50 years.

    An investigation into the effect of ageing on expert memory with CHREST

    CHREST is a cognitive architecture that models human perception, learning, memory, and problem solving, and which has successfully simulated a wide range of human experimental data on chess. In this paper, we describe an investigation into the effects of ageing on expert memory using CHREST. The results of the simulations are related to the literature on ageing. The study illustrates how Computational Intelligence can be used to understand complex phenomena that are affected by multiple variables evolving dynamically over time, and that have direct practical implications for human societies.

    Learning perceptual schemas to avoid the utility problem

    This paper describes principles for representing and organising planning knowledge in a machine learning architecture. One of the difficulties with learning about tasks requiring planning is the utility problem: as more knowledge is acquired by the learner, the utilisation of that knowledge takes on a complexity which overwhelms the mechanisms of the original task. This problem does not, however, occur with human learners: on the contrary, it is usually the case that the more knowledgeable the learner, the greater the efficiency and accuracy in locating a solution. The reason for this lies in the types of knowledge acquired by the human learner and in its organisation. We describe the basic representations which underlie the superior abilities of human experts, and present algorithms for using equivalent representations in a machine learning architecture.
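
    The abstract does not spell out the algorithms, but the utility-problem argument can be illustrated with a small sketch in the spirit of the EPAM/CHREST discrimination net. Everything below (class names, the chess-like feature sequences) is a hypothetical illustration: the point is only that indexed, chunk-like storage keeps retrieval cost tied to the length of the presented pattern rather than to the total amount of acquired knowledge.

```python
# A minimal sketch, not the paper's algorithm: chunk-like knowledge stored in a
# discrimination tree, so retrieval cost grows with the length of the presented
# pattern rather than with the total number of stored chunks. The chess-like
# feature sequences are hypothetical examples.

class Node:
    def __init__(self):
        self.children = {}  # feature -> child Node
        self.chunk = None   # complete pattern stored at this node, if any

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def learn(self, pattern):
        """Extend the net one feature at a time along the pattern."""
        node = self.root
        for feature in pattern:
            node = node.children.setdefault(feature, Node())
        node.chunk = tuple(pattern)

    def retrieve(self, pattern):
        """Return the largest stored chunk matching a prefix of the pattern."""
        node, best = self.root, None
        for feature in pattern:
            node = node.children.get(feature)
            if node is None:
                break
            if node.chunk is not None:
                best = node.chunk
        return best

if __name__ == "__main__":
    net = DiscriminationNet()
    net.learn(("pawn", "e4"))
    net.learn(("pawn", "e4", "knight", "f3"))
    print(net.retrieve(("pawn", "e4", "knight", "f3", "bishop", "c4")))
    # -> ('pawn', 'e4', 'knight', 'f3'); adding further chunks elsewhere in
    #    the net does not slow this lookup down.
```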