Developing and Evaluating Cognitive Architectures with Behavioural Tests
http://www.aaai.org/Press/Reports/Workshops/ws-07-04.php

We present a methodology for developing and evaluating cognitive architectures based on behavioural tests and suitable optimisation algorithms. Behavioural tests are used to clarify those aspects of an architecture's implementation which are critical to that theory. By fitting the performance of the architecture to observed behaviour, values for the architecture's parameters can be automatically obtained, and information can be derived about how components of the architecture relate to performance. Finally, with an appropriate optimisation algorithm, different cognitive architectures can be evaluated, and their performances compared on multiple tasks.

Peer reviewed
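The parameter-fitting step described in this abstract can be sketched as a simple search over candidate parameter values, choosing the value whose simulated performance best matches observed behaviour. This is a minimal illustration, not the authors' implementation: the model function, its single parameter, and the observed score are all invented here.

```python
# Hedged sketch: fit a model parameter to observed behaviour by
# minimising the discrepancy between simulated and observed scores.
# `model_performance` and OBSERVED are hypothetical stand-ins.

def model_performance(param: float) -> float:
    """Toy stand-in for running a cognitive model on a behavioural test."""
    return 1.0 - (param - 0.6) ** 2  # performance peaks when param == 0.6

OBSERVED = 1.0  # hypothetical observed human performance on the test

def fit_parameter(candidates):
    """Return the candidate whose simulated performance best matches
    the observed behaviour (smallest absolute discrepancy)."""
    return min(candidates, key=lambda p: abs(model_performance(p) - OBSERVED))

best = fit_parameter([i / 100 for i in range(101)])
print(round(best, 2))  # → 0.6
```

A real architecture would replace `model_performance` with a full simulation run, and a grid search with one of the optimisation algorithms the paper discusses.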
Applying multi-criteria optimisation to develop cognitive models
A scientific theory is developed by modelling empirical data in a range of domains. The goal of developing a theory is to optimise the fit of the theory to as many experimental settings as possible, whilst retaining some qualitative properties such as 'parsimony' or 'comprehensibility'. We formalise the task of developing theories of human cognition as a problem in multi-criteria optimisation. There are many challenges in this task, including the representation of competing theories, coordinating the fit with multiple experiments, and bringing together competing results to provide suitable theories. Experiments demonstrate the development of a theory of categorisation, using multiple optimisation criteria in genetic algorithms to locate Pareto-optimal sets.
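The Pareto-optimal sets mentioned above are the models not outperformed on every criterion by any other model. A minimal sketch of that dominance test, assuming fitness vectors where higher is better on each criterion (the example vectors are invented):

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b: at least as good on
    every criterion, strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated members of a population of fitness vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical fitness vectors: (fit to experiment A, fit to experiment B)
models = [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
print(pareto_front(models))  # → [(0.9, 0.2), (0.5, 0.5), (0.2, 0.9)]
```

Only (0.4, 0.4) is dominated (by (0.5, 0.5)); the remaining models trade fit to one experiment against fit to the other, which is the trade-off the genetic algorithm explores.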
Towards a model of expectation-driven perception
Human perception is an active process by which meaningful information is gathered from the external environment. Application areas such as human-computer interaction (HCI), or the role of human experts in image analysis, highlight the need to understand how humans, especially experts, use prior information when interpreting what they see. Here, we describe how CHREST, a model of expert perception, is currently being extended to support expectation-driven perception of bitmap-level image data, focusing particularly on its ability to learn semantic interpretations.
The CHREST architecture of cognition: the role of perception in general intelligence
Original paper can be found at: http://www.atlantis-press.com/publications/aisr/AGI-10/

Copyright Atlantis Press. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits non-commercial use, distribution and reproduction in any medium, provided the original work is properly cited.

This paper argues that the CHREST architecture of cognition can shed important light on developing artificial general intelligence. The key theme is that "cognition is perception." The description of the main components and mechanisms of the architecture is followed by a discussion of several domains where CHREST has already been successfully applied, such as the psychology of expert behaviour, the acquisition of language by children, and the learning of multiple representations in physics. The characteristics of CHREST that enable it to account for empirical data include: self-organisation, an emphasis on cognitive limitations, the presence of a perception-learning cycle, and the use of naturalistic data as input for learning. We argue that some of these characteristics can help shed light on the hard questions facing theorists developing artificial general intelligence, such as intuition, the acquisition and use of concepts, and the role of embodiment.
EPAM/CHREST tutorial: Fifty years of simulating human learning
Generating quantitative predictions for complex cognitive phenomena requires precise implementations of the underlying cognitive theory. This tutorial focuses on the EPAM/CHREST tradition, which has been providing significant models of human behaviour for 50 years.
A Methodology for Developing Computational Implementations of Scientific Theories
"This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder."

"Copyright IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE."

Computer programs have become a popular representation for scientific theories, particularly for implementing models or simulations of observed phenomena. Expressing a theory as an executable computer program provides many benefits, including: making all processes concrete, supporting the development of specific models, and hence enabling quantitative predictions to be derived from the theory. However, as implementations of scientific theories, these computer programs will be subject to change and modification. As programs change, their behaviour will also change, and ensuring continuity in the scientific value of the program is difficult. We propose a methodology for developing computer software implementing scientific theories. This methodology allows the developer to continuously change and extend their software, whilst alerting the developer to any changes in its scientific interpretation. We introduce tools for managing this development process, as well as for optimising the developed models.
CHREST tutorial: Simulations of human learning
CHREST (Chunk Hierarchy and REtrieval STructures) is a comprehensive, computational model of human learning and perception. It has been used to successfully simulate data in a variety of domains, including: the acquisition of syntactic categories, expert behaviour, concept formation, implicit learning, and the acquisition of multiple representations in physics for problem solving. The aim of this tutorial is to provide participants with an introduction to CHREST, how it can be used to model various phenomena, and the knowledge to carry out their own modelling experiments.
Developing reproducible and comprehensible computational models
Quantitative predictions for complex scientific theories are often obtained by running simulations on computational models. In order for a theory to meet with widespread acceptance, it is important that the model be reproducible and comprehensible by independent researchers. However, the complexity of computational models can make the task of replication all but impossible. Previous authors have suggested that computer models should be developed using high-level specification languages or large amounts of documentation. We argue that neither suggestion is sufficient, as each deals with the prescriptive definition of the model, and does not aid in generalising the use of the model to new contexts. Instead, we argue that a computational model should be released as three components: (a) a well-documented implementation; (b) a set of tests illustrating each of the key processes within the model; and (c) a set of canonical results, for reproducing the model's predictions in important experiments. The included tests and experiments would provide the concrete exemplars required for easier comprehension of the model, as well as a confirmation that independent implementations and later versions reproduce the theory's canonical results.
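Components (b) and (c) of this proposal can be sketched as executable checks: a process test exercising one key mechanism, and a canonical result pinned as an assertion. The model function and its plateau value below are hypothetical stand-ins, not taken from any published model.

```python
# Hedged sketch of a released model's test suite: one process test
# and one canonical-result test. `chunk_count_after_training` and the
# plateau value of 50 are invented for illustration.

def chunk_count_after_training(n_trials: int) -> int:
    """Toy stand-in for querying a model after a simulated experiment."""
    return min(n_trials, 50)  # hypothetical capacity limit

def test_learning_is_monotonic():
    """Process test: more training never yields fewer chunks."""
    counts = [chunk_count_after_training(n) for n in range(0, 300, 50)]
    assert counts == sorted(counts)

def test_canonical_plateau():
    """Canonical result: the toy model saturates at 50 chunks."""
    assert chunk_count_after_training(200) == 50

test_learning_is_monotonic()
test_canonical_plateau()
print("canonical results reproduced")
```

An independent re-implementation that passes the same suite has, by construction, reproduced the theory's canonical results.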
A distributed framework for semi-automatically developing architectures of brain and mind
Developing comprehensive theories of low-level neuronal brain processes and high-level cognitive behaviours, as well as integrating them, is an ambitious challenge that requires new conceptual, computational, and empirical tools. Given the complexities of these theories, they will almost certainly be expressed as computational systems. Here, we propose to use recent developments in grid technology to develop a system of evolutionary scientific discovery, which will (a) enable empirical researchers to make their data widely available for use in developing and testing theories, and (b) enable theorists to semi-automatically develop computational theories. We illustrate these ideas with a case study taken from the domain of categorisation.
Multi-task learning and transfer: The effect of algorithm representation
Searching multiple classes of learning algorithms for those which perform best across multiple tasks is a complex problem of multi-criteria optimisation. We use a genetic algorithm to locate sets of models which are not outperformed on all of the tasks. The genetic algorithm develops a population of multiple types of learning algorithms, with competition between individuals of different types. We find that inherent differences in the convergence time and performance levels of the different algorithms lead to misleading population effects. We explore the role that the algorithm representation and initial population have on task performance. Our findings suggest that separating the representation of different algorithms is beneficial in enhancing performance. Also, initial seeding is required to avoid premature convergence to non-optimal classes of algorithms.
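The initial seeding mentioned in this abstract can be sketched as guaranteeing every algorithm class at least one representative in the starting population, so no class is eliminated before it has had time to converge. The class names below are placeholders, not the classes studied in the paper.

```python
import random

random.seed(0)  # deterministic for the sketch

# Hypothetical algorithm classes competing in one GA population.
CLASSES = ["perceptron", "decision_tree", "nearest_neighbour"]

def seeded_population(size):
    """Build an initial population with at least one individual per
    class, then fill the remaining slots at random."""
    pop = [{"cls": c, "fitness": 0.0} for c in CLASSES]
    while len(pop) < size:
        pop.append({"cls": random.choice(CLASSES), "fitness": 0.0})
    return pop

pop = seeded_population(10)
print(sorted({ind["cls"] for ind in pop}))
# every class is represented, regardless of the random fill
```

Without the guaranteed seeds, a class with slow early convergence could vanish from the population purely through sampling, producing the misleading population effects the abstract describes.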