The role of falsification in the development of cognitive architectures: insights from a Lakatosian analysis
It has been suggested that the enterprise of developing mechanistic theories of the human cognitive architecture is flawed because the theories produced are not directly falsifiable. Newell attempted to sidestep this criticism by arguing for a Lakatosian model of scientific progress in which cognitive architectures should be understood as theories that develop over time. However, Newell’s own candidate cognitive architecture adhered only loosely to Lakatosian principles. This paper reconsiders the role of falsification and the potential utility of Lakatosian principles in the development of cognitive architectures. It is argued that a lack of direct falsifiability need not undermine the scientific development of a cognitive architecture if broadly Lakatosian principles are adopted. Moreover, it is demonstrated that the Lakatosian concepts of positive and negative heuristics for theory development and of general heuristic power offer methods for guiding the development of an architecture and for evaluating the contribution and potential of an architecture’s research program.
The Timing of the Cognitive Cycle
We propose that human cognition consists of cascading cycles of recurring brain
events. Each cognitive cycle senses the current situation, interprets it with
reference to ongoing goals, and then selects an internal or external action in
response. While most aspects of the cognitive cycle are unconscious, each cycle
also yields a momentary “ignition” of conscious broadcasting.
Neuroscientists have independently proposed ideas similar to the cognitive
cycle, the fundamental hypothesis of the LIDA model of cognition. High-level
cognition, such as deliberation, planning, etc., is typically enabled by
multiple cognitive cycles. In this paper we describe a timing model of LIDA's
cognitive cycle. Based on empirical and simulation data, we propose that an
initial phase of perception (stimulus recognition) occurs 80–100 ms from
stimulus onset under optimal conditions. It is followed by a conscious episode
(broadcast) 200–280 ms after stimulus onset, and an action selection phase
60–110 ms from the start of the conscious phase. One cognitive cycle would
therefore take 260–390 ms. The LIDA timing model is consistent with brain
evidence indicating a fundamental role for a theta-gamma wave, spreading forward
from sensory cortices to rostral corticothalamic regions. This posteriofrontal
theta-gamma wave may be experienced as a conscious perceptual event starting at
200–280 ms post stimulus. The action selection component of the cycle is
proposed to involve frontal, striatal and cerebellar regions. Thus the cycle is
inherently recurrent, as the anatomy of the thalamocortical system suggests. The
LIDA model fits a large body of cognitive and neuroscientific evidence. Finally,
we describe two LIDA-based software agents: the LIDA Reaction Time agent that
simulates human performance in a simple reaction time task, and the LIDA Allport
agent which models phenomenal simultaneity within timeframes comparable to human
subjects. While there are many models of reaction time performance, these
results fall naturally out of a biologically and computationally plausible
cognitive architecture.
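The phase timings above compose straightforwardly: the cycle ends when action selection (60–110 ms) completes after the onset of the conscious broadcast (200–280 ms post-stimulus), which yields the quoted 260–390 ms total. A minimal sketch of that arithmetic, using only the ranges reported in the abstract (the dictionary and helper names are ours):

```python
# Phase boundaries as reported in the LIDA timing model (all in ms).
PHASES_MS = {
    "perception (from stimulus onset)": (80, 100),
    "broadcast (from stimulus onset)": (200, 280),
    "action selection (from broadcast onset)": (60, 110),
}

def cycle_duration_ms():
    """Total cycle = broadcast onset + action-selection duration.
    Perception overlaps the pre-broadcast interval, so it does not
    add to the total."""
    b_lo, b_hi = PHASES_MS["broadcast (from stimulus onset)"]
    a_lo, a_hi = PHASES_MS["action selection (from broadcast onset)"]
    return (b_lo + a_lo, b_hi + a_hi)

print(cycle_duration_ms())  # (260, 390)
```

The lower bound (200 + 60 = 260 ms) and upper bound (280 + 110 = 390 ms) reproduce the cycle duration stated in the abstract.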
Explicit learning in ACT-R
A popular distinction in the learning literature is the distinction between implicit and explicit learning. Although many studies elaborate on the nature of implicit learning, little attention is paid to explicit learning. The unintentional aspect of implicit learning corresponds well to the mechanistic view of learning employed in architectures of cognition. But how to account for deliberate, intentional, explicit learning? This chapter argues that explicit learning can be explained by strategies that exploit implicit learning mechanisms. This idea is explored and modelled using the ACT-R theory (Anderson, 1993). An explicit strategy for learning facts in ACT-R's declarative memory is rehearsal, a strategy that uses ACT-R's activation learning mechanisms to gain deliberate control over what is learned. In the same sense, strategies for explicit procedural learning are proposed. Procedural learning in ACT-R involves generalisation of examples. Explicit learning rules can create and manipulate these examples. An example of these explicit rules will be discussed. These rules are general enough to be able to model the learning of three different tasks. Furthermore, the last of these models can explain the difference between adults and children in the discrimination-shift task.
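The core idea, that rehearsal is an explicit strategy riding on an implicit mechanism, can be illustrated with ACT-R's standard base-level learning equation, B = ln(Σ t_j^-d), where t_j is the time since presentation j and d is the decay rate (conventionally 0.5). Rehearsing a fact simply adds presentations, so the same implicit activation mechanism yields a deliberately boosted activation. A sketch under those standard assumptions (the scenario timings are illustrative):

```python
import math

def base_level_activation(presentation_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum of t**-d over presentations),
    where t is the time since each presentation and d is the decay rate."""
    return math.log(sum((now - t) ** -decay for t in presentation_times))

# Implicit learning alone: a fact encoded once, 100 time units ago.
once = base_level_activation([0.0], now=100.0)

# Explicit rehearsal: the same fact deliberately re-presented three times,
# exploiting the same activation mechanism to keep it retrievable.
rehearsed = base_level_activation([0.0, 30.0, 60.0, 90.0], now=100.0)

assert rehearsed > once  # rehearsal raises activation, hence retrievability
```

Nothing in the mechanism distinguishes a rehearsal from an ordinary encounter with the fact; the explicitness lies entirely in the strategy that schedules the extra presentations.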
Progress towards Automated Human Factors Evaluation
Cao, S. (2015). Progress towards Automated Human Factors Evaluation. 6th International Conference on Applied Human Factors and Ergonomics (AHFE 2015) and the Affiliated Conferences, AHFE 2015, 3, 4266–4272. https://doi.org/10.1016/j.promfg.2015.07.414
This work is made available through a CC-BY-NC-ND 4.0 license. The licensor is not represented as endorsing the use made of this work. https://creativecommons.org/licenses/by-nc-nd/4.0/
Human factors tests are important components of systems design. Designers need to evaluate users’ performance and workload while using a system and compare different design options to determine the optimal design choice. Currently, human factors evaluation and tests mainly rely on empirical user studies, which add a heavy cost to the design process. In addition, it is difficult to conduct comprehensive user tests at early design stages when no physical interfaces have been implemented. To address these issues, I develop computational human performance modeling techniques that can simulate users’ interaction with machine systems. This method uses a general cognitive architecture to computationally represent human cognitive capabilities and constraints. Task-specific models can be built with the specifications of user knowledge, user strategies, and user group differences. The simulation results include performance measures such as task completion time and error rate as well as workload measures. Completed studies have modeled multitasking scenarios in a wide range of domains, including transportation, healthcare, and human-computer interaction. The success of these studies demonstrated the modeling capabilities of this method. Cognitive-architecture-based models are useful, but building a cognitive model itself can be difficult to learn and master. It usually requires at least medium-level programming skills to understand and use the language and syntaxes that specify the task. For example, to build a model that simulates a driving task, a modeler needs to build a driving simulation environment so that the model can interact with the simulated vehicle.
To simplify this process, I have conducted preliminary programming work that directly connects the mental model to existing task environment simulation programs. The model will be able to directly obtain perceptual information from the task program and send control commands to the task program. With cognitive model-based tools, designers will be able to see the model performing the tasks in real-time and obtain a report of the evaluation. Automated human factors evaluation methods have tremendous value to support systems design and evaluation.
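The coupling described above reduces to a perceive-decide-act loop in which the cognitive model reads percepts from, and writes control commands to, an existing simulation rather than a hand-built environment. A minimal sketch of that loop; the `TaskEnvironment` interface and all names here are illustrative assumptions, not the author's actual API:

```python
class TaskEnvironment:
    """Stand-in for an existing task simulation (e.g. a driving simulator).
    The real program would expose percepts and accept control commands."""
    def __init__(self, steps):
        self.steps = steps
        self.t = 0
    def percept(self):
        return {"time": self.t}
    def apply(self, command):
        self.t += 1  # advance the simulation one tick per command
    def done(self):
        return self.t >= self.steps

def run_model(env, choose_action):
    """Perceive-decide-act loop: the cognitive model drives the existing
    simulation directly, so no bespoke environment must be re-implemented."""
    cycles = 0
    while not env.done():
        percept = env.percept()          # perceptual info from the task program
        env.apply(choose_action(percept))  # control command back to the task program
        cycles += 1
    return cycles  # e.g. one input to a task-completion-time report

cycles = run_model(TaskEnvironment(steps=5), choose_action=lambda p: "noop")
print(cycles)  # 5
```

In a real tool, `choose_action` would be the cognitive-architecture model, and the returned measures (completion time, errors, workload) would feed the automated evaluation report.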