4 research outputs found

    Visual Search Without Selective Attention: A Cognitive Architecture Account

    A key phenomenon in visual search experiments is the linear relation of reaction time (RT) to the number of objects to be searched (set size). The dominant theory of visual search claims that this is a result of covert selective attention operating sequentially to “bind” visual features into objects, and that this mechanism operates differently depending on the nature of the search task and the visual features involved, causing the slope of RT as a function of set size to range from zero to large values. However, the cognitive architectural model presented here shows that these effects on RT in three different search task conditions can be easily obtained from basic visual mechanisms, eye movements, and simple task strategies; no selective attention mechanism is needed. In addition, there are little-explored effects of visual crowding, which is typically confounded with set size in visual search experiments. Including a simple mechanism for crowding in the model also allows it to account for significant effects on error rate (ER). The resulting model shows the interaction between visual mechanisms and task strategy, and thus represents a more comprehensive and fruitful approach to visual search than the dominant theory.

    Visual Search without Selective Attention calls into question the necessity of a covert selective attention mechanism by implementing a formal model that includes basic visual mechanisms, saccades, and simple task strategies. Across three search tasks, the model accounts for response times as well as the proportion of errors observed in human participants, including effects of item crowding in the visual stimulus.

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/147754/1/tops12406.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/147754/2/tops12406_am.pd
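    As an aside, using standard notation not taken from the abstract above, the linear set-size effect it describes is conventionally summarized as

        $RT(n) = a + b \cdot n$

    where $a$ is the base reaction time, $b$ is the per-item slope (near zero for "efficient" search, large for "inefficient" search), and $n$ is the set size. The competing accounts differ in which mechanism produces the slope $b$: sequential covert attention in the dominant theory, versus eye movements and task strategy in the architectural model.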

    The persistent visual store as the locus of fixation memory in visual search tasks

    No full text
    Experiments on visual search have demonstrated the existence of a relatively large and reliable memory for which objects have been fixated; one indication of this memory is that revisits (fixations on previously fixated objects) typically comprise only about 5% of fixations. Any cognitive architecture that supports visual search must account for where such a memory resides in the system and how it can be used to guide eye movements in visual search. This paper presents a simple solution for the EPIC architecture that is consistent with the overall requirements for modeling visually-intensive tasks and other visual memory phenomena.

    Proceedings of the 11th International Conference on Cognitive Modeling: ICCM 2012

    The International Conference on Cognitive Modeling (ICCM) is the premier conference for research on computational models and computation-based theories of human behavior. ICCM is a forum for presenting, discussing, and evaluating the complete spectrum of cognitive modeling approaches, including connectionism, symbolic modeling, dynamical systems, Bayesian modeling, and cognitive architectures. ICCM includes basic and applied research across a wide variety of domains, ranging from low-level perception and attention to higher-level problem solving and learning. Online version published by Universitätsverlag der TU Berlin (www.univerlag.tu-berlin.de).