No fixed item limit in visuospatial working memory.
Investigations of working memory capacity in the visual domain have converged on the concept of a limited supply of a representational medium, flexibly distributed between objects. Current debate centers on whether this medium is continuous, or quantized into 2 or 3 memory "slots". The latter model makes the strong prediction that, if an item in memory is probed, behavioral parameters will plateau when the number of items equals or exceeds the number of slots. Here we examine short-term memory for object location using a two-dimensional pointing task. We show that recall variability for items in memory increases monotonically from 1 to 8 items. Using a novel method to isolate only those trials on which a participant correctly identifies the target, we show that response latency also increases monotonically from 1 to 8 items. We argue that both these findings are incompatible with a quantized model.
Wellcome Trust (Grant ID: 106926). This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.cortex.2016.07.02
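The contrast between the two accounts can be sketched numerically. In the minimal sketch below (all parameter values are illustrative, not taken from the paper), a "slots+averaging" model predicts that recall SD plateaus once set size reaches the slot count k, while a continuous-resource model that divides precision among items predicts monotonic growth:

```python
import numpy as np

def sd_slots(n, k=3, sd_one=5.0):
    """Slots+averaging sketch (illustrative): precision per item scales
    with the number of slots it receives (k/n for n <= k, then 1 each),
    so response SD rises up to set size k and plateaus thereafter."""
    return sd_one * np.sqrt(min(n, k))

def sd_resource(n, sd_one=5.0):
    """Continuous-resource sketch (illustrative): precision (1/variance)
    is shared across all n items, so SD keeps growing with no plateau."""
    return sd_one * np.sqrt(n)

for n in range(1, 9):
    print(n, round(sd_slots(n), 2), round(sd_resource(n), 2))
```

A monotonic rise in recall SD from 1 to 8 items, as reported above, matches the resource curve and contradicts the plateau predicted by the slot curve.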
Stochastic sampling provides a unifying account of visual working memory limits.
Research into human working memory limits has been shaped by the competition between different formal models, with a central point of contention being whether internal representations are continuous or discrete. Here we describe a sampling approach derived from principles of neural coding as a framework to understand working memory limits. Reconceptualizing existing models in these terms reveals strong commonalities between seemingly opposing accounts, but also allows us to identify specific points of difference. We show that the discrete versus continuous nature of sampling is not critical to model fits, but that, instead, random variability in sample counts is the key to reproducing human performance in both single- and whole-report tasks. A probabilistic limit on the number of items successfully retrieved is an emergent property of stochastic sampling, requiring no explicit mechanism to enforce it. These findings resolve discrepancies between previous accounts and establish a unified computational framework for working memory that is compatible with neural principles.
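The emergent item limit can be illustrated with a toy simulation (Poisson-distributed sample counts and the pool size are assumptions made for illustration): as a stochastic pool of samples is spread over more items, the probability that a probed item drew zero samples, and so cannot be retrieved, rises smoothly even though no slot cap appears anywhere in the code:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval_prob(n_items, mean_total=8.0, trials=20000):
    """Toy stochastic-sampling sketch (illustrative parameters): each item
    receives a Poisson number of samples whose mean is the total budget
    divided by set size; an item with zero samples is not retrievable."""
    samples = rng.poisson(mean_total / n_items, size=trials)
    return float(np.mean(samples > 0))

for n in (1, 2, 4, 8, 16):
    print(n, retrieval_prob(n))
```

The retrieval limit here is probabilistic, mirroring the claim in the abstract: it is an emergent property of variability in sample counts, not an enforced maximum.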
Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations.
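The reward-driven Hebbian component can be illustrated with a minimal three-factor update (weight change = learning rate x reward x presynaptic x postsynaptic activity). This is a toy sketch, not the dynamic field model itself; the two-stimulus/two-action task, the reward mapping, and all parameters are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reward-gated Hebbian rule (three-factor): two stimuli, two actions.
# Weights bias which action wins a noisy competition; the Hebbian update
# is gated by the reward outcome, so arbitrary associations are learned.
W = np.full((2, 2), 0.5)     # W[action, stimulus], illustrative init
eta = 0.2                    # learning rate (assumed)
rewarded = {0: 1, 1: 0}      # stimulus -> rewarded action (assumed mapping)

for trial in range(200):
    s = int(rng.integers(2))
    x = np.zeros(2); x[s] = 1.0            # stimulus input (presynaptic)
    drive = W @ x + 0.05 * rng.normal(size=2)
    a = int(np.argmax(drive))              # competitive action selection
    y = np.zeros(2); y[a] = 1.0            # action activity (postsynaptic)
    r = 1.0 if a == rewarded[s] else -1.0  # reward signal gates learning
    W += eta * r * np.outer(y, x)          # three-factor Hebbian update
    W = np.clip(W, 0.0, 1.0)

print(np.argmax(W[:, 0]), np.argmax(W[:, 1]))  # learned action per stimulus
```

After training, the strongest weight for each stimulus points to its rewarded action, which is the sense in which plasticity of the association weights shapes subsequent choice behavior.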
Drift in Neural Population Activity Causes Working Memory to Deteriorate Over Time.
Short-term memories are thought to be maintained in the form of sustained spiking activity in neural populations. Decreases in recall precision observed with increasing number of memorized items can be accounted for by a limit on total spiking activity, resulting in fewer spikes contributing to the representation of each individual item. Longer retention intervals likewise reduce recall precision, but it is unknown what changes in population activity produce this effect. One possibility is that spiking activity becomes attenuated over time, such that the same mechanism accounts for both effects of set size and retention duration. Alternatively, reduced performance may be caused by drift in the encoded value over time, without a decrease in overall spiking activity. Human participants of either sex performed a variable-delay cued recall task with a saccadic response, providing a precise measure of recall latency. Based on a spike integration model of decision making, if the effects of set size and retention duration are both caused by decreased spiking activity, we would predict a fixed relationship between recall precision and response latency across conditions. In contrast, the drift hypothesis predicts no systematic changes in latency with increasing delays. Our results show both an increase in latency with set size, and a decrease in response precision with longer delays within each set size, but no systematic increase in latency for increasing delay durations. These results were quantitatively reproduced by a model based on a limited neural resource in which working memories drift rather than decay with time. SIGNIFICANCE STATEMENT: Rapid deterioration over seconds is a defining feature of short-term memory, but what mechanism drives this degradation of internal representations? Here, we extend a successful population coding model of working memory by introducing possible mechanisms of delay effects. We show that a decay in neural signal over time predicts that the time required for memory retrieval will increase with delay, whereas a random drift in the stored value predicts no effect of delay on retrieval time. Testing these predictions in a multi-item memory task with an eye movement response, we identified drift as a key mechanism of memory decline. These results provide evidence for a dynamic spiking basis for working memory, in contrast to recent proposals of activity-silent storage.
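The two candidate mechanisms make cleanly separable toy predictions. In the sketch below (the diffusion constant, firing rate, and decision threshold are illustrative assumptions, not values from the study), a random walk in the stored value makes error SD grow with the square root of the delay while leaving spike-integration latency untouched; only a drop in firing rate would lengthen the time to reach threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

def recall_error_sd(delay_s, diffusion=2.0, trials=50000):
    """Drift sketch (illustrative): the stored value performs a random
    walk, so recall error SD grows like sqrt(delay) while spiking, and
    hence retrieval latency, is unchanged."""
    err = rng.normal(0.0, np.sqrt(diffusion * delay_s), size=trials)
    return float(err.std())

def latency_to_threshold(rate_hz, threshold=20.0):
    """Spike-integration latency sketch: time for evidence accumulating
    at rate_hz to reach a fixed threshold. Under a decay account the
    rate falls with delay and latency rises; under drift it is fixed."""
    return threshold / rate_hz

print(recall_error_sd(1.0), recall_error_sd(4.0))  # error grows with delay
print(latency_to_threshold(40.0))                  # delay-independent
```

The reported pattern, lower precision at longer delays but no latency increase, is the signature of the first function changing with delay while the second does not.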
Scene memory and spatial inhibition in visual search: A neural dynamic process model and new experimental evidence
Any object-oriented action requires that the object be first brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistently neural process account of FIT in both its dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
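The core ingredient of dynamic field theory, a population whose local excitation and broader inhibition let a localized activation peak stabilize itself, can be sketched in a few lines. All parameters below are illustrative assumptions; the sketch reproduces only the self-sustained working-memory peak, not the full search architecture:

```python
import numpy as np

# Minimal 1D Amari-style dynamic field (illustrative parameters):
# tau * du/dt = -u + h + input + conv(kernel, f(u)),
# with local excitation and global inhibition, so a stimulus-induced
# activation peak can persist after the stimulus is removed.
n, tau, h, dt = 101, 10.0, -2.0, 1.0
x = np.arange(n)
kernel = 4.0 * np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2) - 1.0  # exc. - inh.
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))                   # sigmoid rate

u = np.full(n, h)                                  # field at resting level
stim = 5.0 * np.exp(-0.5 * ((x - 50) / 3.0) ** 2)  # localized input

for t in range(300):
    s = stim if t < 100 else 0.0                   # stimulus removed at t=100
    conv = np.convolve(f(u), kernel, mode="same")  # lateral interaction
    u += dt / tau * (-u + h + s + conv)

print(u.max() > 0)  # peak self-sustains after stimulus removal
```

The persistence of the peak after input offset is the stable activation state that the architecture above couples into sequences of processing steps.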
Location-independent feature binding in visual working memory for sequentially presented objects
Spatial location is believed to have a privileged role in binding features held in visual working memory. Supporting this view, Pertzov and Husain (Attention, Perception, & Psychophysics, 76(7), 1914–1924, 2014) reported that recall of bindings between visual features was selectively impaired when items were presented sequentially at the same location compared to sequentially at different locations. We replicated their experiment, but additionally tested whether the observed impairment could be explained by perceptual interference during encoding. Participants viewed four oriented bars in highly discriminable colors presented sequentially either at the same or different locations, and after a brief delay were cued with one color to reproduce the associated orientation. When we used the same timing as the original study, we reproduced its key finding of impaired binding memory in the same-location condition. Critically, however, this effect was significantly modulated by the duration of the inter-stimulus interval, and disappeared if memoranda were presented with longer delays between them. In a second experiment, we tested whether the effect generalized to other visual features, namely reporting of colors cued by stimulus shape. While we found performance deficits in the same-location condition, these did not selectively affect binding memory. We argue that the observed effects are best explained by encoding interference, and that memory for feature binding is not necessarily impaired when memoranda share the same location.
Comparison of Depth Buffer Techniques for Large and Detailed 3D Scenes
Large scale 3D scenes in applications like space simulations are often subject to depth buffer related issues and visual artefacts like Z-fighting and spatial jittering. These issues are primarily a result of indistinguishable depth buffer values. To mitigate these issues, many techniques have been developed over time to better distribute depth values over the clipping range. These techniques range from simple adjustments of the projection matrix to complex solutions like multistage rendering with layered depth buffers. This work presents, compares and evaluates commonly used approaches found in literature and real-world applications. An experiment is set up to compare the presented depth buffer techniques using the metric of minimum triangle separation (MTS). The gathered results are presented and evaluated to give a good overview of which techniques are well suited for use in applications with large-scale 3D scenes.
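The precision problem, and the benefit of one of the simpler remedies, reversed-Z, can be illustrated numerically. The near/far planes and distances below are arbitrary example values, and the sketch models only float32 quantization of the stored depth (in practice reversed-Z also requires a floating-point depth buffer and a [0, 1] clip range), not a full rendering comparison:

```python
import numpy as np

def ndc_depth_standard(d, n=0.1, f=1.0e6):
    """Conventional OpenGL-style [-1, 1] perspective depth for an
    eye-space distance d: values crowd toward 1 for distant geometry,
    exactly where float32 has the least resolution near that range."""
    return np.float32((f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d))

def ndc_depth_reversed(d, n=0.1):
    """Reversed-Z with an infinite far plane: depth = n / d. Distant
    geometry maps near 0, where float32 spacing is extremely fine."""
    return np.float32(n / d)

# Two surfaces 1 m apart at 100 km: distinguishable only under reversed-Z.
d1, d2 = 100_000.0, 100_001.0
print(ndc_depth_standard(d1) == ndc_depth_standard(d2))  # depths collide
print(ndc_depth_reversed(d1) == ndc_depth_reversed(d2))  # still distinct
```

This collision of stored depth values is exactly the kind of indistinguishability that an MTS-style metric quantifies: the minimum separation at which two triangles still receive distinct depths.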