
    Noise in neural populations accounts for errors in working memory.

    Errors in short-term memory increase with the quantity of information stored, limiting the complexity of cognition and behavior. In visual memory, attempts to account for errors in terms of allocation of a limited pool of working memory resources have met with some success, but the biological basis for this cognitive architecture is unclear. An alternative perspective attributes recall errors to noise in tuned populations of neurons that encode stimulus features in spiking activity. I show that errors associated with decreasing signal strength in probabilistically spiking neurons reproduce the pattern of failures in human recall under increasing memory load. In particular, deviations from the normal distribution that are characteristic of working memory errors and have been attributed previously to guesses or variability in precision are shown to arise as a natural consequence of decoding populations of tuned neurons. Observers possess fine control over memory representations and prioritize accurate storage of behaviorally relevant information, at a cost to lower priority stimuli. I show that changing the input drive to neurons encoding a prioritized stimulus biases population activity in a manner that reproduces this empirical tradeoff in memory precision. In a task in which predictive cues indicate stimuli most probable for test, human observers use the cues in an optimal manner to maximize performance, within the constraints imposed by neural noise.
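The account above rests on decoding noisy spikes from feature-tuned neurons. A minimal sketch of that idea, assuming von Mises tuning curves, Poisson spiking, and a maximum-likelihood decoder (illustrative choices, not the paper's exact model or parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def population_decode(stimulus, gain, n_neurons=64, kappa=2.0):
    """Encode a circular feature in Poisson spike counts from von Mises
    tuning curves, then decode it by maximum likelihood."""
    prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    rates = gain * np.exp(kappa * (np.cos(stimulus - prefs) - 1))
    spikes = rng.poisson(rates)
    # Log-likelihood of each candidate stimulus value given the spikes
    # (terms constant across candidates are dropped)
    grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
    log_tuning = kappa * (np.cos(grid[:, None] - prefs) - 1)
    loglik = spikes @ log_tuning.T - gain * np.exp(log_tuning).sum(axis=1)
    return grid[np.argmax(loglik)]

# Recall error grows, and its distribution develops heavy tails, as gain
# (signal strength) falls -- e.g. when a fixed spiking budget is divided
# among more memory items.
for gain in (50.0, 5.0, 0.5):
    errors = [population_decode(0.0, gain) for _ in range(500)]
    print(f"gain={gain:5.1f}  error SD ~ {np.std(errors):.2f} rad")
```

At low gain the decoded errors deviate from a normal distribution in just the way the abstract describes, without any explicit guessing process.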

    Distinct neural mechanisms underlie the success, precision, and vividness of episodic memory

    A network of brain regions has been linked with episodic memory retrieval, but limited progress has been made in identifying the contributions of distinct parts of the network. Here, we utilized continuous measures of retrieval to dissociate three components of episodic memory: retrieval success, precision, and vividness. In the fMRI scanner, participants encoded objects that varied continuously on three features: color, orientation, and location. Participants' memory was tested by having them recreate the appearance of the object features using a continuous dial, and continuous vividness judgments were recorded. Retrieval success, precision, and vividness were dissociable both behaviorally and neurally: successful versus unsuccessful retrieval was associated with hippocampal activity, retrieval precision scaled with activity in the angular gyrus, and vividness judgments tracked activity in the precuneus. The ability to dissociate these components of episodic memory reveals the benefit afforded by measuring memory on a continuous scale, allowing functional parcellation of the retrieval network. James S McDonnell Foundation Scholar Award, Medical Research Council, Wellcome Trust, Economic and Social Research Council.

    Dynamic Updating of Working Memory Resources for Visual Objects

    Recent neurophysiological and imaging studies have investigated how neural representations underlying working memory (WM) are dynamically updated for objects presented sequentially. Although such studies implicate information encoded in oscillatory activity across distributed brain networks, interpretation of findings depends crucially on the underlying conceptual model of how memory resources are distributed. Here, we quantify the fidelity of human memory for sequences of colored, oriented stimuli. The precision with which each orientation was recalled declined with increases in total memory load, but also depended on when in the sequence it appeared. When one item was prioritized, its recall was enhanced, but with corresponding decrements in precision for other objects. Comparison with the same number of items presented simultaneously revealed an additional performance cost for sequential display that could not be explained by temporal decay. Memory precision was lower for sequential compared with simultaneous presentation, even when each item in the sequence was presented at a different location. Importantly, stochastic modeling established that this cost for sequential display was due to misbinding object features (color and orientation). These results support the view that WM resources can be dynamically and flexibly updated as new items have to be stored, but redistribution of resources with the addition of new items is associated with misbinding object features, providing important constraints and a framework for interpreting neural data.
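The stochastic modeling referred to above is commonly implemented as a mixture model over recall errors. A sketch under that assumption, with von Mises responses around the target, swaps to nontarget items, and uniform guessing; the component structure and parameter values here are illustrative, not the paper's fitted model:

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def wrap(x):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * x))

def neg_log_lik(params, responses, target, nontargets):
    """Three-component mixture: von Mises around the target, swaps to
    nontarget items, and uniform guessing."""
    kappa, p_t, p_n = params
    p_g = 1 - p_t - p_n
    if kappa <= 0 or min(p_t, p_n, p_g) < 0:
        return np.inf
    lik = (p_t * vonmises.pdf(wrap(responses - target), kappa)
           + p_n * vonmises.pdf(wrap(responses[:, None] - nontargets),
                                kappa).mean(axis=1)
           + p_g / (2 * np.pi))
    return -np.log(lik).sum()

# Simulate responses: 70% target reports, 20% swaps, 10% guesses
rng = np.random.default_rng(1)
n, target, nontargets = 1000, 0.5, np.array([-1.5, 2.0])
comp = rng.choice(3, n, p=[0.7, 0.2, 0.1])
centres = np.where(comp == 0, target,
                   np.where(comp == 1, rng.choice(nontargets, n), 0.0))
responses = wrap(centres + np.where(comp == 2,
                                    rng.uniform(-np.pi, np.pi, n),
                                    rng.vonmises(0.0, 8.0, n)))
fit = minimize(neg_log_lik, x0=[4.0, 0.6, 0.2],
               args=(responses, target, nontargets), method="Nelder-Mead")
kappa_hat, p_t_hat, p_n_hat = fit.x
print(f"kappa~{kappa_hat:.1f}  p(target)~{p_t_hat:.2f}  p(swap)~{p_n_hat:.2f}")
```

In this framework, a rise in the fitted swap probability for sequential versus simultaneous displays is what identifies the cost as misbinding rather than decay.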

    Fidelity of the representation of value in decision-making

    The ability to make optimal decisions depends on evaluating the expected rewards associated with different potential actions. This process is critically dependent on the fidelity with which reward value information can be maintained in the nervous system. Here we directly probe the fidelity of value representation following a standard reinforcement learning task. The results demonstrate a previously unrecognized bias in the representation of value: extreme reward values, both low and high, are stored significantly more accurately and precisely than intermediate rewards. The symmetry between low and high rewards persisted despite a substantially higher frequency of exposure to high rewards, resulting from preferential exploitation of more rewarding options. The observed variation in fidelity of value representation retrospectively predicted performance on the reinforcement learning task, demonstrating that the bias in representation has an impact on decision-making. A second experiment in which one or other extreme-valued option was omitted from the learning sequence showed that representational fidelity is primarily determined by the relative position of an encoded value on the scale of rewards experienced during learning. Both variability and guessing decreased with the reduction in the number of options, consistent with allocation of a limited representational resource. These findings have implications for existing models of reward-based learning, which typically assume error-free representation of reward value. This research was supported by the Wellcome Trust (grant number 106926 to PMB).
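As a rough illustration of the kind of "standard reinforcement learning task" referred to above, here is a delta-rule learner on a four-armed bandit with softmax choice; the reward values and parameters are assumptions for illustration, not the study's design. Preferential exploitation of the best option reproduces the unequal exposure to high rewards noted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

rewards = np.array([0.1, 0.4, 0.6, 0.9])   # low ... high value options
Q = np.full(4, 0.5)                         # learned value estimates
alpha, beta = 0.2, 5.0                      # learning rate, inverse temperature

for trial in range(200):
    p = np.exp(beta * Q) / np.exp(beta * Q).sum()   # softmax choice policy
    a = rng.choice(4, p=p)
    r = rewards[a] + rng.normal(0, 0.05)            # noisy reward feedback
    Q[a] += alpha * (r - Q[a])                      # delta-rule update

print("learned values:", np.round(Q, 2))
```

Probing the precision of the stored `Q` values after learning, as the study does behaviorally, is what reveals whether extreme values are represented more faithfully than intermediate ones.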

    Neural Architecture for Feature Binding in Visual Working Memory

    Binding refers to the operation that groups different features together into objects. We propose a neural architecture for feature binding in visual working memory that employs populations of neurons with conjunction responses. We tested this model using cued recall tasks, in which subjects had to memorize object arrays composed of simple visual features (color, orientation, and location). After a brief delay, one feature of one item was given as a cue, and the observer had to report, on a continuous scale, one or two other features of the cued item. Binding failure in this task is associated with swap errors, in which observers report an item other than the one indicated by the cue. We observed that the probability of swapping two items strongly correlated with the items' similarity in the cue feature dimension, and found a strong correlation between swap errors occurring in spatial and nonspatial report. The neural model explains both swap errors and response variability as results of decoding noisy neural activity, and can account for the behavioral results in quantitative detail. We then used the model to compare alternative mechanisms for binding nonspatial features. We found the behavioral results fully consistent with a model in which nonspatial features are bound exclusively via their shared location, with no indication of direct binding between color and orientation. These results provide evidence for a special role of location in feature binding, and the model explains how this special role could be realized in the neural system. This work was supported by the Wellcome Trust.
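A toy sketch of conjunction coding, assuming neurons jointly tuned to location and color with Poisson spiking and a cue-gated population-vector readout; all names and parameters here are illustrative choices, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

def report_from_cue(cue_loc, items, gain=20.0, kappa=2.0, n=24):
    """Neurons jointly tuned to location and color fire for every stored
    item; a location cue gates the readout and a population vector gives
    the reported color."""
    locs = np.linspace(-np.pi, np.pi, n, endpoint=False)
    cols = np.linspace(-np.pi, np.pi, n, endpoint=False)
    L, C = np.meshgrid(locs, cols, indexing="ij")   # (n, n) tuning grid
    rates = np.zeros_like(L)
    for item_loc, item_col in items:                # superpose stored items
        rates += gain * np.exp(kappa * (np.cos(item_loc - L) - 1)
                               + kappa * (np.cos(item_col - C) - 1))
    spikes = rng.poisson(rates)
    # Weight spikes by each neuron's location tuning at the cued location,
    # then take the circular mean of preferred colors (population vector).
    w = np.exp(kappa * (np.cos(cue_loc - L) - 1))
    return np.angle((spikes * w * np.exp(1j * C)).sum())

# Two items as (location, color) pairs, well separated in location
items = [(-1.0, 0.5), (1.0, -2.0)]
print("report for cue at -1.0:", round(report_from_cue(-1.0, items), 2))
```

When the two stored locations are moved closer together, the gating weights overlap and reports are increasingly drawn toward the other item's color, reproducing the dependence of swap errors on cue-feature similarity.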

    Automatic and intentional influences on saccade landing

    Saccadic eye movements enable us to rapidly direct our high-resolution fovea onto relevant parts of the visual world. However, while we can intentionally select a location as a saccade target, the wider visual scene also influences our executed movements. In the presence of multiple objects, eye movements may be "captured" to the location of a distractor object, or be biased toward the intermediate position between objects (the "global effect"). Here we examined how the relative strengths of the global effect and visual object capture changed with saccade latency, the separation between visual items, and stimulus contrast. Importantly, while many previous studies have omitted giving observers explicit instructions, we instructed participants to either saccade to a specified target object or to the midpoint between two stimuli. This allowed us to examine how their explicit movement goal influenced the likelihood that their saccades terminated at either the target, distractor, or intermediate locations. Using a probabilistic mixture model, we found evidence that both visual object capture and the global effect co-occurred at short latencies and declined as latency increased. As object separation increased, capture came to dominate the landing positions of fast saccades, with reduced global effect. Using the mixture model fits, we dissociated the proportion of unavoidably captured saccades to each location from those intentionally directed to the task goal. From this we could extract the time course of competition between automatic capture and intentional targeting. We show that task instructions substantially altered the distribution of saccade landing points, even at the shortest latencies. NEW & NOTEWORTHY: When making an eye movement to a target location, the presence of a nearby distractor can cause the saccade to unintentionally terminate at the distractor itself or the average position in between stimuli. With probabilistic mixture models, we quantified how both unavoidable capture and goal-directed targeting were influenced by changing the task and the target-distractor separation. Using this novel technique, we could extract the time course over which automatic and intentional processes compete for control of saccades. This work was supported by the Wellcome Trust.
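A schematic version of the probabilistic mixture model described above, with Gaussian landing distributions at the target, the distractor, and their midpoint; the simulated proportions, positions, and spread are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def neg_log_lik(params, landings, target, distractor):
    """Mixture of Gaussian landing distributions at the target, the
    distractor (capture), and their midpoint (the global effect)."""
    p_t, p_d, sigma = params
    p_g = 1 - p_t - p_d
    if min(p_t, p_d, p_g) < 0 or sigma <= 0:
        return np.inf
    mid = (target + distractor) / 2
    lik = (p_t * norm.pdf(landings, target, sigma)
           + p_d * norm.pdf(landings, distractor, sigma)
           + p_g * norm.pdf(landings, mid, sigma))
    return -np.log(lik).sum()

# Simulated landing positions (degrees): mostly on-target, some captured
# by the distractor, some at the intermediate global-effect position.
rng = np.random.default_rng(4)
target, distractor, n = 0.0, 6.0, 800
comp = rng.choice(3, n, p=[0.6, 0.15, 0.25])
means = np.array([target, distractor, (target + distractor) / 2])
landings = rng.normal(means[comp], 0.8)
fit = minimize(neg_log_lik, x0=[0.4, 0.3, 1.0],
               args=(landings, target, distractor), method="Nelder-Mead")
p_t_hat, p_d_hat, sigma_hat = fit.x
print(f"p(target)~{p_t_hat:.2f}  p(capture)~{p_d_hat:.2f}  sigma~{sigma_hat:.2f}")
```

Fitting such a model separately at each saccade-latency bin is one way the time course of capture versus intentional targeting can be extracted from landing-point distributions.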

    Modulation of somatosensory processing by action.

    Psychophysical evidence suggests that sensations arising from our own movements are diminished when predicted by motor forward models and that these models may also encode the timing and intensity of movement. Here we report a functional magnetic resonance imaging study in which the effects on sensation of varying the occurrence, timing and force of movements were measured. We observed that tactile-related activity in a region of secondary somatosensory cortex is reduced when sensation is associated with movement and further that this reduction is maximal when movement and sensation occur synchronously. Motor force is not represented in the degree of attenuation but rather in the magnitude of this region's response. These findings provide neurophysiological correlates of previously observed behavioural forward-model phenomena, and support the use of this approach for studying clinical conditions in which forward-model deficits have been posited to play a crucial role.

    Contralateral manual compensation for velocity-dependent force perturbations

    It is not yet clear how the temporal structure of a voluntary action is encoded to allow coordinated bimanual responses. This study focuses on the adaptation to and compensation for a force profile presented to one stationary arm which is proportional to the velocity of the other, moving arm. We hypothesised that subjects would exhibit predictive coordinative responses that would co-vary with the state of the moving arm. Our null hypothesis was that they would develop a time-dependent template of forces appropriate to compensate for the imposed perturbation. Subjects were trained to make 500 ms duration reaching movements with their dominant right arm to a visual target. A force generated with a robotic arm that was proportional to the velocity of the moving arm and perpendicular to movement direction acted on their stationary left hand, either at the same time as the movement or delayed by 250 or 500 ms. Subjects rapidly learnt to minimise the final end-point error. In the delay conditions, the left hand moved in advance of the onset of the perturbing force. In test conditions with faster or slower movement of the right hand, the predictive actions of the left hand co-varied with movement speed. Compensation for movement-related forces appeared to be predictive but not based on an accurate force profile that was equal and opposite to the imposed perturbation.