
    Evidence for parallel consolidation of motion direction and orientation into visual short-term memory

    Recent findings have indicated that the capacity to consolidate multiple items into visual short-term memory in parallel varies as a function of the type of information: while color can be consolidated in parallel, evidence suggests that orientation cannot. Here we investigated the capacity to consolidate multiple motion directions in parallel and reexamined this capacity for orientation. This was achieved by determining the shortest exposure duration necessary to consolidate a single item, then examining whether two items, presented simultaneously, could be consolidated in that time. The results show that parallel consolidation of direction and orientation information is possible, and that parallel consolidation of direction appears to be limited to two items. Additionally, we demonstrate the importance of adequate separation between the feature intervals used to define items when attempting to consolidate in parallel, suggesting that when multiple items are consolidated in parallel, as opposed to serially, the resolution of the representations suffers. Finally, we used facilitation of spatial attention to show that the deterioration of item resolution occurs during parallel consolidation, as opposed to storage. This work was supported by an Australian Postgraduate Award to R. R., an NHMRC Early Career Fellowship (1054726) to D. A., and an Australian Research Council Grant (DP110104553) to M. E.
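    The key methodological step here is estimating the shortest exposure duration that supports consolidation of a single item. The abstract does not describe the exact procedure, so the Python sketch below is only a generic illustration of how such a threshold is commonly estimated: a 2-down/1-up adaptive staircase, in which trial_fn, the step size, and the simulated observer are all hypothetical.

        def run_staircase(trial_fn, start_ms=200, step_ms=10, n_reversals=8):
            """Estimate the shortest exposure duration (ms) at which a single
            item is reliably consolidated, using a 2-down/1-up staircase.
            trial_fn(duration_ms) runs one trial and returns True if the item
            was reported correctly (hypothetical interface)."""
            duration, streak, direction = start_ms, 0, -1
            reversals = []
            while len(reversals) < n_reversals:
                if trial_fn(duration):
                    streak += 1
                    if streak == 2:                  # two correct -> shorter exposure
                        streak = 0
                        if direction == +1:          # direction flipped: a reversal
                            reversals.append(duration)
                        direction = -1
                        duration = max(step_ms, duration - step_ms)
                else:                                # one error -> longer exposure
                    streak = 0
                    if direction == -1:
                        reversals.append(duration)
                    direction = +1
                    duration += step_ms
            return sum(reversals) / len(reversals)   # mean of reversal points

        # Toy simulated observer: consolidation succeeds above ~90 ms.
        import random
        threshold = run_staircase(lambda ms: random.random() < (0.95 if ms > 90 else 0.25))
        print(f"estimated consolidation threshold: {threshold:.0f} ms")

    A 2-down/1-up rule converges near the 70.7%-correct point; this is one common choice, not necessarily the criterion used in the study.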

    The cost of parallel consolidation into visual working memory

    A growing body of evidence indicates that information can be consolidated into visual working memory in parallel. Initially, it was suggested that color information could be consolidated in parallel while orientation was strictly limited to serial consolidation (Liu & Becker, 2013). However, we recently found evidence suggesting that both orientation and motion direction items can be consolidated in parallel, with different levels of accuracy (Rideaux, Apthorp, & Edwards, 2015). Here we examine whether there is a cost associated with parallel consolidation of orientation and direction information by comparing performance, in terms of precision and guess rate, on a target recall task in which items are presented either sequentially or simultaneously. The results compellingly indicate that motion direction can be consolidated in parallel, but the evidence for orientation is less conclusive. Further, we find that there is a twofold cost associated with parallel consolidation of direction: the probability of failing to consolidate one (or both) items increases, and the precision with which representations are encoded is reduced. Additionally, we find evidence indicating that the increased consolidation failure may be due to interference between items presented simultaneously and is moderated by item similarity. These findings suggest that a biased competition model may explain differences in parallel consolidation between features.
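    Performance "in terms of precision and guess rate" points to the standard mixture-model analysis of recall errors (Zhang & Luck, 2008), in which errors are modelled as a von Mises distribution (noisy but successful encoding) mixed with a uniform distribution (guesses). The paper's actual fitting routine is not given, so the Python sketch below is a generic maximum-likelihood version; fit_mixture, its starting values, and its bounds are assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import i0   # modified Bessel function of order 0

        def mixture_nll(params, errors):
            """Negative log-likelihood of recall errors (radians, in [-pi, pi])
            under a von Mises + uniform mixture: g = guess rate, kappa = precision."""
            g, kappa = params
            von_mises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * i0(kappa))
            uniform = 1 / (2 * np.pi)
            return -np.sum(np.log((1 - g) * von_mises + g * uniform))

        def fit_mixture(errors):
            """Estimate guess rate and precision from an array of recall errors."""
            result = minimize(mixture_nll, x0=[0.1, 5.0], args=(errors,),
                              bounds=[(1e-6, 1 - 1e-6), (1e-3, 100.0)])
            return {"guess_rate": result.x[0], "kappa": result.x[1]}

    In this framework, the twofold cost described above would appear as a higher fitted guess rate (more consolidation failures) and a lower kappa (reduced precision) for simultaneous than for sequential presentation.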

    Adaptation to binocular anticorrelation results in increased neural excitability

    Throughout the brain, information from individual sources converges onto higher order neurons. For example, information from the two eyes first converges in binocular neurons in area V1. Some neurons appear tuned to similarities between sources of information, which makes intuitive sense in a system striving to match multiple sensory signals to a single external cause, i.e., establish causal inference. However, there are also neurons that are tuned to dissimilar information. In particular, some binocular neurons respond maximally to a dark feature in one eye and a light feature in the other. Despite compelling neurophysiological and behavioural evidence supporting the existence of these neurons (Cumming & Parker, 1997; Janssen, Vogels, Liu, & Orban, 2003; Katyal, Vergeer, He, He, & Engel, 2018; Kingdom, Jennings, & Georgeson, 2018; Tsao, Conway, & Livingstone, 2003), their function has remained opaque. To determine how neural mechanisms tuned to dissimilarities support perception, here we use electroencephalography to measure human observers’ steady-state visually evoked potentials (SSVEPs) in response to changes in depth after prolonged viewing of anticorrelated and correlated random-dot stereograms (RDS). We find that adaptation to anticorrelated RDS results in larger SSVEPs, while adaptation to correlated RDS has no effect. These results are consistent with recent theoretical work suggesting that ‘what not’ neurons play a suppressive role in supporting stereopsis (Goncalves & Welchman, 2017); that is, selective adaptation of neurons tuned to binocular mismatches reduces suppression, resulting in increased neural excitability. This work was supported by the Leverhulme Trust (ECF-2017-573 to R. R.), the Isaac Newton Trust (17.08(o) to R. R.), and the Wellcome Trust (095183/Z/10/Z to A. E. W. and 206495/Z/17/Z to E. M.).
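    The dependent measure is the SSVEP amplitude at the stimulation frequency. The abstract does not describe the analysis pipeline, but a minimal, generic way to extract such an amplitude from a single-channel EEG epoch is to read off the Fourier amplitude at the tagging frequency; the function below and its example arguments (sampling rate, a 2 Hz tagging frequency) are assumptions for illustration.

        import numpy as np

        def ssvep_amplitude(eeg, fs, stim_freq):
            """Fourier amplitude at the stimulation frequency for a 1-D EEG
            epoch. Assumes the epoch spans a whole number of stimulus cycles,
            so the response falls exactly on one frequency bin."""
            spectrum = np.fft.rfft(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
            bin_idx = np.argmin(np.abs(freqs - stim_freq))
            return 2.0 * np.abs(spectrum[bin_idx]) / len(eeg)

        # Hypothetical comparison across adaptation conditions:
        # amp_anti = ssvep_amplitude(epoch_anticorrelated, fs=1000, stim_freq=2.0)
        # amp_corr = ssvep_amplitude(epoch_correlated, fs=1000, stim_freq=2.0)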

    Temporal synchrony is an effective cue for grouping and segmentation in the absence of form cues

    The synchronous change of a feature across multiple discrete elements, i.e., temporal synchrony, has been shown to be a powerful cue for grouping and segmentation. This has been demonstrated with both static and dynamic stimuli for a range of tasks. However, in addition to temporal synchrony, stimuli in previous research have included other cues that can also facilitate grouping and segmentation, such as good continuation and coherent spatial configuration. To evaluate the effectiveness of temporal synchrony for grouping and segmentation in isolation, here we measure signal detection thresholds using a global-Gabor stimulus in the presence/absence of a synchronous event. We also examine the impact of the spatial proximity of the to-be-grouped elements on the effectiveness of temporal synchrony, and the duration for which elements are bound together following a synchronous event in the absence of further segmentation cues. The results show that temporal synchrony (in isolation) is an effective cue for grouping local elements together to extract a global signal. Further, we find that the effectiveness of temporal synchrony as a cue for segmentation is modulated by the spatial proximity of signal elements. Finally, we demonstrate that following a synchronous event, elements are perceptually bound together for an average duration of 200 ms.
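    Signal detection thresholds of this kind are typically estimated by fitting a psychometric function to performance across signal strengths. The Python sketch below fits a cumulative-Gaussian function to illustrative data; the chance level, lapse rate, and data values are all invented for the example, not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(signal_prop, alpha, beta, gamma=0.5, lam=0.02):
            """Cumulative-Gaussian psychometric function for a detection task:
            alpha = threshold, beta = slope, gamma = chance level, lam = lapse rate."""
            return gamma + (1 - gamma - lam) * norm.cdf(signal_prop, loc=alpha, scale=beta)

        # Invented example data: proportion of signal elements vs. proportion correct.
        x = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
        p_correct = np.array([0.55, 0.65, 0.80, 0.93, 0.97])
        (alpha, beta), _ = curve_fit(psychometric, x, p_correct, p0=[0.2, 0.1])
        print(f"signal threshold: {alpha:.2f}")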

    The cost of parallel processing in the human visual system

    Our environment is visually rich, containing a multitude of objects that can be defined by many different features, e.g. shape, colour, and motion. To navigate and interact with the environment, we must process this information efficiently. The human visual system can process information either serially or in parallel. While there is a clear time-saving benefit to parallel processing, its cost is less well understood. Consequently, the aim of this thesis is to address three key theoretical questions underlying the cost of parallel processing.

    The first aim was to determine how the capacity of parallel processing varies as a function of the detail of information extraction. Previous research has demonstrated that brief presentations of five and six motion signals can be differentiated, suggesting that up to five signals can be processed simultaneously. However, it is unclear how much information is being extracted, i.e. whether observers are extracting direction information from all five signals. To examine this, we presented observers with multiple moving objects and evaluated their parallel processing capacity as a function of the information required to perform the task. We found that the resolution of parallel motion processing varies as a function of the information that is extracted; specifically, as information extraction becomes more detailed, the capacity to process multiple signals is reduced.

    The second aim was to investigate whether there is a cost to the fidelity of information that is processed in parallel. Previous research suggests that there may not be a cost associated with parallel consolidation of information from sensory to visual short-term memory (VSTM). Here we examined this by first determining that motion direction, and possibly orientation, can be consolidated in parallel, then explicitly evaluating the cost to the fidelity of information consolidated in parallel, compared to serially. We found that there is a twofold cost associated with parallel consolidation: a reduction in the resolution of encoded items due to spreading of spatial attention, and an increase in the likelihood of consolidation failure due to interference between items.

    The third aim was to examine whether the cost associated with parallel processing can ultimately explain its capacity. We extended our previous findings regarding the cost associated with parallel consolidation to examine whether the capacity of parallel consolidation results from biased competition, the same mechanism proposed to account for spatial attention and VSTM storage, as evidenced by the interference between items presented simultaneously. This was achieved by demonstrating that parallel consolidation performance is influenced by factors predicted by a biased competition model. Furthermore, we found evidence suggesting that the capacity may be as high as three items, with increasingly poorer resolution and higher consolidation failure rates.

    Together, these results demonstrate that a) parallel processing is limited by the complexity of the information to be processed, b) there is a twofold cost to processing information in parallel, and c) increasing the amount of information processed in parallel increases this cost to the fidelity of the information, which ultimately determines the capacity of this process.

    How multisensory neurons solve causal inference

    Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction ("congruent" neurons), while others prefer opposing directions ("opposite" neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
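    The abstract does not specify the network architecture, but the essential ingredients are two population-coded inputs (visual and vestibular motion signals) converging on a shared hidden layer, whose units can afterwards be classified as "congruent" or "opposite" from their tuning. A minimal PyTorch sketch under those assumptions (the layer sizes and single-output readout are illustrative, not the authors' model):

        import torch
        import torch.nn as nn

        class CausalInferenceNet(nn.Module):
            """Feedforward network mapping visual + vestibular population
            responses to a self-motion estimate (illustrative architecture)."""
            def __init__(self, n_input=100, n_hidden=64):
                super().__init__()
                self.hidden = nn.Linear(2 * n_input, n_hidden)  # both cues converge here
                self.readout = nn.Linear(n_hidden, 1)           # self-motion estimate

            def forward(self, visual, vestibular):
                h = torch.relu(self.hidden(torch.cat([visual, vestibular], dim=-1)))
                return self.readout(h), h   # hidden activity returned to inspect tuning

        # After training, each hidden unit can be labelled congruent or opposite by
        # comparing its preferred visual and vestibular motion directions.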