15 research outputs found

    The role of sensory uncertainty in simple contour integration

    Perceptual organization is the process of grouping scene elements into whole entities. A classic example is contour integration, in which separate line segments are perceived as continuous contours. Uncertainty in such grouping arises from scene ambiguity and sensory noise. Some classic Gestalt principles of contour integration, and more broadly of perceptual organization, have been re-framed in terms of Bayesian inference, whereby the observer computes the probability that the whole entity is present. Previous studies that proposed a Bayesian interpretation of perceptual organization, however, have ignored sensory uncertainty, despite the fact that accounting for the current level of perceptual uncertainty is one of the main signatures of Bayesian decision making. Crucially, trial-by-trial manipulation of sensory uncertainty is a key test of whether humans perform near-optimal Bayesian inference in contour integration, as opposed to using some manifestly non-Bayesian heuristic. We distinguish between these hypotheses in a simplified form of contour integration, namely judging whether two line segments separated by an occluder are collinear. We manipulate sensory uncertainty by varying retinal eccentricity. A Bayes-optimal observer would take the level of sensory uncertainty into account, in a very specific way, in deciding whether a measured offset between the line segments is due to non-collinearity or to sensory noise. We find that people deviate slightly but systematically from Bayesian optimality, while still performing "probabilistic computation" in the sense that they take sensory uncertainty into account via a heuristic rule. Our work contributes to an understanding of the role of sensory uncertainty in higher-order perception.

    Author summary: Our percept of the world is governed not only by the sensory information we have access to, but also by the way we interpret this information. When presented with a visual scene, our visual system groups visual elements together to form coherent entities so that we can interpret the scene more readily and meaningfully. For example, when looking at a pile of autumn leaves, one can still perceive and identify a whole leaf even when it is partially covered by another leaf. While Gestalt psychologists have long described perceptual organization with a set of qualitative laws, recent studies offered a statistically optimal (Bayesian, in statistical jargon) interpretation of this process, whereby the observer chooses the scene configuration with the highest probability given the available sensory inputs. However, these studies drew their conclusions without considering a key actor in this kind of statistically optimal computation: sensory uncertainty. One can easily imagine that our decision on whether two contours belong to the same leaf or to different leaves is likely to change when we move from viewing the pile of leaves at a great distance (high sensory uncertainty) to viewing it very closely (low sensory uncertainty). Our study examines whether and how people incorporate uncertainty into contour integration, an elementary form of perceptual organization, by varying sensory uncertainty from trial to trial in a simple contour integration task. We found that people indeed take sensory uncertainty into account, but in a way that subtly deviates from optimal behavior.
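    As an illustration of the uncertainty-dependent decision rule this abstract describes, here is a minimal sketch of a Bayes-optimal collinearity observer. The Gaussian generative assumptions, the offset spread sigma_s, and the 0.5 prior are illustrative choices, not the paper's fitted values:

        import numpy as np
        from scipy.stats import norm

        def p_collinear(x, sigma, sigma_s=2.0, prior=0.5):
            # Posterior probability that the two segments are collinear,
            # given measured offset x with sensory noise sigma.
            # Assumed generative model (illustrative, not the paper's):
            #   collinear:     true offset = 0,        so x ~ N(0, sigma)
            #   non-collinear: offset ~ N(0, sigma_s), so x ~ N(0, sqrt(sigma^2 + sigma_s^2))
            like_c1 = norm.pdf(x, 0, sigma)
            like_c0 = norm.pdf(x, 0, np.sqrt(sigma**2 + sigma_s**2))
            return prior * like_c1 / (prior * like_c1 + (1 - prior) * like_c0)

        # The Bayes-optimal observer reports "collinear" when the posterior
        # exceeds 0.5; the implied criterion on |x| widens as sensory noise
        # grows, which is the signature of uncertainty-dependent decisions.
        for sigma in [0.5, 1.0, 2.0]:
            xs = np.linspace(0, 10, 10001)
            k = xs[np.argmin(np.abs(p_collinear(xs, sigma) - 0.5))]
            print(f"sigma={sigma:.1f}: report collinear when |x| < {k:.2f}")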

    Visual working memory in immersive visualization: a change detection experiment and an image-computable model

    Visual working memory (VWM) is a cognitive mechanism essential for interacting with the environment and accomplishing ongoing tasks, as it allows fast processing of visual inputs at the expense of the amount of information that can be stored. A better understanding of its functioning would benefit research fields such as simulation and training in immersive Virtual Reality, information visualization, and computer graphics. The current work focuses on the design and implementation of a paradigm for evaluating VWM in immersive visualization and of a novel image-based computational model that mimics human behavioral data on VWM. We evaluated VWM while varying four conditions: set size, spatial layout, visual angle (VA) subtending the stimulus presentation space, and observation time. We adopted a full factorial design and analysed participants' performance in the change detection experiment. The analysis of hit rates and false alarm rates confirms the existence of a VWM capacity limit of around 7 ± 2 items, as found in the literature based on 2D videos and images. Only VA and observation time influenced performance (p < 0.0001). Indeed, with VA enlargement, participants need more time to get a complete overview of the presented stimuli. Moreover, we show that our model agrees closely with the human data, r > 0.88 (p < 0.05).
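    For context on how hit and false-alarm rates yield a capacity estimate, a common formula from the change-detection literature is Cowan's K = N * (hit rate - false-alarm rate). The sketch below uses hypothetical numbers, since the abstract does not report the exact estimator or data:

        import numpy as np

        def cowan_k(hit_rate, fa_rate, set_size):
            # Cowan's K capacity estimate for single-probe change detection:
            # K = N * (hit rate - false-alarm rate).
            return set_size * (hit_rate - fa_rate)

        # Hypothetical rates for illustration only (not the paper's data):
        for n, h, fa in [(4, 0.95, 0.08), (8, 0.82, 0.10), (12, 0.65, 0.12)]:
            print(f"set size {n:2d}: K = {cowan_k(h, fa, n):.2f}")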

    Change Detection in Rhesus Monkeys and Humans

    Visual working memory (VWM) is the temporary retention of visual information and a key component of cognitive processing. The classical paradigm for studying VWM and its encoding limitations has been change detection. Early work focused on how many items could be stored in VWM, leading to the popular theory that humans can remember no more than 4±1 items. More recently, proposals have suggested that VWM is a noisy, continuous resource distributed across virtually all items in the visual field, resulting in diminished memory quality rather than limited quantity. This debate about the nature of VWM has predominantly been studied in humans. Nevertheless, nonhuman species could add a great deal to the debate by providing evidence related to evolutionary continuity (similarities and/or differences) and model systems for investigating the neural basis of VWM. To this end, in the first aim, we tested monkeys and humans in virtually identical change detection tasks, where the subjects identified which memory item had changed between two displays. In addition to the typical manipulation of the number of items to be remembered (2-5 oriented bars), we varied the change magnitude (degree of orientation change), a critical manipulation for discriminating among leading models of VWM encoding limitations. We found that in both species VWM performance was best accounted for by a model in which memory items are encoded in a noisy manner, where the quality of memory is variable and on average decreases with increasing set size. The second aim focused on the decision-making component of change detection, where observers use noisy sensory information to judge where the change occurred. We tested monkeys and humans in the same change detection task as in Aim 1, but with ellipses whose height-to-width ratio varied, so that the reliability with which they communicated orientation could be manipulated. The high-reliability ellipses were long and narrow, and the low-reliability ellipses were short and wide. We compared models that differed with respect to how observers incorporate knowledge of stimulus reliability during decision-making. We found that in both species performance was best accounted for by a Bayesian model in which observers take into account the uncertainty of sensory observations when making perceptual judgments, giving more weight to more reliable evidence. The comparative results across these related primate species are suggestive of evolutionary continuity of basic VWM processing in primates generally. These findings provide a strong theoretical foundation for how VWM processes work and establish rhesus monkeys as a good animal model system for physiological investigations to elucidate the neural substrates of VWM processing.
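    A minimal sketch of the kind of reliability-weighted Bayesian change localization described above. The Gaussian generative assumptions, sigma_c, and all numbers are illustrative, not the paper's fitted model:

        import numpy as np
        from scipy.stats import norm

        def locate_change(d, sigma, sigma_c=20.0):
            # Given per-item orientation differences d[i] between two displays,
            # each measured with item-specific sensory noise sigma[i], pick the
            # item most likely to have changed. Assumed generative model: the
            # changed item's true change ~ N(0, sigma_c); measurement noise adds
            # sigma[i]**2 per display, so the measured difference has variance
            # 2*sigma[i]**2 (plus sigma_c**2 if the item actually changed).
            llr = (norm.logpdf(d, 0, np.sqrt(2 * sigma**2 + sigma_c**2))
                   - norm.logpdf(d, 0, np.sqrt(2) * sigma))
            return int(np.argmax(llr))  # uncertainty-weighted, not just max |d|

        # A reliable item (narrow ellipse, low sigma) with a modest difference
        # can outweigh an unreliable item with a larger raw difference:
        d = np.array([12.0, 18.0])      # measured orientation differences (deg)
        sigma = np.array([3.0, 15.0])   # per-item sensory noise (deg)
        print(locate_change(d, sigma))  # -> 0, whereas max |d| would pick 1

    The design point is that the log-likelihood ratio depends on each item's noise level, so the optimal observer weights evidence by its reliability rather than simply choosing the largest measured difference.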

    Remembering Complex Objects in Visual Working Memory: Do Capacity Limits Restrict Objects or Features?

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit on the complexity of discrete items. We examined the issue with a series of change-detection experiments that used complex stimuli possessing multiple features per item, manipulating the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, feature load alone could not account for the results; the number of relevant objects also had to be considered. This study supports capacity limits for both feature and object storage in visual working memory.

    Detecting and tracking populations at-risk of Alzheimer’s disease: studying asymptomatic cognitive profiles and symptomatic clinical presentations

    Familial Alzheimer’s disease (FAD) is a penetrant, autosomal dominantly inherited condition. Due to its clinical and neurophysiological similarities with sporadic AD, it represents an important clinical group in its own right but also offers a potential model for AD. This thesis is largely based on the longitudinal FAD study but also includes data from ‘Insight 46’ in an attempt to broaden the scope of these investigations to other ‘at-risk’ cohorts. The overarching aim of the thesis is to study the early subtle cognitive changes (with a particular focus on visual short-term memory, but also subjective cognitive decline) and the symptomatic presentations (both cognitive and clinical) that accompany disease progression in AD. The key finding was that, over time, presymptomatic mutation carriers (PMCs) had a faster rate of decline in visual short-term memory (VSTM) function, specifically in the ability to remember both the location and the identity of targets. This relational binding deficit was strongest in the most challenging task condition (3 items, 4 s delay: high load, longest delay) and is clinically relevant, as it shows sensitivity in tracking individuals during preclinical AD stages. Subsequent eye-movement investigations of VSTM function revealed stronger cognitive effort for PMCs than for controls during encoding, a finding which may increase the diagnostic value of relational binding tasks. Other important findings were the higher incidence of subjective cognitive decline symptoms in two otherwise different populations “at-risk” of AD (PMCs and amyloid-positive ~70-year-old participants), the impaired VSTM function in symptomatic FAD individuals, and the much smaller influence of mutation specificity on variance in survival time than on variance in age at onset. Together, this work has implications for the interpretation of cognitive and clinical data and the understanding of heterogeneity in FAD, and may help detect and track subtle cognitive decline of potential value to clinical practice.

    LARGE-SCALE NEURAL NETWORK MODELING: FROM NEURONAL MICROCIRCUITS TO WHOLE-BRAIN COMPLEX NETWORK DYNAMICS

    Neural networks mediate human cognitive functions such as sensory processing, memory, and attention. Computational modeling has proven a powerful tool for testing hypotheses about the network mechanisms underlying cognitive functions and for better understanding human neuroimaging data. This dissertation presents a large-scale neural network modeling study of human visual/auditory processing and how this processing interacts with memory and attention. We first modeled visual and auditory object processing and short-term memory with local microcircuits and a large-scale recurrent network, and proposed a biologically realistic network implementation of storing multiple items in short-term memory. We then reproduced the effect that people involuntarily switch attention to salient distractors yet are hard to distract when attending to salient stimuli, by incorporating exogenous and endogenous attention modules. The integrated model could perform a number of cognitive tasks drawing on different cognitive functions by changing only a task-specification parameter. Based on the performance and simulated imaging results of these tasks, we proposed hypotheses for the neural mechanisms underlying several important phenomena, which may be tested experimentally in the future. Complex network theory has been applied to the analysis of neuroimaging data, as it provides a topological abstraction of the human brain. We constructed functional connectivity networks for various simulated experimental conditions and studied a number of important network properties, including the scale-free property, global efficiency, and modular structure, and explored their relations with task complexity. We showed that these network properties and their dynamics in our simulated networks matched empirical studies, which supports the validity and importance of our modeling work in testing neural network hypotheses.
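    As a pointer to how one of the named network properties is computed, here is a minimal sketch of global efficiency on a thresholded functional-connectivity graph using networkx. The threshold value and the toy time series are illustrative assumptions, not the dissertation's pipeline:

        import networkx as nx
        import numpy as np

        def global_efficiency(corr, threshold=0.3):
            # Binarize a correlation matrix into a functional-connectivity
            # graph, then compute global efficiency (the mean inverse
            # shortest-path length over all node pairs).
            adj = (np.abs(corr) > threshold).astype(int)
            np.fill_diagonal(adj, 0)
            g = nx.from_numpy_array(adj)
            return nx.global_efficiency(g)

        # Toy example: correlations among 6 simulated regional time series.
        rng = np.random.default_rng(0)
        ts = rng.standard_normal((200, 6))
        ts[:, 1] += 0.8 * ts[:, 0]  # induce coupling between two regions
        corr = np.corrcoef(ts, rowvar=False)
        print(f"global efficiency: {global_efficiency(corr):.3f}")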

    The Prevalence of Correspondence Computations in Visual Processing

    Correspondence computation is the general process during which received perceptual signals are assigned to the same or different sources. It is a pervasive process involved in many visual cognitive tasks, and people must make this computation both for simultaneously received signals and for temporally separated signals. Because visual input signals are noisy, it is an error-prone process and therefore the cause of many limits on human performance. However, its role in human cognitive abilities has often been ignored or underestimated, and how correspondence computations are carried out, and how they relate to other computations in the visual system, has not been explored extensively. In this dissertation, I combined evidence from human behavioral experiments, computational modeling, and testing of a brain-damaged patient to investigate these questions. In Chapter 2, I showed that human participants could not accurately report the number of presented objects in a typical visual working memory task. I then showed that a clustering algorithm with noise in the correspondence process could simulate human performance very well; part of the limits on human memory ability can therefore be explained by imperfect correspondence computations. In Chapter 3, I explored how different correspondence computation algorithms are combined to solve the problem of motion direction judgment. I found that a lower-level luminance transient detection system and a higher-level position comparison system complement each other, with the relative contribution of each system depending on its signal strength. In Chapter 4, I showed that the limit on object tracking ability is a result of noisy visual inputs, a suboptimal eye-movement strategy, and probabilistic correspondence computations; no external resource-like limits are needed to understand humans' limited capacity for tracking multiple objects at the same time. In sum, these results suggest that correspondence computations play important roles in visual cognition. The process is pervasive, and a failure in it can lead to further failures in related tasks. Human visual cognition should be understood in terms of the computations involved and the possible errors that can arise during them.
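    A toy sketch of how noisy correspondence can cause under-reporting of the number of items, in the spirit of Chapter 2. The noise level, merge radius, and greedy assignment rule are illustrative assumptions, not the dissertation's fitted model:

        import numpy as np

        def reported_count(locations, noise_sd=0.5, merge_radius=1.0, rng=None):
            # Noisy measurements of item locations are greedily clustered;
            # a measurement landing within merge_radius of an existing
            # cluster is attributed to the same source (a correspondence
            # error when the items are actually distinct).
            rng = rng or np.random.default_rng()
            noisy = locations + rng.normal(0, noise_sd, size=locations.shape)
            clusters = []
            for m in noisy:
                for c in clusters:
                    if np.linalg.norm(m - c) < merge_radius:
                        break  # assigned to an existing source
                else:
                    clusters.append(m)
            return len(clusters)

        rng = np.random.default_rng(1)
        items = rng.uniform(0, 10, size=(8, 2))  # 8 items on a display
        counts = [reported_count(items, rng=rng) for _ in range(1000)]
        print(f"true items: 8, mean reported: {np.mean(counts):.2f}")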

    Structured representations in visual cognition
