13 research outputs found

    Attention model of binocular rivalry

    This is the final version of the article, available from the National Academy of Sciences via the DOI in this record. When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt's propositions. With a bifurcation analysis, we identified the parameter space in which the model's behavior was consistent with experimental results. This work was supported by NIH National Eye Institute Grants R01-EY019693 (to M.C. and D.J.H.) and R01-EY025673 (to D.J.H.). H.-H.L. was supported by NIH Grant R90DA043849. J. Rankin was supported by the Swartz Foundation.
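    The core mechanism described in the abstract, fast mutual inhibition between the two percepts combined with a slower modulatory process, can be illustrated with a minimal rate model. The sketch below is not the published model: the function names, parameter values, and the use of a single adaptation variable as a stand-in for the paper's slow attentional modulation are all assumptions for illustration.

```python
import numpy as np

def simulate_rivalry(T=20.0, dt=0.001, beta=3.0, g=4.0,
                     tau=0.02, tau_a=1.0, I=(1.0, 1.0),
                     noise=0.05, seed=0):
    """Minimal two-population rivalry sketch (illustrative, not the
    published model). Each percept's activity r is driven by its input,
    suppressed by fast mutual inhibition (strength beta), and reduced
    by a slow adaptation variable a (a crude stand-in for the slow
    attentional modulation in the abstract)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    f = lambda x: 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))  # steep gain function
    r = np.zeros((n, 2))   # activities of the two competing percepts
    a = np.zeros(2)        # slow adaptation, time constant tau_a >> tau
    r[0] = (0.6, 0.1)      # small initial asymmetry picks a first winner
    for t in range(1, n):
        # drive = input - inhibition from the other percept - adaptation
        drive = np.asarray(I) - beta * r[t - 1][::-1] - g * a
        r[t] = r[t - 1] + (dt / tau) * (f(drive) - r[t - 1]) \
               + np.sqrt(dt) * noise * rng.standard_normal(2)
        r[t] = np.clip(r[t], 0.0, 1.0)
        a += (dt / tau_a) * (r[t] - a)
    dominant = np.argmax(r, axis=1)                 # which percept wins at each step
    switches = int(np.count_nonzero(np.diff(dominant)))
    return r, switches

r, switches = simulate_rivalry()  # alternations emerge under constant input
```

    With these toy parameters the dominant percept alternates every second or two: adaptation slowly weakens the winner until the suppressed percept escapes, which is the generic relaxation-oscillation picture the abstract's fuller attention-plus-inhibition model refines.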

    Attractors, memory and perception

    In this thesis, the first three introductory chapters review the literature on contextual perception, its neural basis, and network modeling of memory. In chapter 4, the first two sections define our model; the next two sections, 4.3 and 4.4, report my original work on the retrieval properties of different network structures and on the network dynamics underlying the response to ambiguous patterns, respectively. The work reported in chapter 5 was done in collaboration with Prof. Bharathi Jagadeesh at the University of Washington and has been published in the journal "Cerebral Cortex". In this collaboration, Yan Liu, from the group in Seattle, carried out the recording experiments, and I did the data analysis and network simulations. Chapter 6, which presents a network model of "priming" and the "adaptation aftereffect", is my own work. The work reported in 4.3, 4.5, and the whole of chapter 6 is in preparation for publication.

    The three-second 'subjective present': a critical review and a new proposal

    It has been argued that there is a 'subjective present' or 'experienced moment' of about three seconds in duration, involving automatic binding of events into perceptual units on that time scale. Research on topics that have been taken as relevant to this proposal is reviewed. The topics include accuracy in reproduction of stimulus durations, synchronization of behaviour with a regular beat, mental rhythmization of a regular beat, time units in behaviour, segmentation of observed behaviour into meaningful units, the time scale of reversals of perception with bistable ambiguous figures, the time scale of inhibition of return in visual search, and EEG responses to deviant stimuli in series of repeating stimuli. Most of the research findings were not consistent with the three-second window hypothesis. The small amount of supportive evidence is better interpreted as reflecting specific processing mechanisms, not general temporal integration. The evidence shows that temporal integration occurs on multiple time scales, that no particular duration is special, and that windows of temporal integration are defined in terms of information density, not duration. The subjective present is constructed through local temporal integration on multiple time scales, which is further integrated into a coherent global representation of what is going on.

    The extended present: an informational context for perception

    Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that the information in it is thematically connected, both internally and to currently attended perceptual input; that it is organised in a hierarchical structure; and that all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries of the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed, and a possible mechanism is proposed for representing ordinal and duration information on the time scale of the extended present. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for the perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    Simulating Bistable Perception with Interrupted Ambiguous Stimulus using Self-Oscillator Dynamics with Percept Choice Bifurcation

    A behavioral stochastic self-oscillator model (Fürstenau 2010, Biol. Cybern. 103(3) 175-198) is used for simulating percept reversals induced by an interrupted ambiguous stimulus, with periodic stimulus-off times t_off between 10 ms and 1 s. Statistical evaluation of the simulated reversal time series predicts a maximum of the ambiguous-stimulus percept reversal rate R at t_off ≈ 200 ms. This explains the experimental results of Orbach et al. (1966, Percept. Mot. Skills 22 615-618), who determined an average maximum of R ≈ 36 min⁻¹ at t_off ≈ 200 ms with an on-time t_on = 300 ms, and similar results of Kornmeier et al. (2007, Psychophysiology 44 552-560). The macroscopic model is based on an inhibitorily coupled pair of three coupled nonlinear equations, one triplet for each percept. As expected from our previous work, the perception, attention, and memory (PAM) dynamics of a single triplet with feedback delay T = 40 ms and an attention fatigue time constant of 1-2 s is sufficient for reproducing the basic experimental findings. For quantitative agreement, a stochastic Langevin-force term in the attention equation proves essential, in support of Brascamp et al. (2006, J. Vision 6 1244-1256). Assuming a dissipative thermal origin of the noise, formal analysis with the fluctuation-dissipation theorem leads to a quantification of cognitive inertia as predicted by Gao et al. (2006, Cogn. Process. 7 105-112). The model supports the interplay between the percept-choice (bifurcation) dynamics (Noest et al. 2007, J. Vision 7 1-14) during stimulus onset and the adaptive-gain (attention-fatigue) driven quasiperiodic percept reversals (Ditzinger & Haken 1989, Biol. Cybern. 61 279-287).
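    The Langevin-force idea in the abstract can be illustrated in isolation: a linearly damped variable driven by white noise is an Ornstein-Uhlenbeck process, and the fluctuation-dissipation theorem then ties the stationary fluctuation size to the damping. The sketch below is a generic illustration, not the PAM model itself; the function name and parameter values are assumptions.

```python
import numpy as np

def ou_attention_noise(gamma=1.0, D=0.5, dt=0.01, n_steps=200_000, seed=1):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck process,
    dG = -gamma*G dt + sqrt(2*D) dW, the generic form of a Langevin
    force term with linear damping (illustrative stand-in for the
    stochastic term in the attention equation). For thermal noise the
    fluctuation-dissipation theorem links D to the damping gamma, and
    the stationary variance of G is D/gamma."""
    rng = np.random.default_rng(seed)
    G = np.zeros(n_steps)
    for t in range(1, n_steps):
        G[t] = G[t - 1] - gamma * G[t - 1] * dt \
               + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return G

G = ou_attention_noise()
print(G[len(G) // 2 :].var())  # ≈ D/gamma = 0.5 (after discarding burn-in)
```

    The point of the fluctuation-dissipation relation here is that noise amplitude and damping are not independent knobs: fixing the "temperature" of the noise source fixes the ratio D/gamma, which is what makes a quantitative statement about cognitive inertia possible from fitted fluctuation statistics.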