    Hysteresis in human binocular fusion: temporalward and nasalward ranges

    Fender and Julesz [J. Opt. Soc. Am. 57, 819 (1967)] moved pairs of retinally stabilized images across the temporalward visual fields and found significant differences between the disparities that elicited fusion and the disparities at which fusion was lost. They recognized this phenomenon as an example of hysteresis. In the work reported in this paper, binocular retinally stabilized images of vertical dark bars on white backgrounds were moved into horizontal disparity in both the nasalward and the temporalward directions. The limits of Panum's fusional area and the hysteresis demonstrated by these limits were measured for two observers. The following results were obtained: (1) the nasalward limits of Panum's fusional area, and the hysteresis they demonstrate, do not differ significantly from the temporalward limits and their hysteresis; (2) the limits of Panum's fusional area and their hysteresis are not significantly different whether one stimulus moves across each retina or one stimulus is held still on one retina while the other is moved across the other retina; (3) the use of nonstabilized cross hairs for fixation decreases the hysteresis; and (4) the full hysteresis effect can be elicited with a rate of change of disparity of 2 arcmin/sec.
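
    The hysteresis described above amounts to two different disparity limits: a small one at which an unfused stimulus pair fuses, and a much larger one at which an already fused pair breaks apart. The Python sketch below illustrates that state-dependent behavior with a slow disparity ramp; the threshold values are assumptions for illustration, not the limits measured in the paper.

        # Minimal sketch of fusion hysteresis: fusion is gained at a small
        # disparity when unfused, but lost only at a much larger disparity
        # when already fused. Threshold values are illustrative assumptions.
        import numpy as np

        FUSION_GAIN_LIMIT = 6.0    # arcmin (assumed, for illustration)
        FUSION_LOSS_LIMIT = 60.0   # arcmin (assumed, for illustration)

        def fusion_states(disparities, fused=True):
            """Track fusion state as disparity changes slowly (e.g. 2 arcmin/sec)."""
            states = []
            for d in disparities:
                if fused and d > FUSION_LOSS_LIMIT:
                    fused = False
                elif not fused and d < FUSION_GAIN_LIMIT:
                    fused = True
                states.append(fused)
            return states

        # Ramp disparity up and back down: fusion is lost near 60 arcmin on the
        # way up but regained only near 6 arcmin on the way down, a hysteresis loop.
        ramp = np.concatenate([np.linspace(0, 90, 46), np.linspace(90, 0, 46)])
        print(list(zip(ramp.round(1), fusion_states(ramp)))[::9])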

    Traditional and new principles of perceptual grouping

    Perceptual grouping refers to the process of determining which regions and parts of the visual scene belong together as parts of higher-order perceptual units such as objects or patterns. In the early 20th century, Gestalt psychologists identified a set of classic grouping principles which specified how certain image features lead to grouping between elements, all other factors being held constant. Modern vision scientists have expanded this list to cover a wide range of image features and have also emphasized the importance of learning and other non-image factors. Unlike the early Gestalt accounts, which were based largely on visual demonstrations, modern theories are often explicitly quantitative and involve detailed models of how various image features modulate grouping. Work has also been done to understand the rules by which different grouping principles integrate to form a final percept. This chapter gives an overview of the classic principles, modern developments in understanding them, and new principles and the evidence for them. There is also discussion of some of the larger theoretical issues about grouping, such as at what stage of visual processing it occurs and what types of neural mechanisms may implement grouping principles.
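
    As a toy illustration of the quantitative turn described above, the sketch below implements one classic principle, grouping by proximity, as a simple distance-cutoff clustering rule. The dot positions and the cutoff value are assumptions for illustration, not a model taken from the chapter.

        # Grouping by proximity as a distance-cutoff rule: elements closer than
        # the cutoff are assigned to the same perceptual group. Positions and
        # cutoff are illustrative assumptions.
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # Two rows of dots separated by a gap, as in a proximity demonstration.
        dots = np.array([[0, 0], [1, 0], [2, 0],     # left cluster
                         [8, 0], [9, 0], [10, 0]])   # right cluster

        # Single-linkage clustering with a distance cutoff acts as a minimal,
        # quantitative proximity rule.
        groups = fcluster(linkage(dots, method="single"), t=3.0, criterion="distance")
        print(groups)   # two groups, e.g. [1 1 1 2 2 2]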

    Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping

    A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions are made about microscopic computational differences between the parallel cortical streams V1 --> MT and V1 --> V2 --> MT, notably the magnocellular thick-stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions. Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100).
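
    The aperture problem mentioned above arises because a local, oriented detector measures only the motion component normal to its contour. The sketch below shows, in generic form, how pooling such ambiguous local constraints across differently oriented edges recovers a single coherent velocity; it is a standard intersection-of-constraints illustration under assumed edge orientations, not an implementation of the Motion BCS itself.

        # Each aperture reports only the speed along its contour normal, which is
        # locally ambiguous; combining constraints from differently oriented
        # edges (here by least squares) recovers the global motion.
        import numpy as np

        true_velocity = np.array([3.0, 1.0])     # global motion of the figure
        edge_normals = np.array([[1.0, 0.0],     # unit normals of locally visible
                                 [0.0, 1.0],     # contour segments (assumed)
                                 [0.6, 0.8]])

        normal_speeds = edge_normals @ true_velocity          # local measurements
        recovered, *_ = np.linalg.lstsq(edge_normals, normal_speeds, rcond=None)
        print(recovered)   # approximately [3. 1.]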

    Asynchronous spiking neurons, the natural key to exploit temporal sparsity

    Inference of deep neural networks for streaming signals (video/audio) in edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm which exploits all of them in the execution of already trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of operations required for sparse input data by one to two orders of magnitude. Additionally, because processing is fully asynchronous, this type of inference can be run on fully distributed and scalable neuromorphic hardware platforms.
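
    A hedged sketch of the kind of stateful, event-driven inference described above: each layer keeps its previous input as internal state and propagates only contributions from inputs that changed, so temporally sparse streams need far fewer multiply-accumulate operations than dense recomputation. The layer class, threshold, and sizes are assumptions for illustration, not the paper's exact algorithm.

        # Delta-style inference: propagate only the changes between consecutive
        # inputs and count the multiply-accumulates actually performed.
        import numpy as np

        class DeltaLayer:
            def __init__(self, weights):
                self.w = weights                        # (out, in) weight matrix
                self.x_prev = np.zeros(weights.shape[1])
                self.y = np.zeros(weights.shape[0])     # stateful accumulated output
                self.ops = 0                            # multiply-accumulate counter

            def __call__(self, x, threshold=0.0):
                delta = x - self.x_prev
                changed = np.abs(delta) > threshold     # only these inputs send events
                self.y += self.w[:, changed] @ delta[changed]
                self.ops += self.w.shape[0] * int(changed.sum())
                self.x_prev = x.copy()
                return self.y

        rng = np.random.default_rng(0)
        layer = DeltaLayer(rng.standard_normal((16, 64)))
        frame = rng.standard_normal(64)
        for _ in range(100):                            # stream where few inputs change
            frame[rng.integers(0, 64, size=3)] += 0.1
            layer(frame)
        print(f"delta ops: {layer.ops}, dense ops: {100 * 16 * 64}")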

    Rhythmic inhibition allows neural networks to search for maximally consistent states

    Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits, yet its computational role remains elusive. We show that a model of Gamma-band rhythmic inhibition allows networks of coupled cortical circuit motifs to search for network configurations that best reconcile external inputs with an internal consistency model encoded in the network connectivity. We show that Hebbian plasticity allows the networks to learn the consistency model by example. The search dynamics driven by rhythmic inhibition enable the described networks to solve difficult constraint satisfaction problems without making assumptions about the form of stochastic fluctuations in the network. We show that the search dynamics are well approximated by a stochastic sampling process. We use the described networks to reproduce perceptual multi-stability phenomena with switching times that are a good match to experimental data, and show that they provide a general neural framework which can be used to model other 'perceptual inference' phenomena.
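
    As a toy illustration of how a periodic inhibitory perturbation can drive a search for consistent states, the sketch below runs a Hopfield-style binary network that greedily increases consistency between pulses, while a rhythmic pulse silences a random subset of units to kick the state out of local optima. The network size, constraint matrix, and pulse schedule are assumptions for illustration, not the cortical circuit motifs of the paper.

        # Hopfield-style search with rhythmic inhibition pulses: greedy updates
        # descend the energy (increase consistency); periodic pulses perturb the
        # state so the search can escape local minima.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 20
        W = rng.standard_normal((n, n))
        W = (W + W.T) / 2                       # symmetric "consistency" constraints
        np.fill_diagonal(W, 0)

        def energy(s):
            return -0.5 * s @ W @ s             # lower energy = more consistent state

        s = rng.choice([-1, 1], size=n)
        best_e = energy(s)

        for t in range(2000):
            i = rng.integers(n)
            s[i] = 1 if W[i] @ s > 0 else -1    # consistency-increasing update
            if t % 50 == 0:                     # rhythmic inhibition pulse
                s[rng.random(n) < 0.3] = -1     # transiently silence ~30% of units
            best_e = min(best_e, energy(s))

        print("best energy found:", best_e)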

    Streaming Video QoE Modeling and Prediction: A Long Short-Term Memory Approach

    HTTP-based adaptive video streaming has become a popular choice of streaming due to its reliable transmission and its flexibility in adapting to varying network conditions. However, due to rate adaptation in adaptive streaming, the quality of the video at the client keeps varying with time depending on the end-to-end network conditions. Further, varying network conditions can lead to the video client running out of playback content, resulting in rebuffering events. These factors affect user satisfaction and degrade the user quality of experience (QoE). It is important to quantify the perceptual QoE of streaming video users and to monitor it continuously so that QoE degradation can be minimized. However, continuous evaluation of QoE is challenging because it is determined by complex dynamic interactions among the QoE-influencing factors. Towards this end, we present LSTM-QoE, a recurrent neural network based QoE prediction model using a Long Short-Term Memory (LSTM) network. LSTM-QoE is a network of cascaded LSTM blocks that captures the nonlinearities and the complex temporal dependencies involved in time-varying QoE. Based on an evaluation over several publicly available continuous QoE databases, we demonstrate that LSTM-QoE has the capability to model the QoE dynamics effectively. We compare the proposed model with the state-of-the-art QoE prediction models and show that it provides superior performance across these databases. Further, we discuss the state space perspective for the LSTM-QoE and show the efficacy of state space modeling approaches for QoE prediction.
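
    A minimal sketch of a continuous QoE predictor in the spirit of the model described above, assuming Python with PyTorch: stacked LSTM blocks map a per-second sequence of QoE-influencing features (for example bitrate and a rebuffering indicator) to a per-second QoE score. The feature set, layer sizes, and layer count are assumptions, not the paper's published configuration.

        # Stacked LSTM blocks followed by a per-time-step linear readout.
        import torch
        import torch.nn as nn

        class QoEPredictor(nn.Module):
            def __init__(self, n_features=4, hidden=32, layers=2):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                                    batch_first=True)   # cascaded LSTM blocks
                self.head = nn.Linear(hidden, 1)         # per-time-step QoE score

            def forward(self, x):                        # x: (batch, time, features)
                h, _ = self.lstm(x)
                return self.head(h).squeeze(-1)          # (batch, time)

        model = QoEPredictor()
        features = torch.randn(8, 60, 4)                 # 8 sessions, 60 s each
        print(model(features).shape)                     # torch.Size([8, 60])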

    The complexity of dynamics in small neural circuits

    Mean-field theory is a powerful tool for studying large neural networks. However, when the system is composed of only a few neurons, macroscopic differences between the mean-field approximation and the real behavior of the network can arise. Here we introduce a study of the dynamics of a small firing-rate network with excitatory and inhibitory populations, in terms of local and global bifurcations of the neural activity. Our approach is analytically tractable in many respects and sheds new light on the finite-size effects of the system. In particular, we focus on the formation of multiple branching solutions of the neural equations through spontaneous symmetry-breaking, since this phenomenon considerably increases the complexity of the dynamical behavior of the network. For these reasons, branching points may reveal important mechanisms through which neurons interact and process information that are not accounted for by the mean-field approximation.
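
    The sketch below integrates a two-population excitatory-inhibitory firing-rate model of the small-network kind studied above from two different initial conditions; with the assumed coupling it settles onto two different steady states, the sort of coexisting branch of solutions that a single mean-field description can miss. All parameter values are illustrative assumptions, not those of the paper.

        # Two-population firing-rate model (excitatory, inhibitory) integrated
        # with forward Euler from two initial conditions.
        import numpy as np

        def f(x):
            return 1.0 / (1.0 + np.exp(-x))      # sigmoidal rate function

        W = np.array([[12.0, -6.0],              # E<-E, E<-I coupling (assumed)
                      [ 6.0, -1.0]])             # I<-E, I<-I coupling (assumed)
        I_ext = np.array([-3.0, -2.0])           # constant external drive (assumed)

        def simulate(r0, dt=0.01, steps=5000, tau=1.0):
            r = np.array(r0, dtype=float)
            for _ in range(steps):
                r += (dt / tau) * (-r + f(W @ r + I_ext))
            return r

        print(simulate([0.1, 0.1]))              # settles near a low-activity state
        print(simulate([0.9, 0.9]))              # settles near a high-activity state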

    Consciousness as inference in time: a commentary on Victor Lamme

    Unraveling the neural correlates of consciousness remains one of the great challenges of our time. Victor Lamme proposes that neural integration through feedback loops is what differentiates conscious from unconscious processing. Here, I review his hypothesis, focusing on the spatial scale of integration as well as the possible neural mechanisms involved. I go on to show that any theory of the neural correlates of consciousness is incomplete if it cannot account for how prior knowledge shapes perception and how this form of integration occurs. Finally, I propose that integration across moments in time is a crucial but hitherto neglected aspect of conscious perception, which creates the “flow” of conscious experience.