
    Attention model of binocular rivalry

    When the corresponding retinal locations in the two eyes are presented with incompatible images, a stable percept gives way to perceptual alternations in which the two images compete for perceptual dominance. As perceptual experience evolves dynamically under constant external inputs, binocular rivalry has been used for studying intrinsic cortical computations and for understanding how the brain regulates competing inputs. Converging behavioral and EEG results have shown that binocular rivalry and attention are intertwined: binocular rivalry ceases when attention is diverted away from the rivalry stimuli. In addition, the competing image in one eye suppresses the target in the other eye through a pattern of gain changes similar to those induced by attention. These results require a revision of the current computational theories of binocular rivalry, in which the role of attention is ignored. Here, we provide a computational model of binocular rivalry. In the model, competition between the two images in rivalry is driven by both attentional modulation and mutual inhibition, which have distinct selectivity (feature vs. eye of origin) and dynamics (relatively slow vs. relatively fast). The proposed model explains a wide range of phenomena reported in rivalry, including the three hallmarks: (i) binocular rivalry requires attention; (ii) various perceptual states emerge when the two images are swapped between the eyes multiple times per second; (iii) the dominance duration as a function of input strength follows Levelt’s propositions. With a bifurcation analysis, we identified the parameter space in which the model’s behavior was consistent with experimental results. This work was supported by NIH National Eye Institute Grants R01-EY019693 (to M.C. and D.J.H.) and R01-EY025673 (to D.J.H.). H.-H.L. was supported by NIH Grant R90DA043849. J. Rankin was supported by the Swartz Foundation.
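
    The model itself is specified in the original article; purely as a rough sketch of the two ingredients the abstract names, the Python snippet below couples fast mutual inhibition between two populations with a slower attention-like gain that tracks whichever population currently dominates. The equations, nonlinearity, and all parameter values here are placeholder assumptions, not the authors' implementation.

```python
# Hedged sketch of rivalry dynamics: fast mutual inhibition plus a slower
# attention-like gain. Not the published model; parameters are illustrative.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def simulate(T=30.0, dt=1e-3, I=(1.0, 1.0), w_inh=3.0, w_att=0.6,
             w_adapt=1.5, tau_r=0.02, tau_a=0.3, tau_h=1.0,
             noise=0.03, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r = np.zeros((n, 2))        # firing rates of the two competing populations
    a = np.full(2, 0.5)         # slow attention-like gains (feature-selective)
    h = np.zeros(2)             # adaptation variables
    for t in range(1, n):
        drive = (1.0 + w_att * a) * np.array(I)      # attention scales the inputs
        inhib = w_inh * r[t - 1, ::-1]               # fast cross (eye-of-origin) inhibition
        inp = drive - inhib - w_adapt * h + noise * rng.standard_normal(2)
        r[t] = r[t - 1] + dt / tau_r * (-r[t - 1] + relu(inp))
        h += dt / tau_h * (-h + r[t])                        # slow adaptation
        a += dt / tau_a * (-a + r[t] / (r[t].sum() + 1e-9))  # attention tracks dominance
    return r

if __name__ == "__main__":
    rates = simulate()
    print(f"fraction of time image 1 dominates: {(rates[:, 0] > rates[:, 1]).mean():.2f}")
```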

    Learning to Extract Motion from Videos in Convolutional Neural Networks

    This paper shows how to extract dense optical flow from videos with a convolutional neural network (CNN). The proposed model constitutes a potential building block for deeper architectures, allowing them to use motion without resorting to an external algorithm, e.g., for recognition in videos. We derive our network architecture from signal processing principles to provide the desired invariances to image contrast, phase, and texture. We constrain weights within the network to enforce strict rotation invariance and substantially reduce the number of parameters to learn. We demonstrate end-to-end training on only 8 sequences of the Middlebury dataset, orders of magnitude less data than competing CNN-based motion estimation methods, and obtain performance comparable to classical methods on the Middlebury benchmark. Importantly, our method outputs a distributed representation of motion that allows representing multiple, transparent motions and dynamic textures. Our contributions on network design and rotation invariance offer insights that are not specific to motion estimation.
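
    As a loose illustration of the weight-tying idea (not the paper's architecture, training procedure, or flow decoding), the sketch below builds an oriented filter bank by rotating a single base kernel, so every orientation shares one set of parameters. The kernel size, number of orientations, and the use of scipy here are assumptions made for the example.

```python
# Weight tying across orientations: one base kernel, rotated copies.
# Illustrative only; not the CNN described in the abstract.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import convolve2d

def oriented_bank(base_kernel, n_orientations=8):
    """Build a filter bank whose filters are rotated copies of one kernel."""
    angles = np.arange(n_orientations) * 180.0 / n_orientations
    return [rotate(base_kernel, angle, reshape=False, order=1) for angle in angles]

def apply_bank(image, bank):
    """Convolve an image with every oriented copy of the shared kernel."""
    return np.stack([convolve2d(image, k, mode="same", boundary="symm") for k in bank])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.standard_normal((7, 7))     # stands in for a learned kernel
    frame = rng.standard_normal((64, 64))  # stands in for a video frame
    responses = apply_bank(frame, oriented_bank(base))
    print(responses.shape)                 # (8, 64, 64): one response map per orientation
```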

    Provably scale-covariant networks from oriented quasi quadrature measures in cascade

    This article presents a continuous model for hierarchical networks based on a combination of mathematically derived models of receptive fields and biologically inspired computations. Based on a functional model of complex cells in terms of an oriented quasi quadrature combination of first- and second-order directional Gaussian derivatives, we couple such primitive computations in cascade over combinatorial expansions over image orientations. Scale-space properties of the computational primitives are analysed, and it is shown that the resulting representation allows for provable scale and rotation covariance. A prototype application to texture analysis is developed, and it is demonstrated that a simplified mean-reduced representation of the resulting QuasiQuadNet leads to promising experimental results on three texture datasets.
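
    To make the oriented quasi quadrature combination concrete, here is a small sketch of one plausible reading of the measure, Q = sqrt(L_phi^2 + C * L_phiphi^2), built from first- and second-order directional Gaussian derivatives in scale space. The weighting constant C and the sigma-power scale normalization used below are assumptions, not necessarily the article's exact choices.

```python
# Oriented quasi quadrature sketch from Gaussian derivatives; illustrative
# normalization and constants, not the article's exact definition.
import numpy as np
from scipy.ndimage import gaussian_filter

def quasi_quadrature(image, sigma=2.0, phi=0.0, C=0.5):
    # First- and second-order Gaussian derivatives (axis 1 = x, axis 0 = y).
    Lx  = gaussian_filter(image, sigma, order=(0, 1))
    Ly  = gaussian_filter(image, sigma, order=(1, 0))
    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    c, s = np.cos(phi), np.sin(phi)
    # Directional derivatives in direction phi, scale-normalized by powers of sigma.
    Lphi    = sigma * (c * Lx + s * Ly)
    Lphiphi = sigma**2 * (c**2 * Lxx + 2 * c * s * Lxy + s**2 * Lyy)
    return np.sqrt(Lphi**2 + C * Lphiphi**2)

if __name__ == "__main__":
    img = np.random.default_rng(0).standard_normal((128, 128))
    Q = quasi_quadrature(img, sigma=2.0, phi=np.pi / 4)
    print(Q.shape, float(Q.mean()))
```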

    From uncertainty to reward: BOLD characteristics differentiate signaling pathways

    Background: Reward value and uncertainty are represented by dopamine neurons in monkeys through distinct phasic and tonic firing rates. Knowledge about the underlying differential dopaminergic pathways is crucial for a better understanding of dopamine-related processes. Using functional magnetic resonance blood-oxygen-level-dependent (BOLD) imaging, we analyzed brain activation in 15 healthy male subjects performing a gambling task, upon expectation of potential monetary rewards at different reward values and levels of uncertainty. Results: Consistent with previous studies, ventral striatal activation was related to both reward magnitudes and values. Activation in medial and lateral orbitofrontal brain areas was best predicted by reward uncertainty. Moreover, late BOLD responses relative to trial onset were due to expectation of different reward values and likely represent phasic dopaminergic signaling. Early BOLD responses were due to different levels of reward uncertainty and likely represent tonic dopaminergic signals. Conclusions: We conclude that differential dopaminergic signaling as revealed in animal studies is represented not only locally by the involvement of distinct brain regions but also by distinct BOLD signal characteristics.
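
    For readers unfamiliar with how value and uncertainty can be separated in such a task, one common operationalization (an assumption here, not necessarily the definition used in this study) treats a binary gamble's expected value as p*m and its uncertainty as the outcome variance p*(1-p)*m^2, which is maximal at p = 0.5.

```python
# Common (assumed) operationalization of reward value and uncertainty for a
# binary gamble paying magnitude m with probability p, and 0 otherwise.
def expected_value(p, magnitude):
    return p * magnitude

def outcome_variance(p, magnitude):
    # Var = E[x^2] - E[x]^2 = p*m^2 - (p*m)^2 = p*(1-p)*m^2, largest at p = 0.5.
    return p * (1.0 - p) * magnitude ** 2

if __name__ == "__main__":
    for p in (0.25, 0.5, 0.75):
        print(p, expected_value(p, 10.0), outcome_variance(p, 10.0))
```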

    A Normalization Model of Attentional Modulation of Single Unit Responses

    Although many studies have shown that attention to a stimulus can enhance the responses of individual cortical sensory neurons, little is known about how attention accomplishes this change in response. Here, we propose that attention-based changes in neuronal responses depend on the same response normalization mechanism that adjusts sensory responses whenever multiple stimuli are present. We have implemented a model of attention that assumes that attention works only through this normalization mechanism, and we show that it can replicate key effects of attention. The model successfully explains how attention changes the gain of responses to individual stimuli and also why modulation by attention is more robust and not a simple gain change when multiple stimuli are present inside a neuron's receptive field. Additionally, the model accounts well for physiological data that separately measure attentional modulation and sensory normalization of the responses of individual neurons in area MT of visual cortex. The proposal that attention works through a normalization mechanism sheds new light on a broad range of observations on how attention alters the representation of sensory information in cerebral cortex.
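
    A stripped-down, single-layer version of this idea (a sketch only; the published model additionally includes spatial receptive fields, feature tuning, and an attention field) scales each stimulus's excitatory drive by an attentional gain and divides by the attention-weighted pooled drive plus a semisaturation constant.

```python
# Simplified normalization-model-of-attention sketch: attention multiplies the
# stimulus drive before divisive normalization by the pooled drive.
import numpy as np

def normalized_response(drives, attention_gains, sigma=1.0):
    drives = np.asarray(drives, dtype=float)
    gains = np.asarray(attention_gains, dtype=float)
    excitatory = gains * drives             # attention scales each stimulus drive
    suppressive = excitatory.sum() + sigma  # pooled drive plus semisaturation constant
    return excitatory / suppressive

if __name__ == "__main__":
    # Attend to the first stimulus (gain 1.5): alone vs. paired with a second stimulus.
    print(normalized_response([10.0], [1.5]))
    print(normalized_response([10.0, 10.0], [1.5, 1.0]))
```

    In this toy version the attentional gain largely cancels between numerator and denominator when one strong stimulus is shown alone, but with two stimuli the attended stimulus also adds to the suppression of its competitor, which is the intuition behind attentional modulation being larger when multiple stimuli share the receptive field.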

    Neurogenesis Drives Stimulus Decorrelation in a Model of the Olfactory Bulb

    The reshaping and decorrelation of similar activity patterns by neuronal networks can enhance their discriminability, storage, and retrieval. How can such networks learn to decorrelate new complex patterns, as they arise in the olfactory system? Using a computational network model for the dominant neural populations of the olfactory bulb, we show that fundamental aspects of the adult neurogenesis observed in the olfactory bulb -- the persistent addition of new inhibitory granule cells to the network, their activity-dependent survival, and the reciprocal character of their synapses with the principal mitral cells -- are sufficient to restructure the network and to alter its encoding of odor stimuli adaptively so as to reduce the correlations between the bulbar representations of similar stimuli. The decorrelation is quite robust with respect to various types of perturbations of the reciprocity. The model parsimoniously captures the experimentally observed role of neurogenesis in perceptual learning and the enhanced response of young granule cells to novel stimuli. Moreover, it makes specific predictions for the type of odor enrichment that should be effective in enhancing the ability of animals to discriminate similar odor mixtures.
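
    As a toy illustration of the decorrelation idea only (far simpler than the paper's mitral-granule network model), the sketch below lets a single granule-like unit, whose weights happen to match the component shared by two overlapping odor patterns, feed back subtractively through reciprocal synapses; this lowers the correlation between the two representations. The weight choice stands in for activity-dependent survival of new granule cells and is an assumption of the example.

```python
# Toy decorrelation sketch: a granule-like unit tuned to the shared component
# of two odors inhibits mitral activity reciprocally. Not the paper's model.
import numpy as np

def pattern_correlation(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def inhibit(mitral, granule_weights, gain=0.8):
    # Granule activity: rectified projection of mitral activity onto its weights.
    granule = np.maximum(granule_weights @ mitral, 0.0)
    # Reciprocal synapses: the same weights carry inhibition back to mitral cells.
    return np.maximum(mitral - gain * granule_weights.T @ granule, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shared = np.abs(rng.standard_normal(50))            # component common to both odors
    odor_a = shared + 0.3 * np.abs(rng.standard_normal(50))
    odor_b = shared + 0.3 * np.abs(rng.standard_normal(50))
    W = (shared / np.linalg.norm(shared))[None, :]      # one granule unit tuned to the overlap
    print("correlation before inhibition:", round(pattern_correlation(odor_a, odor_b), 3))
    print("correlation after inhibition: ",
          round(pattern_correlation(inhibit(odor_a, W), inhibit(odor_b, W)), 3))
```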

    Structure of hadron resonances with a nearby zero of the amplitude

    We discuss the relation between the analytic structure of the scattering amplitude and the origin of an eigenstate represented by a pole of the amplitude. If the eigenstate is not dynamically generated by the interaction in the channel of interest, the residue of the pole vanishes in the zero-coupling limit. Based on the topological nature of the phase of the scattering amplitude, we show that the pole must encounter the Castillejo-Dalitz-Dyson (CDD) zero in this limit. It is concluded that the dynamical component of the eigenstate is small if a CDD zero exists near the eigenstate pole. We show that the line shape of the resonance is distorted from the Breit-Wigner form as an observable consequence of the nearby CDD zero. Finally, studying the positions of poles and CDD zeros of the KbarN-piSigma amplitude, we discuss the origin of the eigenstates in the Lambda(1405) region.
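
    Purely as a numerical illustration of the line-shape point (not the KbarN-piSigma amplitude studied in the paper), the sketch below compares a plain Breit-Wigner shape with one whose numerator has a zero close to the pole; the pole position, width, and zero position are arbitrary placeholder numbers in arbitrary units.

```python
# Illustration only: a zero near the resonance pole distorts the line shape
# away from the symmetric Breit-Wigner form. All numbers are placeholders.
import numpy as np

def breit_wigner(E, M=1.0, Gamma=0.1):
    return 1.0 / (E - M + 1j * Gamma / 2.0)

def with_nearby_zero(E, E_zero=1.06, M=1.0, Gamma=0.1):
    return (E - E_zero) / (E - M + 1j * Gamma / 2.0)

if __name__ == "__main__":
    E = np.linspace(0.8, 1.2, 9)
    plain = np.abs(breit_wigner(E)) ** 2
    zeroed = np.abs(with_nearby_zero(E)) ** 2
    # Normalize each curve to its own peak so only the shapes are compared.
    for e, a, b in zip(E, plain / plain.max(), zeroed / zeroed.max()):
        print(f"E={e:5.2f}  Breit-Wigner={a:6.3f}  with nearby zero={b:6.3f}")
```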

    Second Order Dimensionality Reduction Using Minimum and Maximum Mutual Information Models

    Conventional methods used to characterize multidimensional neural feature selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MID), are limited to Gaussian stimuli or are able to identify only a small number of features due to the curse of dimensionality. To overcome these issues, we propose two new dimensionality reduction methods that use minimum and maximum information models. These methods are information-theoretic extensions of STC that can be used with non-Gaussian stimulus distributions to find relevant linear subspaces of arbitrary dimensionality. We compare these new methods to the conventional methods in two ways: with biologically inspired simulated neurons responding to natural images and with recordings from macaque retinal and thalamic cells responding to naturalistic time-varying stimuli. With non-Gaussian stimuli, the minimum and maximum information methods significantly outperform STC in all cases, whereas MID performs best in the regime of low-dimensional feature spaces.
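
    For reference, a minimal version of the STC baseline that the new methods are compared against (a standard textbook recipe, not the authors' code) is sketched below on synthetic Gaussian stimuli with a hypothetical squaring neuron.

```python
# Minimal spike-triggered covariance (STC): compare the spike-weighted stimulus
# covariance with the raw stimulus covariance; eigenvectors with large-magnitude
# eigenvalues span the recovered feature subspace. Synthetic data, for illustration.
import numpy as np

def stc(stimuli, spikes):
    """stimuli: (T, D) array of stimulus vectors; spikes: (T,) spike counts."""
    n_spikes = spikes.sum()
    sta = (spikes @ stimuli) / n_spikes                      # spike-triggered average
    diff = stimuli - sta
    c_spike = (diff * spikes[:, None]).T @ diff / n_spikes   # spike-weighted covariance
    c_prior = np.cov(stimuli, rowvar=False)                  # raw stimulus covariance
    return c_spike - c_prior

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, D = 20000, 20
    stimuli = rng.standard_normal((T, D))                    # Gaussian white stimuli
    k = np.zeros(D)
    k[3] = 1.0                                               # hidden relevant filter (assumed)
    spikes = rng.poisson(0.5 * (stimuli @ k) ** 2)           # squaring nonlinearity drives spiking
    eigvals, eigvecs = np.linalg.eigh(stc(stimuli, spikes))
    recovered = eigvecs[:, np.argmax(np.abs(eigvals))]
    print("overlap with true filter:", round(abs(float(recovered @ k)), 3))
```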