
    Response of an Excitatory-Inhibitory Neural Network to External Stimulation: An Application to Image Segmentation

    Neural network models comprising elements which have exclusively excitatory or inhibitory synapses are capable of a wide range of dynamic behavior, including chaos. In this paper, a simple excitatory-inhibitory neural pair, which forms the building block of larger networks, is subjected to external stimulation. The response shows transitions between various types of dynamics, depending upon the magnitude of the stimulus. When such pairs are coupled over a local neighborhood in a two-dimensional plane, the resultant network can achieve a satisfactory segmentation of an image into "object" and "background". Results for synthetic and "real-life" images are given. Comment: 8 pages, LaTeX, 5 figures.
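    The sketch below illustrates the kind of mechanism this abstract describes: a stimulus-driven excitatory-inhibitory pair integrated per pixel, with high settled excitatory activity read out as "object". The pair equations, weights, sigmoid gain, and thresholding rule are illustrative assumptions, not the authors' published model, and the local 2D coupling between neighbouring pairs is omitted.

```python
# Hypothetical sketch of a stimulus-driven excitatory-inhibitory (E-I) pair.
import numpy as np

def simulate_ei_pair(stimulus, steps=500, dt=0.05,
                     w_ee=1.5, w_ie=1.0, w_ei=1.2, w_ii=0.5):
    """Euler-integrate a single E-I pair under a constant external input."""
    f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * (u - 0.5)))    # sigmoidal activation
    e, i = 0.0, 0.0
    trace = []
    for _ in range(steps):
        de = -e + f(w_ee * e - w_ie * i + stimulus)          # excitatory unit
        di = -i + f(w_ei * e - w_ii * i)                     # inhibitory unit
        e, i = e + dt * de, i + dt * di
        trace.append(e)
    return np.array(trace)

# Crude pixel-wise labelling: drive one pair per pixel by its intensity and call
# pixels whose excitatory activity settles high "object", the rest "background".
# (The paper's coupling between neighbouring pairs is omitted in this sketch.)
image = np.random.rand(16, 16)
labels = np.array([[simulate_ei_pair(p)[-100:].mean() > 0.5 for p in row]
                   for row in image])
```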

    Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception

    A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these recent experiments, it was reported that synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but colinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as by nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); National Aeronautics and Space Administration (NGT-50497)
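    A generic stand-in for the behavior described above is sketched below: input-gated amplitude-phase oscillators on a 1D array with nearest-neighbor coupling, where driven units trace a limit cycle and undriven units relax to equilibrium. The growth rule, coupling term, and parameter values are assumptions for illustration only, not the paper's cortical model.

```python
# Hedged sketch: complex-amplitude oscillators that oscillate only when driven.
import numpy as np

def simulate_chain(drive, steps=2000, dt=0.01, k=0.5, omega=2 * np.pi):
    """Driven units approach a limit cycle and phase-lock; undriven units decay."""
    n = len(drive)
    z = 0.01 * (np.random.randn(n) + 1j * np.random.randn(n))   # small random start
    growth = np.where(drive > 0, 1.0, -1.0)        # limit cycle only when driven
    for _ in range(steps):
        neighbors = np.roll(z, 1) + np.roll(z, -1)               # nearest-neighbor coupling
        dz = (growth + 1j * omega) * z - np.abs(z) ** 2 * z + k * drive * neighbors
        z = z + dt * dz
    return np.abs(z), np.angle(z)

# Two disconnected but colinear "bars" on a 1D array of 40 units.
drive = np.zeros(40)
drive[5:15] = 1.0
drive[25:35] = 1.0
amp, phase = simulate_chain(drive)
# Units inside each bar grow to the limit cycle and phase-lock with their
# neighbours; undriven units decay toward equilibrium. Whether the two bars also
# lock to each other depends on longer-range interactions, which the paper's
# boundary segmentation and object recognition models are designed to supply.
print(np.round(amp, 2))
```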

    Cortical Synchronization and Perceptual Framing

    How does the brain group together different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data. Model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. Such a model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. The model hereby shows how many such data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback interactions with cells that obey shunting equations. Office of Naval Research (N00014-92-J-1309, N00014-95-I-0409, N00014-95-I-0657, N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0334, F49620-92-J-0225)
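    A minimal sketch of the shunting dynamics named in the last sentence appears below: each cell obeys an equation of the general form dx/dt = -A*x + (B - x)*E - (x + D)*I, with E pooling fast long-range cooperative feedback and I pooling slow short-range competition. The kernel widths, rates, and gains are illustrative assumptions, not the model's published parameters.

```python
# Sketch of shunting cells with fast long-range cooperation and slow
# short-range competition (parameters are assumptions for illustration).
import numpy as np

def step_shunting(x, coop, comp, A=1.0, B=1.0, D=0.5, dt=0.01):
    """One Euler step of dx/dt = -A*x + (B - x)*coop - (x + D)*comp."""
    return x + dt * (-A * x + (B - x) * coop - (x + D) * comp)

def gaussian_kernel(n, sigma):
    k = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / sigma ** 2)
    return k / k.sum()

n_cells = 64
x = np.zeros(n_cells)
inp = np.zeros(n_cells)
inp[20:44] = 1.0                                        # a bar-like input
wide = gaussian_kernel(31, 8.0)                         # long-range cooperation
narrow = gaussian_kernel(31, 2.0)                       # short-range competition
comp_pool = np.zeros(n_cells)                           # slow competitive pool
for t in range(2000):
    coop = np.convolve(x + inp, wide, mode="same")      # fast cooperative feedback
    comp_pool += 0.1 * (np.convolve(x, narrow, mode="same") - comp_pool)  # slow update
    x = step_shunting(x, coop, comp_pool)
print(np.round(x, 2))
```

    The shunting form keeps each cell's activity bounded between roughly -D and B regardless of input strength, which is one reason this class of equations is used for the feedback interactions described in the abstract.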

    Computing with arrays of coupled oscillators: An application to preattentive texture discrimination

    Recent experimental findings (Gray et al. 1989; Eckhorn et al. 1988) seem to indicate that rapid oscillations and phase-lockings of different populations of cortical neurons play an important role in neural computations. In particular, global stimulus properties could be reflected in the correlated firing of spatially distant cells. Here we describe how simple coupled oscillator networks can be used to model the data and to investigate whether useful tasks can be performed by oscillator architectures. A specific demonstration is given for the problem of preattentive texture discrimination. Texture images are convolved with different sets of Gabor filters feeding into several corresponding arrays of coupled oscillators. After a brief transient, the dynamic evolution in the arrays leads to a separation of the textures by a phase labeling mechanism. The importance of noise and of long-range connections is briefly discussed.
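    The sketch below follows the pipeline in the abstract at toy scale: a two-orientation Gabor filter bank, one phase oscillator per pixel, and local coupling weighted by the similarity of neighbouring filter responses so that a phase label emerges per texture. The filter parameters and coupling rule are assumptions for illustration, not the paper's architecture.

```python
# Toy Gabor-bank + coupled-oscillator phase labelling (illustrative parameters).
import numpy as np
from scipy.signal import convolve2d

def gabor(theta, phase, size=11, lam=4.0, sigma=2.5):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + phase)

def gabor_energy(img, theta):
    even = convolve2d(img, gabor(theta, 0.0), mode="same")
    odd = convolve2d(img, gabor(theta, np.pi / 2), mode="same")
    return np.sqrt(even**2 + odd**2)                     # phase-invariant response

# Two-texture test image: vertical stripes on the left, horizontal on the right.
h, w = 32, 64
img = np.zeros((h, w))
img[:, :32] = np.sin(np.arange(32) * np.pi / 2)[None, :]
img[:, 32:] = np.sin(np.arange(h) * np.pi / 2)[:, None]

feats = np.stack([gabor_energy(img, t) for t in (0.0, np.pi / 2)], axis=-1)
feats /= feats.max() + 1e-9                              # normalise the filter bank

# Phase labelling: one oscillator per pixel; neighbours with similar feature
# vectors pull each other's phases together (a Kuramoto-style local update).
phase = 2 * np.pi * np.random.rand(h, w)
for _ in range(300):
    delta = np.zeros_like(phase)
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        nb_phase = np.roll(np.roll(phase, dy, axis=0), dx, axis=1)
        nb_feats = np.roll(np.roll(feats, dy, axis=0), dx, axis=1)
        sim = np.exp(-10.0 * np.sum((feats - nb_feats) ** 2, axis=-1))
        delta += sim * np.sin(nb_phase - phase)
    phase = (phase + 0.2 * delta) % (2 * np.pi)
# Pixels inside one texture pull toward a common phase, while the weak coupling
# across the texture boundary lets the two regions settle at different phases,
# so the phase acts as a segmentation label.
```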

    Neural Expectation Maximization

    Many real-world tasks such as reasoning and physical interaction require identification and manipulation of conceptual entities. A first step towards solving these tasks is the automated discovery of distributed symbol-like representations. In this paper, we explicitly formalize this problem as inference in a spatial mixture model where each component is parametrized by a neural network. Based on the Expectation Maximization framework, we then derive a differentiable clustering method that simultaneously learns how to group and represent individual entities. We evaluate our method on the (sequential) perceptual grouping task and find that it is able to accurately recover the constituent objects. We demonstrate that the learned representations are useful for next-step prediction. Comment: Accepted to NIPS 2017.
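    The code below is a minimal sketch of the idea in the abstract, not the authors' implementation: K latent vectors parametrize per-pixel Bernoulli means through a shared decoder, an E-step computes soft pixel responsibilities, and an M-step updates the latents by gradient descent on the expected negative log-likelihood. The decoder architecture, data, and hyperparameters are placeholders; the published method also trains the decoder weights across sequences, which is omitted here.

```python
# Hedged sketch of differentiable EM in a neural spatial mixture.
import torch

D, K, latent = 64, 3, 8                       # pixels, components, latent size
decoder = torch.nn.Sequential(torch.nn.Linear(latent, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, D), torch.nn.Sigmoid())
x = (torch.rand(D) > 0.5).float()             # a binary "image" (placeholder data)
theta = torch.randn(K, latent, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.1)       # only the latents are updated here

for step in range(50):
    psi = decoder(theta)                                  # (K, D) Bernoulli means
    log_lik = x * torch.log(psi + 1e-6) + (1 - x) * torch.log(1 - psi + 1e-6)
    with torch.no_grad():                                 # E-step: responsibilities
        gamma = torch.softmax(log_lik, dim=0)             # (K, D) soft assignment
    loss = -(gamma * log_lik).sum()                       # M-step: expected NLL
    opt.zero_grad(); loss.backward(); opt.step()
# gamma now gives a per-pixel soft grouping into K entities; theta holds each
# entity's distributed representation.
```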

    Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception

    A neural network model of synchronized oscillations in visual cortex is presented to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these experiments, synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained for single bar stimuli and also across two disconnected, but colinear, bars moving in the same direction. Our model and computer simulations obtain these synchrony results across both single and double bar stimuli using different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as nearest-neighbor and randomly coupled models. Air Force Office of Scientific Research (90-0175); Army Research Office (DAAL-03-88-K0088); Defense Advanced Research Projects Agency (90-0083); National Aeronautics and Space Administration (NGT-50497)

    High frequency oscillations as a correlate of visual perception

    Author's accepted manuscript (peer-reviewed postprint). Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. The definitive version was published in International Journal of Psychophysiology, 79(1), 2011, DOI 10.1016/j.ijpsycho.2010.07.004.

    A toolbox for animal call recognition

    Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world's biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal's proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
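    As a loose illustration of the bootstrap idea mentioned above (not the toolbox's actual design), the sketch below seeds a simple spectrogram-template recognizer with a few labelled clips and grows its template set from its own most confident detections on unlabelled audio. The feature choice (normalised log-spectrogram patches), cosine similarity, threshold, and fixed-length clips are all assumptions for illustration.

```python
# Hypothetical bootstrap sketch: template recognizer refined from its own
# confident detections. Assumes fixed-length clips so spectrogram patches align.
import numpy as np
from scipy.signal import spectrogram

def call_features(audio, fs=22050):
    _, _, sxx = spectrogram(audio, fs=fs, nperseg=512)
    feat = np.log(sxx + 1e-9)
    return (feat - feat.mean()) / (feat.std() + 1e-9)    # roughly volume-invariant

def score(feat, templates):
    """Best cosine similarity between a clip's features and any template."""
    return max(np.sum(feat * t) / (np.linalg.norm(feat) * np.linalg.norm(t) + 1e-9)
               for t in templates)

def bootstrap(seed_clips, unlabelled_clips, rounds=3, threshold=0.8):
    templates = [call_features(c) for c in seed_clips]   # start from little data
    pool = [call_features(c) for c in unlabelled_clips]
    for _ in range(rounds):
        confident = [f for f in pool if score(f, templates) > threshold]
        pool = [f for f in pool if score(f, templates) <= threshold]
        templates.extend(confident)                      # refine with detections
    return templates

fs = 22050
seed = [np.random.randn(fs) for _ in range(2)]           # two labelled 1 s clips
unlab = [np.random.randn(fs) for _ in range(20)]         # unlabelled recordings
templates = bootstrap(seed, unlab)
```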