    Computational Neural Models of Spatial Integration in Perceptual Grouping

    Recent developments in the neural computational modeling of perceptual grouping are described with reference to a newly proposed taxonomy that formalizes mechanisms of spatial integration. This notational framework and nomenclature are introduced to clarify key properties common to all or most models, while permitting the unique attributes of each approach to be examined independently. In the models considered, the strength of spatial integration is always some function of the distances and relative alignments, in perceptual space, of the centers of units representing orientational features or energy in a visual scene. We discuss the significance of variations in the constituents of an activation function for spatial integration, and also consider the larger modeling framework in which this function is applied in each approach. We further discuss the relationship between feedforward and feedback mechanisms, and the role of self-organization as a core principle underlying the establishment of spatial-integration mechanisms. The relationship of the grouping models to models of other visual competencies is considered with respect to prospects for future research.
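The kind of activation function this abstract describes, where support between two oriented units depends on their distance and relative alignment, can be sketched minimally as follows. The Gaussian form and the `sigma_d`/`sigma_t` parameters are illustrative assumptions, not taken from any specific model in the review.

```python
import math

def grouping_strength(d, theta_i, theta_j, sigma_d=1.0, sigma_t=0.5):
    """Illustrative spatial-integration kernel: the support between two
    oriented units falls off with their distance d and with the
    misalignment of their orientations theta_i, theta_j (radians).
    Gaussian falloffs and parameter values are assumptions for the sketch."""
    distance_term = math.exp(-(d ** 2) / (2 * sigma_d ** 2))
    misalignment = abs(theta_i - theta_j)
    alignment_term = math.exp(-(misalignment ** 2) / (2 * sigma_t ** 2))
    return distance_term * alignment_term

# Nearby, co-aligned units support each other strongly...
near_aligned = grouping_strength(0.5, 0.0, 0.0)
# ...while distant or misaligned units contribute little.
far_misaligned = grouping_strength(3.0, 0.0, math.pi / 2)
assert near_aligned > far_misaligned
```

The models surveyed differ precisely in how such a kernel's constituents (distance term, alignment term, and their combination) are defined and in the larger framework applying it.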

    Effective influences in neuronal networks : attentional modulation of effective influences underlying flexible processing and how to measure them

    Selective routing of information between brain areas is a key prerequisite for flexible adaptive behaviour. It allows the brain to focus on relevant information and to ignore potentially distracting influences. Selective attention is the psychological process that controls this preferential processing of relevant information. The neuronal network structures and dynamics, and the attentional mechanisms by which this routing is enabled, are not fully clarified. Based on previous experimental findings and theories, a network model is proposed which reproduces a range of results from the attention literature. It depends on shifting the phase relations between oscillating neuronal populations to modulate the effective influence of synapses. This network model might serve as a generic routing motif throughout the brain. The attentional modifications of activity in this network are investigated experimentally and found to employ two distinct channels to influence processing: facilitation of relevant information and independent suppression of distracting information. These findings are in agreement with the model and previously unreported at the level of neuronal populations. Furthermore, effective influence in dynamical systems is investigated more closely. Due to the lack of a theoretical underpinning for measurements of influence in non-linear dynamical systems such as neuronal networks, unsuited measures are often applied to experimental data, which can lead to erroneous conclusions. Based on a central theorem in dynamical systems, a novel theory of effective influence is developed. Measures derived from this theory are demonstrated to capture the time-dependent effective influence and the asymmetry of influences in model systems and experimental data. This new theory holds the potential to uncover previously concealed interactions in generic non-linear systems studied in a range of disciplines, such as neuroscience, ecology, economics, and climatology.
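The routing principle the abstract describes, where the phase relation between oscillating populations gates how strongly a sender drives a receiver, can be illustrated with a toy calculation. The rectified-sinusoid model below is a simplifying assumption for the sketch, not the thesis's actual network model.

```python
import math

def effective_gain(phase_lag, n=1000):
    """Toy communication-through-coherence calculation: a sender's
    oscillatory output drives the receiver only when it arrives during
    the receiver's excitable phase. Averages the product of rectified
    sender activity and receiver excitability over one cycle."""
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        sender = max(0.0, math.sin(t))            # rectified sender activity
        gate = max(0.0, math.sin(t + phase_lag))  # receiver excitability window
        total += sender * gate
    return total / n

# An aligned phase relation transmits the input strongly; an anti-phase
# relation suppresses the very same input.
assert effective_gain(0.0) > effective_gain(math.pi)
```

Shifting `phase_lag` thus changes the effective influence of a fixed synaptic connection without changing the connection itself, which is the mechanism the proposed model relies on.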

    Recurrent Spiking Neural Network Learning Based on a Competitive Maximization of Neuronal Activity

    Spiking neural networks (SNNs) are believed to be highly computationally and energy-efficient for specific real-time neurochip hardware solutions. However, there is a lack of learning algorithms for complex SNNs with recurrent connections that are comparable in efficiency with back-propagation techniques and capable of unsupervised training. Here we suppose that each neuron in a biological neural network tends to maximize its activity in competition with other neurons, and put this principle at the basis of a new SNN learning algorithm. In this way, a spiking network with learned feed-forward, reciprocal, and intralayer inhibitory connections is applied to digit recognition on the MNIST database. It has been demonstrated that this SNN can be trained without a teacher after a short supervised initialization of weights by the same algorithm. It has also been shown that neurons are grouped into families of hierarchical structures corresponding to different digit classes and their associations. This property is expected to be useful for reducing the number of layers in deep neural networks and for modeling the formation of various functional structures in a biological nervous system. Comparison of the learning properties of the suggested algorithm with those of the Sparse Distributed Representation approach shows similarity in coding but also some advantages of the former. The basic principle of the proposed algorithm is believed to be practically applicable to the construction of much more complicated SNNs solving diverse tasks. We refer to this new approach as “Family-Engaged Execution and Learning of Induced Neuron Groups”, or FEELING.
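The core principle stated above, each unit maximizing its activity in competition with the others, is the classic competitive-learning motif. The rate-based winner-take-all step below is a generic sketch of that motif, not the FEELING algorithm itself, and all names and parameters are illustrative.

```python
import random

def compete_and_learn(weights, x, lr=0.1):
    """Minimal competitive-learning step: each unit's activity is its
    weighted input, the most active unit wins (standing in for lateral
    inhibition), and only the winner moves its weights toward the input.
    A generic sketch of the competition principle, not FEELING itself."""
    activities = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in weights]
    winner = max(range(len(weights)), key=lambda i: activities[i])
    weights[winner] = [w_i + lr * (x_i - w_i)
                       for w_i, x_i in zip(weights[winner], x)]
    return winner

random.seed(0)
weights = [[random.random() for _ in range(4)] for _ in range(2)]
pattern = [1.0, 0.0, 1.0, 0.0]
for _ in range(50):
    w = compete_and_learn(weights, pattern)
# The repeatedly winning unit's weights converge toward the input pattern,
# so distinct units come to specialize on distinct input classes.
```

In the paper's spiking setting the same competition is realized through learned inhibitory connections and spike timing rather than an explicit `max`, and repeated co-winners are what group into the "families" of the FEELING acronym.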