2,417 research outputs found

    Adaptive Resonance Theory

    Understanding visual map formation through vortex dynamics of spin Hamiltonian models

    Pattern formation in orientation and ocular dominance columns is one of the most investigated problems in the study of the brain. Starting from a known cortical structure, we build spin-like Hamiltonian models with long-range interactions of the Mexican-hat type. These Hamiltonian models allow a coherent interpretation of the diverse phenomena in visual map formation with the help of the relaxation dynamics of spin systems. In particular, we explain various self-organization phenomena in orientation and ocular dominance map formation, including pinwheel annihilation and its dependence on the columnar wave vector and boundary conditions. Comment: 4 pages, 15 figures
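
    The abstract does not give the model equations, so the following is only a rough sketch of the general approach it describes: orientation preference is treated as an XY-like spin field with doubled angles (orientation is π-periodic), coupled through a long-range Mexican-hat (difference-of-Gaussians) kernel, and relaxed by repeatedly aligning each spin with its local field. The grid size, kernel parameters and iteration count are illustrative assumptions, not values from the paper.

```python
# A rough sketch (not the authors' exact model): orientation preference is an
# XY-like "spin" with doubled angle (orientation is pi-periodic), coupled through
# a long-range Mexican-hat (difference-of-Gaussians) kernel. Relaxation repeatedly
# aligns each spin with its kernel-weighted neighbourhood field, which typically
# yields a columnar map with pinwheel singularities that annihilate as the map
# coarsens. Grid size, kernel widths and step count are illustrative assumptions.
import numpy as np

def mexican_hat_kernel(n, sigma_e=3.0, sigma_i=6.0, a_e=1.0, a_i=0.5):
    """Difference-of-Gaussians interaction kernel on an n x n periodic grid."""
    x = np.fft.fftfreq(n) * n                    # lattice coordinates with wrap-around
    dx, dy = np.meshgrid(x, x)
    r2 = dx**2 + dy**2
    return a_e * np.exp(-r2 / (2 * sigma_e**2)) - a_i * np.exp(-r2 / (2 * sigma_i**2))

def relax_orientation_map(n=128, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, np.pi, size=(n, n))   # random initial orientations
    z = np.exp(2j * theta)                       # doubled-angle spin field
    K = np.fft.fft2(mexican_hat_kernel(n))       # interaction kernel in Fourier space
    for _ in range(steps):
        field = np.fft.ifft2(K * np.fft.fft2(z)) # local "molecular field" (convolution)
        z = field / (np.abs(field) + 1e-12)      # align each spin with its field
    return 0.5 * np.angle(z) % np.pi             # orientation preference in [0, pi)

orientation_map = relax_orientation_map()
```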

    Non-negative matrix factorization with sparseness constraints

    Non-negative matrix factorization (NMF) is a recently developed technique for finding parts-based, linear representations of non-negative data. Although it has been applied successfully in several domains, it does not always result in parts-based representations. In this paper, we show how explicitly incorporating the notion of 'sparseness' improves the decompositions that are found. Additionally, we provide complete MATLAB code both for standard NMF and for our extension. Our hope is that this will further the application of these methods to solving novel data-analysis problems.
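
    The paper's method constrains an explicit sparseness measure (and ships MATLAB code); the sketch below only conveys the idea, using standard multiplicative NMF updates with an L1 penalty on the encoding matrix. The matrix sizes, penalty weight and iteration count are assumptions.

```python
# A minimal sketch of sparseness-encouraged NMF. The paper constrains an explicit
# sparseness measure via a projection step (and provides MATLAB code); here we only
# illustrate the idea with multiplicative updates plus an L1 penalty on H.
# Matrix sizes, the penalty weight and the iteration count are assumptions.
import numpy as np

def sparse_nmf(V, rank, sparsity=0.1, iters=500, seed=0, eps=1e-9):
    """Factorise non-negative V (m x n) into W (m x rank) @ H (rank x n)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        # Multiplicative update for H with an L1 term encouraging sparse encodings.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
        # Standard multiplicative update for W.
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Fix the scale ambiguity so W cannot absorb the penalty on H.
        W /= np.linalg.norm(W, axis=0, keepdims=True) + eps
    return W, H

V = np.random.default_rng(1).random((100, 60))   # toy non-negative data matrix
W, H = sparse_nmf(V, rank=10)
```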

    Activity-dependent bidirectional plasticity and homeostasis regulation governing structure formation in a model of layered visual memory

    Poster presentation: Our work deals with the self-organization [1] of a memory structure that includes multiple hierarchical levels with massive recurrent communication within and between them. Such a structure has to provide a representational basis for the relevant objects to be stored and recalled in a rapid and efficient way. Assuming that the object patterns consist of many spatially distributed local features, a problem of parts-based learning is posed. We speculate on the neural mechanisms governing the process of structure formation and demonstrate their functionality on the task of human face recognition. The model we propose is based on two consecutive layers of distributed cortical modules, which in turn contain subunits receiving common afferents and bound by common lateral inhibition (Figure 1). In the initial state, the connectivity between and within the layers is homogeneous, with all types of synapses (bottom-up, lateral and top-down) being plastic. During iterative learning, the lower layer of the system is exposed to Gabor filter banks extracted from local points on the face images. Facing an unsupervised learning problem, the system is able to develop a synaptic structure capturing local features and their relations at the lower level, as well as the global identity of the person at the higher level of processing, gradually improving its recognition performance with learning time. ..
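
    The poster gives no equations, so the sketch below is a generic stand-in for a single module rather than the authors' model: a group of subunits receives a common afferent vector (for example, a local Gabor filter bank response), competes through shared divisive inhibition, and adapts its bottom-up weights with a Hebbian rule. The dimensions, learning rate and softmax temperature are assumptions.

```python
# A generic stand-in for one "module" (not the authors' model): subunits receive a
# common afferent vector, compete through shared divisive inhibition (softmax), and
# adapt their bottom-up weights with a Hebbian rule plus weight normalisation as a
# crude homeostatic control. Dimensions, learning rate and temperature are assumptions.
import numpy as np

class CompetitiveModule:
    def __init__(self, n_inputs, n_subunits, lr=0.05, temperature=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.random((n_subunits, n_inputs))      # bottom-up weights
        self.W /= self.W.sum(axis=1, keepdims=True)
        self.lr, self.T = lr, temperature

    def respond(self, x):
        """Afferent drive followed by competition via shared lateral inhibition."""
        drive = self.W @ x
        e = np.exp((drive - drive.max()) / self.T)
        return e / e.sum()

    def learn(self, x):
        """Hebbian update gated by the competitive response."""
        y = self.respond(x)
        self.W += self.lr * np.outer(y, x)
        self.W /= self.W.sum(axis=1, keepdims=True)      # keep weights bounded
        return y

# Usage: feed local feature responses (random stand-ins for Gabor outputs).
module = CompetitiveModule(n_inputs=40, n_subunits=8)
for x in np.random.default_rng(1).random((1000, 40)):
    module.learn(x)
```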

    A global decision-making model via synchronization in macrocolumn units

    Poster presentation: Introduction: Here we address the problem of integrating information about multiple objects and their positions in the visual scene. The primate visual system has little difficulty in rapidly achieving such integration when only a few objects are present. Unfortunately, computer vision still has great difficulty achieving comparable performance. It has been hypothesized that temporal binding or temporal separation could serve as a crucial mechanism for handling information about objects and their positions in parallel. Elaborating on this idea, we propose a neurally plausible mechanism for carrying local decision-making about "what" and "where" information forward to global multi-object recognition. ..
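
    The abstract states the hypothesis rather than a concrete algorithm; a standard Kuramoto model is one common way to illustrate binding by synchronization, with oscillators coupled only within a group (the features of one object) phase-locking while separate groups remain temporally segregated. All parameters below are illustrative assumptions, not taken from the poster.

```python
# The poster gives no equations; this is only an illustration of binding by
# synchronization using a standard Kuramoto model. Oscillators coupled within a
# group (features of one object) phase-lock, while uncoupled groups stay temporally
# segregated. All parameters are illustrative assumptions.
import numpy as np

def simulate(groups=2, per_group=5, K=2.0, dt=0.01, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = groups * per_group
    omega = rng.normal(10.0, 0.5, n)           # intrinsic frequencies (rad/s)
    theta = rng.uniform(0, 2 * np.pi, n)       # initial phases
    A = np.zeros((n, n))                       # coupling only within each group
    for g in range(groups):
        s = slice(g * per_group, (g + 1) * per_group)
        A[s, s] = 1.0
    np.fill_diagonal(A, 0.0)
    for _ in range(steps):
        phase_diff = theta[None, :] - theta[:, None]     # theta_j - theta_i
        theta += dt * (omega + (K / per_group) * (A * np.sin(phase_diff)).sum(axis=1))
    for g in range(groups):                    # within-group coherence should be near 1
        s = slice(g * per_group, (g + 1) * per_group)
        r = np.abs(np.exp(1j * theta[s]).mean())
        print(f"group {g}: coherence r = {r:.2f}")

simulate()
```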

    Dendritic inhibition enhances neural coding properties.

    The presence of a large number of inhibitory contacts at the soma and axon initial segment of cortical pyramidal cells has inspired a large and influential class of neural network models that use post-integration lateral inhibition as a mechanism for competition between nodes. However, inhibitory synapses also target the dendrites of pyramidal cells. The role of this dendritic inhibition in competition between neurons has not previously been addressed. We demonstrate, using a simple computational model, that such pre-integration lateral inhibition provides networks of neurons with useful representational and computational properties which are not provided by post-integration inhibition.
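
    The abstract does not reproduce the model equations, so the sketch below merely contrasts the two wiring schemes it refers to: post-integration inhibition acts on the nodes' summed outputs, while pre-integration inhibition suppresses a node's individual inputs according to rival nodes' weighted activity. The weight matrix, inhibition strength and update scheme are assumptions, not the authors' published rules.

```python
# A schematic contrast of the two inhibition schemes, not the authors' equations.
# Post-integration: each node sums its inputs, then rivals inhibit its output.
# Pre-integration: rival nodes suppress a node's individual inputs (its "dendrites")
# before the sum is taken. Weights, inhibition strength and updates are assumptions.
import numpy as np

def post_integration(W, x, steps=10, alpha=0.5):
    """Integrate first, then let each node be inhibited by the rivals' summed output."""
    y = np.maximum(0.0, W @ x)
    for _ in range(steps):
        y = np.maximum(0.0, W @ x - alpha * (y.sum() - y))
    return y

def pre_integration(W, x, steps=10, alpha=0.5):
    """Inhibit each node's inputs by the strongest rival claim before integration."""
    y = np.maximum(0.0, W @ x)
    for _ in range(steps):
        claims = W * y[:, None]            # each node's weighted claim on each input
        rival = np.array([np.delete(claims, j, axis=0).max(axis=0)
                          for j in range(W.shape[0])])
        y = (W * np.maximum(0.0, x - alpha * rival)).sum(axis=1)
    return y

# Two nodes with overlapping receptive fields; the input matches node 0's pattern.
W = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x = np.array([1.0, 1.0, 0.0])
print("post-integration:", post_integration(W, x))
print("pre-integration: ", pre_integration(W, x))
```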