10 research outputs found

    Factorization of natural 4 × 4 patch distributions

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-30212-4_15. Revised and Selected Papers of ECCV 2004 Workshop SMVP 2004, Prague, Czech Republic, May 16, 2004.
    The lack of sufficient machine-readable images makes the direct computation of natural-image 4 × 4 block statistics impossible, and one has to resort to indirect, approximate methods that reduce the domain space. A natural approach is to collect statistics over compressed images: if the reconstruction quality is good enough, these statistics will be sufficiently representative. However, easier statistics collection requires that the method used provide a uniform representation of the compression information across all patches, something for which codebook techniques are well suited. We follow this approach here, using a fractal-compression-inspired quantization scheme that approximates a given patch B by a triplet (D_B, μ_B, σ_B), with σ_B the patch's contrast, μ_B its brightness, and D_B a codebook approximation to the mean-variance normalization (B − μ_B)/σ_B of B. The resulting reduction of the domain space makes feasible the computation of entropy and mutual information estimates which, in turn, suggest factorizing the approximation p(B) ≃ p(D_B, μ_B, σ_B) as p(D_B, μ_B, σ_B) ≃ p(D_B) p(μ) p(σ) Φ(‖∇‖), with Φ being a high-contrast correction.
    With partial support of Spain's CICyT, TIC 01–57.
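    The quantization step described above can be sketched as follows. This is a minimal illustration only: the codebook contents, the Euclidean nearest-codeword rule, and the degenerate-contrast guard are assumptions, not the authors' implementation.

```python
import numpy as np

def quantize_patch(B, codebook, eps=1e-8):
    """Approximate a 4x4 patch B by a triplet (D_B, mu_B, sigma_B):
    mu_B is the patch brightness (mean), sigma_B its contrast (std),
    and D_B the index of the codeword closest to the mean-variance
    normalization (B - mu_B) / sigma_B."""
    mu = float(B.mean())
    sigma = float(B.std())
    normalized = (B - mu) / max(sigma, eps)   # guard against flat patches
    # Nearest codeword under Euclidean distance (an assumed metric);
    # codebook rows are flattened 16-component codewords.
    dists = np.sum((codebook - normalized.ravel()) ** 2, axis=1)
    return int(np.argmin(dists)), mu, sigma

def reconstruct(d, mu, sigma, codebook):
    """Invert the triplet back into an approximate 4x4 patch."""
    return codebook[d].reshape(4, 4) * sigma + mu
```

    Estimates for the factorized approximation p(D_B) p(μ) p(σ) would then be histograms of the three triplet components collected over many patches.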

    Representing Where along with What Information in a Model of a Cortical Patch

    Behaving in the real world requires flexibly combining and maintaining information about both continuous and discrete variables. In the visual domain, several lines of evidence show that neurons in some cortical networks can simultaneously represent information about the position and identity of objects, and can maintain this combined representation when the object is no longer present. The underlying network mechanism for this combined representation is, however, unknown. In this paper, we approach this issue through a theoretical analysis of recurrent networks. We present a model of a cortical network that can retrieve information about the identity of objects from incomplete transient cues, while simultaneously representing their spatial position. Our results show that two factors are important in making this possible: A) a metric organisation of the recurrent connections, and B) a spatially localised change in the linear gain of neurons. Metric connectivity enables a localised retrieval of information about object identity, while gain modulation ensures localisation at the correct position. Importantly, we find that the amount of information the network can retrieve and retain about identity is strongly affected by the amount of information it maintains about position. This balance can be controlled by global signals that change the neuronal gain. These results show that anatomical and physiological properties which have long been known to characterise cortical networks naturally endow them with the ability to maintain a conjunctive representation of the identity and location of objects.
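    The two ingredients the abstract highlights, metric (distance-dependent) recurrent connectivity and a spatially localised gain change, can be sketched minimally on a toy ring network. The exponential kernel, step-shaped gain profile, and threshold-linear update below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def metric_hebbian_weights(patterns, positions, length_scale):
    """Hebbian couplings J_ij multiplied by a kernel that decays with
    the ring distance between neurons i and j (metric connectivity)."""
    N = positions.size
    J = patterns.T @ patterns / N        # Hebbian term over stored patterns
    np.fill_diagonal(J, 0.0)             # no self-coupling
    d = np.abs(positions[:, None] - positions[None, :])
    d = np.minimum(d, N - d)             # distance on a ring
    return J * np.exp(-d / length_scale)

def gain_profile(positions, center, width, g_low=0.5, g_high=2.0):
    """Spatially localised increase in the linear gain of neurons."""
    d = np.abs(positions - center)
    d = np.minimum(d, positions.size - d)
    return np.where(d < width, g_high, g_low)

def step(r, J, gain, cue):
    """One threshold-linear update; the gain multiplies the transfer function,
    so high-gain neurons respond more strongly to the same input."""
    return gain * np.maximum(J @ r + cue, 0.0)
```

    In this toy setting the gain bump plays the role of the "where" signal: it biases retrieval of the cued pattern toward the high-gain location.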

    On the Local-Field Distribution in Attractor Neural Networks

    No full text
    In this paper a simple two-layer neural network model, similar to the one studied by D. Amit and N. Brunel [11], is investigated within the mean-field approximation. The distributions of the local fields are derived analytically and compared to those obtained in ref. [11]. The dynamic properties are discussed and the basin of attraction in a suitable parameter space is found. A procedure for driving the system into a basin of attraction by imposing a regulation on the network is proposed. An outer stimulus is shown to have a destructive influence on the attractor, forcing the latter to disappear if the distribution of the stimulus has high enough variance or if the stimulus has a spatial structure with sufficient contrast. The techniques used in this paper to obtain the analytical results can be applied to more complex topologies of linked recurrent neural networks. Keywords: attractor neural networks, basins of attraction, local fields, learnability, stability...
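    For intuition about local-field distributions, a standard single-layer Hopfield network can serve as a stand-in (the paper's two-layer model and its parameters are not reproduced here). In a retrieval state, mean-field theory predicts that the aligned local fields h_i ξ_i^1 are approximately Gaussian, with mean ≈ 1 from the condensed pattern and standard deviation ≈ sqrt((P − 1)/N) from the crosstalk of the other stored patterns. A minimal numerical check of that prediction:

```python
import numpy as np

def aligned_local_fields(N=2000, P=21, seed=0):
    """Local fields h_i = sum_j J_ij s_j of a Hebbian (Hopfield) network,
    evaluated in the retrieval state s = xi^1 and aligned with that pattern."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1.0, 1.0], size=(P, N))  # P random binary patterns
    J = xi.T @ xi / N                          # Hebbian couplings
    np.fill_diagonal(J, 0.0)                   # no self-coupling
    h = J @ xi[0]                              # fields at s = xi^1
    return h * xi[0]                           # aligned fields h_i * xi_i^1
```

    With N = 2000 and P = 21 the predicted crosstalk width is sqrt(20/2000) = 0.1, and the empirical mean and standard deviation of the returned array land close to 1 and 0.1 respectively.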

    Spatial asymmetric retrieval states in symmetric Hebb network with uniform connectivity

    No full text
    Available from Consiglio Nazionale delle Ricerche - Biblioteca Centrale, P.le Aldo Moro 7, Rome / CNR - Consiglio Nazionale delle Ricerche. SIGLE record, Italy.