
    Slowness and Sparseness Lead to Place, Head-Direction, and Spatial-View Cells

    We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded into a localized place-field, head-direction, or view representation, by sparse coding. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
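
    The building block of the hierarchy is Slow Feature Analysis. As a rough illustration of the principle (not the paper's implementation; the full model applies many such nodes to nonlinearly expanded image patches), a minimal linear SFA step in NumPy could look as follows:

        import numpy as np

        def linear_sfa(x, n_slow=2):
            """Find projections of a time series x (T x D) that vary as slowly as possible."""
            x = x - x.mean(axis=0)
            # Whiten: rotate onto principal components and normalize their variance.
            eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
            keep = eigval > 1e-10
            z = x @ (eigvec[:, keep] / np.sqrt(eigval[keep]))
            # The slowest features are the minor eigenvectors of the covariance
            # of the temporal differences of the whitened signal.
            _, d_eigvec = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
            return z @ d_eigvec[:, :n_slow]

    In the model described above, the distributed, grid-like output of the top SFA layer is then localized by an additional sparse-coding stage (the abstract does not name the specific algorithm) to yield place, head-direction, or view cells.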

    Unsupervised machine learning for detection of phase transitions in off-lattice systems II. Applications

    We outline how principal component analysis (PCA) can be applied to particle configuration data to detect a variety of phase transitions in off-lattice systems, both in and out of equilibrium. Specifically, we discuss its application to study 1) the nonequilibrium random organization (RandOrg) model that exhibits a phase transition from quiescent to steady-state behavior as a function of density, 2) orientationally and positionally driven equilibrium phase transitions for hard ellipses, and 3) compositionally driven demixing transitions in the non-additive binary Widom-Rowlinson mixture.
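
    As a rough sketch of this workflow (hypothetical function names; the neighbor-distance descriptor used here is only one possible featurization and not necessarily the one in the paper), each particle configuration is first mapped to a fixed-length feature vector, and PCA then supplies an order-parameter-like score:

        import numpy as np

        def featurize(positions, n_neighbors=20):
            """Map one configuration (N x d particle positions) to a fixed-length
            descriptor: per-particle sorted neighbor distances, averaged over particles."""
            dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
            np.fill_diagonal(dist, np.inf)
            return np.sort(dist, axis=1)[:, :n_neighbors].mean(axis=0)

        def leading_pc_score(configs):
            """Project featurized configurations onto the first principal component.
            A sharp change of this score across a control parameter (density,
            composition, ...) flags a candidate phase transition."""
            X = np.array([featurize(c) for c in configs])
            X -= X.mean(axis=0)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            return X @ vt[0]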

    A Detail Based Method for Linear Full Reference Image Quality Prediction

    In this paper, a novel Full Reference method is proposed for image quality assessment, using the combination of two separate metrics to measure the perceptually distinct impact of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed as a predicted version of the original gradient, plus a gradient residual. It is assumed that the detail attenuation identifies the detail loss, whereas the gradient residuals describe the spurious details. It turns out that the perceptual impact of detail losses is roughly linear with the loss of the positional Fisher information, while the perceptual impact of the spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified for three independent popular databases. The method allowed alignment and merging of DMOS data coming from these different databases to a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale setting is possible by the analysis of a single image affected by additive noise. Comment: 15 pages, 9 figures. Copyright notice: the paper was accepted for publication in the IEEE Transactions on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
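
    As an illustration of this two-term structure (a schematic sketch only: the decomposition here is global rather than local, and the coefficients a, b, c stand in for values that would be fitted to subjective DMOS data), the index can be written as:

        import numpy as np

        def detail_based_index(grad_ref, grad_imp, a=1.0, b=1.0, c=0.0):
            """Schematic quality index: affine combination of a detail-loss term and a
            logarithmic signal-to-residual term, both computed from image gradients."""
            grad_ref = grad_ref.ravel().astype(float)
            grad_imp = grad_imp.ravel().astype(float)
            # Least-squares prediction of the impaired gradient from the reference
            # gradient; the residual then captures spurious details.
            gain = grad_ref @ grad_imp / (grad_ref @ grad_ref + 1e-12)
            predicted = gain * grad_ref
            residual = grad_imp - predicted
            # Detail loss: reference-gradient energy not preserved by the prediction.
            detail_loss = np.sum((grad_ref - predicted) ** 2) / (np.sum(grad_ref ** 2) + 1e-12)
            # Spurious details: logarithmic measure of the residual relative to the signal.
            spurious = np.log10(1.0 + np.sum(residual ** 2) / (np.sum(predicted ** 2) + 1e-12))
            return a * detail_loss + b * spurious + c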

    How Does Our Visual System Achieve Shift and Size Invariance?

    The question of shift and size invariance in the primate visual system is discussed. After a short review of the relevant neurobiology and psychophysics, a more detailed analysis of computational models is given. The two main types of networks considered are the dynamic routing circuit model and invariant feature networks, such as the neocognitron. Some specific open questions in the context of these models are raised and possible solutions discussed.

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.

    Display symmetry affects positional specificity in same–different judgment of pairs of novel visual patterns

    Deciding whether a novel visual pattern is the same as or different from a previously seen reference is easier if both stimuli are presented to the same rather than to different locations in the field of view (Foster & Kahn (1985). Biological Cybernetics, 51, 305–312; Dill & Fahle (1998). Perception and Psychophysics, 60, 65–81). We investigated whether pattern symmetry interacts with the effect of translation. Patterns were small dot-clouds which could be mirror-symmetric or asymmetric. Translations were displacements of the visual pattern symmetrically across the fovea, either left–right or above–below. We found that same–different discriminations were worse (less accurate and slower) for translated patterns, to an extent which in general was not influenced by pattern symmetry, or pattern orientation, or direction of displacement. However, if the displaced pattern was a mirror image of the original one (along the trajectory of the displacement), then performance was largely invariant to translation. Both positional specificity and its reduction in symmetric displays may be explained by location-specific pre-processing of the visual input.