1,875 research outputs found

    Cortical spatio-temporal dimensionality reduction for visual grouping

    The visual systems of many mammals, including humans, are able to integrate the geometric information of visual stimuli and to perform cognitive tasks already at the first stages of cortical processing. This is thought to result from a combination of mechanisms, including feature extraction at the single-cell level and geometric processing by means of cell connectivity. We present a geometric model of such connectivity in the space of detected features associated with spatio-temporal visual stimuli, and show how it can be used to obtain low-level object segmentation. The main idea is to define a spectral clustering procedure with anisotropic affinities over datasets consisting of embeddings of the visual stimuli into higher-dimensional spaces. The neural plausibility of the proposed arguments is also discussed.
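    A minimal sketch of such a pipeline, assuming a Gabor-based lifting into (x, y, θ) and a Gaussian anisotropic affinity; all helper names and parameter values below are illustrative, not the authors' implementation:

```python
# Hypothetical sketch: lift a stimulus with oriented Gabor filters, then
# group the lifted features by spectral clustering with anisotropic affinities.
import numpy as np
from scipy.signal import convolve2d

def gabor(theta, sigma=3.0, lam=6.0, size=15):
    """Gabor patch at orientation theta (radians); illustrative parameters."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def lift(img, n_theta=16):
    """Embed a 2-D stimulus into (x, y, theta) via oriented filter energy."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    energy = np.stack([np.abs(convolve2d(img, gabor(t), mode="same"))
                       for t in thetas])
    return energy, thetas

def anisotropic_affinity(pts, sigma_pos=4.0, sigma_ori=0.3):
    """Affinity decaying at different rates in position and orientation.
    pts: array of shape (n, 3) with rows (x, y, theta)."""
    dp = np.linalg.norm(pts[:, None, :2] - pts[None, :, :2], axis=-1)
    do = np.abs(pts[:, None, 2] - pts[None, :, 2])
    do = np.minimum(do, np.pi - do)              # orientation is pi-periodic
    return np.exp(-(dp / sigma_pos) ** 2 - (do / sigma_ori) ** 2)

def spectral_groups(A, k=2):
    """Leading eigenvectors of the normalized affinity; clustering these
    (e.g. with k-means) yields the low-level segmentation."""
    d = A.sum(axis=1)
    L = A / np.sqrt(np.outer(d, d))
    _, v = np.linalg.eigh(L)
    return v[:, -k:]
```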

    Local and global gestalt laws: A neurally based spectral approach

    A mathematical model of figure-ground articulation is presented that takes into account both local and global gestalt laws and is compatible with the functional architecture of the primary visual cortex (V1). In particular, the local gestalt law of good continuation is described by means of suitable connectivity kernels that are derived from Lie group theory and are neurally implemented in the long-range connectivity of V1. Different kernels are compatible with the geometric structure of cortical connectivity; they are derived as the fundamental solutions of the Fokker-Planck, sub-Riemannian Laplacian, and isotropic Laplacian equations. The kernels are used to construct matrices of connectivity among the features present in a visual stimulus. Global gestalt constraints are then introduced in terms of a spectral analysis of the connectivity matrix, showing that this processing can be implemented cortically in V1 by mean-field neural equations. This analysis groups local features and singles out the perceptual units with the highest saliency. Numerical simulations are performed and results are obtained by applying the technique to a number of stimuli. (Comment: submitted to Neural Computation.)
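    As a concrete illustration, a fundamental solution of the Fokker-Planck equation of this kind can be approximated by Monte Carlo simulation of the underlying stochastic direction process; this is a hedged sketch with illustrative step sizes, not the paper's numerical scheme:

```python
import numpy as np

def fokker_planck_kernel(n_paths=50_000, n_steps=60, dt=0.1, sigma=0.4,
                         extent=6.0, bins=64):
    """Approximate a V1 connectivity kernel as the transition density of the
    direction process dx = cos(theta) dt, dy = sin(theta) dt,
    dtheta = sigma dW, started at the origin with theta = 0."""
    rng = np.random.default_rng(0)
    x = np.zeros(n_paths); y = np.zeros(n_paths); th = np.zeros(n_paths)
    for _ in range(n_steps):
        x += np.cos(th) * dt
        y += np.sin(th) * dt
        th += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    # Histogram of endpoints: an empirical density over final positions.
    H, _, _ = np.histogram2d(x, y, bins=bins,
                             range=[[-extent, extent], [-extent, extent]])
    return H / H.sum()
```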

    A geometric model of multi-scale orientation preference maps via Gabor functions

    In this paper we present a new model for the generation of orientation preference maps in the primary visual cortex (V1), considering both orientation and scale features. We first model the functional architecture of V1 as a principal fiber bundle over the 2-dimensional retinal plane, with the intrinsic variables orientation and scale constituting a fiber over each retinal point; the set of receptive profiles of simple cells is located on the fiber, and each receptive profile is mathematically interpreted as a rotated Gabor function derived from an uncertainty principle. The visual stimulus is lifted into a 4-dimensional space, with coordinate variables position, orientation, and scale, through a linear filtering of the stimulus with Gabor functions. Orientation preference maps are then obtained by mapping the orientation value found from the lifting of a noise stimulus onto the 2-dimensional retinal plane. This corresponds to a Bargmann transform in the reducible representation of the group $\text{SE}(2)=\mathbb{R}^2\times S^1$. A comparison is provided with a previous model based on the Bargmann transform in the irreducible representation of the $\text{SE}(2)$ group, showing that the new model is more physiologically motivated. We then present simulation results for the construction of the orientation preference map using Gabor filters with different scales, and compare those results with the relevant neurophysiological findings in the literature.
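    A toy version of the map construction: filter a noise stimulus with a multi-scale Gabor bank and record, at each retinal position, the orientation of the maximal response (reusing the gabor helper from the first sketch; names and values are again illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

def orientation_preference_map(noise, thetas, sigmas):
    """At each position keep the orientation whose multi-scale Gabor
    energy is largest (a toy reading of the lifting described above)."""
    best = np.full(noise.shape, -np.inf)
    opm = np.zeros(noise.shape)
    for th in thetas:
        # pool filter energy over scales at this orientation
        e = sum(np.abs(convolve2d(noise, gabor(th, sigma=s, lam=2.0 * s),
                                  mode="same")) for s in sigmas)
        mask = e > best
        opm[mask] = th
        best[mask] = e[mask]
    return opm

# e.g.: orientation_preference_map(np.random.randn(128, 128),
#           np.linspace(0, np.pi, 16, endpoint=False), (2.0, 3.0, 4.5))
```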

    The constitution of visual perceptual units in the functional architecture of V1

    The scope of this paper is to consider a mean-field neural model which takes into account the functional neurogeometry of the visual cortex, modelled as the group of rotations and translations. The model generalizes well-known results of Bressloff and Cowan which, in the absence of input, account for hallucination patterns. The main result of our study is to show that, in the presence of a visual input, the eigenmodes of the linearized operator which become stable represent the perceptual units present in the image. The result is closely related to dimensionality reduction and clustering problems.
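    In schematic form (the notation here is illustrative, not taken from the paper), such mean-field dynamics and their linearization read:

```latex
% a(\xi,t): activity at the cortical point \xi = (x, y, \theta);
% J: connectivity kernel on the group of rotations and translations;
% h: feedforward visual input; \sigma: sigmoidal activation.
\partial_t a(\xi,t) = -\,a(\xi,t)
   + \sigma\!\Big( \int J(\xi,\xi')\, a(\xi',t)\, d\xi' + h(\xi) \Big)

% Linearizing around a stationary state \bar{a}, a perturbation u obeys
\partial_t u = \big( -\mathrm{Id} + \sigma'(\bar{a})\, J \big)\, u,

% and the eigenmodes of -\mathrm{Id} + \sigma'(\bar{a}) J that become stable
% in the presence of the input h are read out as perceptual units.
```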

    Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception

    Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movements that follow the motion of a visual scene. Here we describe a framework to generate random texture movies with controlled information content: Motion Clouds. These stimuli are defined by a generative model based on a controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixture of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation, with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically, propose an open-source Python-based implementation, and show examples of the use of this framework. We also propose extensions to other modalities such as color vision, touch, and audition.
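    A self-contained numpy sketch of this kind of construction: random Fourier phases shaped by an envelope concentrated on the plane of a full-field translation and around a preferred spatial frequency (parameter names and values are illustrative, not the published implementation):

```python
import numpy as np

def motion_cloud(n=128, nt=64, v=(1.0, 0.0), f0=0.12, b_f=0.2, b_v=0.5,
                 seed=0):
    """Random-phase texture movie whose spectral energy concentrates on the
    plane f_t = -(v_x f_x + v_y f_y) (translation at speed v) and around
    the radial spatial frequency f0."""
    fx, fy, ft = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n),
                             np.fft.fftfreq(nt), indexing="ij")
    fr = np.sqrt(fx**2 + fy**2) + 1e-9
    env_radial = np.exp(-np.log(fr / f0) ** 2 / (2 * b_f**2))   # log-Gaussian ring
    env_speed = np.exp(-(ft + v[0] * fx + v[1] * fy) ** 2
                       / (2 * (b_v * fr) ** 2))                 # speed plane
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random((n, n, nt)))         # random phases
    movie = np.fft.ifftn(env_radial * env_speed * phase).real
    return movie / np.abs(movie).max()
```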

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    (To appear in CVIU.) Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence to biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed to explain biological observations. Even though it is well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer-vision-task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation, and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with the approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, the paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for the much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.

    From receptive profiles to a metric model of V1

    In this work we show how to construct connectivity kernels induced by the receptive profiles of simple cells of the primary visual cortex (V1). These kernels are directly defined by the shape of such profiles: this provides a metric model for the functional architecture of V1, whose global geometry is determined by the reciprocal interactions between local elements. Our construction adapts to any bank of filters chosen to represent a set of receptive profiles, since it does not require any structure on the parameterization of the family. The connectivity kernel that we define carries a geometrical structure consistent with the well-known properties of long-range horizontal connections in V1, and it is compatible with the perceptual rules synthesized by the concept of association field. These characteristics are still present when the kernel is constructed from a bank of filters arising from an unsupervised learning algorithm. (Comment: 25 pages, 18 figures. Added acknowledgement.)
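    One way to realize such a parameterization-free construction, sketched here with an illustrative normalized-correlation kernel (not necessarily the paper's exact definition):

```python
import numpy as np

def profile_kernel(profiles):
    """K[i, j] = positive part of the normalized correlation between
    receptive profiles i and j; only the profile shapes enter, so any
    filter bank (including a learned one) can be plugged in."""
    P = np.stack([np.asarray(p, dtype=float).ravel() for p in profiles])
    P -= P.mean(axis=1, keepdims=True)           # zero-mean each profile
    P /= np.linalg.norm(P, axis=1, keepdims=True) + 1e-12
    return np.clip(P @ P.T, 0.0, None)           # keep the excitatory part
```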

    Geometry and dimensionality reduction of feature spaces in primary visual cortex

    Some geometric properties of the wavelet analysis performed by visual neurons are discussed and compared with experimental data. In particular, several relationships between the cortical morphologies and the parametric dependencies of extracted features are formalized and considered from a harmonic analysis point of view.

    Cortical Synchronization and Perceptual Framing

    How does the brain group different parts of an object into a coherent visual object representation? Different parts of an object may be processed by the brain at different rates and may thus become desynchronized. Perceptual framing is a process that resynchronizes cortical activities corresponding to the same retinal object. A neural network model is presented that is able to rapidly resynchronize desynchronized neural activities. The model provides a link between perceptual and brain data: model properties quantitatively simulate perceptual framing data, including psychophysical data about temporal order judgments and the reduction of threshold contrast as a function of stimulus length. The same model has earlier been used to explain data about illusory contour formation, texture segregation, shape-from-shading, 3-D vision, and cortical receptive fields. It thereby shows how many data may be understood as manifestations of a cortical grouping process that can rapidly resynchronize image parts which belong together in visual object representations. The model exhibits better synchronization in the presence of noise than without noise, a type of stochastic resonance, and synchronizes robustly when cells that represent different stimulus orientations compete. These properties arise when fast long-range cooperation and slow short-range competition interact via nonlinear feedback with cells that obey shunting equations. Office of Naval Research (N00014-92-J-1309, N00014-95-I-0409, N00014-95-I-0657, N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0334, F49620-92-J-0225)
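    As a toy illustration of the last point (not the authors' model), a network of shunting cells sharing a long-range cooperative signal pulls initially desynchronized activities onto a common trajectory:

```python
# Hypothetical toy network: each cell obeys a shunting equation
#   dx_i/dt = -A x_i + (B - x_i) E,   E = I + k * mean(x) + noise,
# where the shared excitation E stands in for fast long-range cooperation.
import numpy as np

def resynchronize(x0, steps=400, dt=0.05, A=1.0, B=1.0, I=0.5, k=0.6,
                  noise=0.02, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(steps):
        E = max(I + k * x.mean() + noise * rng.standard_normal(), 0.0)
        x += dt * (-A * x + (B - x) * E)     # shunting dynamics
        traj.append(x.copy())
    return np.array(traj)

# Differences between cells decay at rate A + E > 0, so activities that
# start out of step converge, a minimal analogue of resynchronization:
# resynchronize([0.1, 0.5, 0.9])[-1]  ->  nearly identical values
```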