
    A geometric model of multi-scale orientation preference maps via Gabor functions

    In this paper we present a new model for the generation of orientation preference maps in the primary visual cortex (V1), considering both orientation and scale features. First we model the functional architecture of V1 as a principal fiber bundle over the 2-dimensional retinal plane by introducing the intrinsic variables of orientation and scale. The intrinsic variables constitute a fiber over each point of the retinal plane, and the set of receptive profiles of simple cells is located on the fiber. Each receptive profile on the fiber is mathematically interpreted as a rotated Gabor function derived from an uncertainty principle. The visual stimulus is lifted into a 4-dimensional space, characterized by the coordinate variables of position, orientation and scale, through a linear filtering of the stimulus with Gabor functions. Orientation preference maps are then obtained by mapping the orientation value found from the lifting of a noise stimulus onto the 2-dimensional retinal plane. This corresponds to a Bargmann transform in the reducible representation of the group $\text{SE}(2)=\mathbb{R}^2\times S^1$. A comparison is provided with a previous model based on the Bargmann transform in the irreducible representation of the $\text{SE}(2)$ group, showing that the new model is more physiologically motivated. We then present simulation results for the construction of the orientation preference map using Gabor filters at different scales, and compare these results to the relevant neurophysiological findings in the literature.
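    As a rough illustration of the pipeline described above (not the authors' implementation), the sketch below builds a small bank of rotated Gabor filters at a few orientations and scales, filters a white-noise stimulus with it, and assigns each retinal position the orientation of the maximally responding filter. The filter sizes, the number of orientations and scales, and the use of SciPy's FFT-based convolution are illustrative assumptions.

```python
# A minimal sketch, assuming illustrative filter parameters (not the
# authors' implementation): build rotated Gabor filters at several
# orientations and scales, filter a white-noise stimulus, and keep the
# orientation of the maximally responding filter at each position.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, sigma, wavelength, theta):
    """Real-valued rotated Gabor patch of shape (size, size)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate the coordinate frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

rng = np.random.default_rng(0)
stimulus = rng.standard_normal((256, 256))        # white-noise stimulus

orientations = np.linspace(0.0, np.pi, 16, endpoint=False)
scales = [(4.0, 8.0), (8.0, 16.0)]                # illustrative (sigma, wavelength) pairs

# "Lift" the stimulus: one response map per (orientation, scale) pair.
responses = np.stack([
    np.abs(fftconvolve(stimulus, gabor(33, sigma, lam, theta), mode="same"))
    for theta in orientations
    for (sigma, lam) in scales
])

# Project back to the retinal plane: pool over scale, keep the preferred orientation.
pooled = responses.reshape(len(orientations), len(scales), 256, 256).max(axis=1)
orientation_map = orientations[np.argmax(pooled, axis=0)]
```

    In the paper's terms, the stacked response maps play the role of the stimulus lifted into the 4-dimensional position-orientation-scale space, and the final argmax projects the preferred orientation back onto the 2-dimensional retinal plane.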

    Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception

    Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movements that follow the motion of a visual scene. Here, we describe a framework for generating random texture movies with controlled information content, which we call Motion Clouds. These stimuli are defined using a generative model that is based on a controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixing of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically, propose an open-source Python-based implementation, and show examples of the use of this framework. We also propose extensions to other modalities such as color vision, touch, and audition.
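    The following is a minimal, from-scratch sketch of the random-phase texture idea behind Motion Clouds; it is not the published open-source package, and the envelope parameters below are illustrative assumptions rather than the authors' defaults. A movie is obtained by shaping complex white noise in the 3-dimensional Fourier domain with a Gaussian envelope concentrated around a preferred spatial frequency and around the plane of spatio-temporal frequencies consistent with a single translation speed, then inverse transforming.

```python
# A minimal, from-scratch sketch under assumed parameters; the published
# open-source implementation is more complete. White noise is shaped in
# the 3-D Fourier domain by a Gaussian envelope around a preferred spatial
# frequency and around the plane of frequencies of a single translation.
import numpy as np

N, T = 128, 64                  # frame size (pixels) and number of frames
sf_0, B_sf = 0.125, 0.05        # preferred spatial frequency and bandwidth (cycles/pixel)
vx, B_v = 1.0, 0.5              # horizontal speed (pixels/frame) and speed bandwidth

fx, fy, ft = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N),
                         np.fft.fftfreq(T), indexing="ij")
radius = np.sqrt(fx**2 + fy**2)
radius[0, 0, :] = np.inf        # drop the purely temporal (zero spatial frequency) components

# Gaussian ring around the preferred spatial frequency ...
env_sf = np.exp(-0.5 * ((radius - sf_0) / B_sf) ** 2)
# ... intersected with a Gaussian slab around the speed plane ft = -vx * fx.
env_speed = np.exp(-0.5 * ((ft + vx * fx) / (B_v * radius)) ** 2)
envelope = env_sf * env_speed

# Random phases: multiply the envelope by complex white noise and invert the FFT.
rng = np.random.default_rng(1)
noise = rng.standard_normal((N, N, T)) + 1j * rng.standard_normal((N, N, T))
movie = np.real(np.fft.ifftn(envelope * noise))
movie /= np.abs(movie).max()    # normalize contrast to [-1, 1]
```

    Because only the amplitude spectrum is constrained while the phases are random, each draw yields a different texture with the same controlled parameters, which is what makes such stimuli convenient for characterizing motion responses.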

    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints on auditory information in V1, for example, periphery versus fovea, or superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing that we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate about which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
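    For concreteness, here is a minimal sketch of the kind of cross-validated decoding ("read-out") analysis the abstract alludes to. It is not the authors' pipeline: the arrays are hypothetical placeholders standing in for V1 voxel activation patterns (trials × voxels) and for the auditory category presented on each trial.

```python
# A minimal sketch of a cross-validated decoding analysis of the kind the
# abstract alludes to (not the authors' pipeline). The arrays below are
# hypothetical placeholders: `patterns` stands for V1 voxel activation
# patterns (trials x voxels) and `sound_labels` for the auditory category
# presented on each trial.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels, n_classes = 120, 300, 3
patterns = rng.standard_normal((n_trials, n_voxels))       # placeholder data
sound_labels = rng.integers(0, n_classes, size=n_trials)   # placeholder labels

# Above-chance accuracy would indicate that auditory category information is
# present in early visual cortex activity patterns.
accuracy = cross_val_score(LogisticRegression(max_iter=1000),
                           patterns, sound_labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f} (chance = {1 / n_classes:.2f})")
```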