
    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill that allows us to reduce the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. In order to optimize tracking performance across time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking into account predictive cues, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis to test theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
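The framework of dynamic probabilistic inference mentioned above can be illustrated with a minimal sketch: a scalar Kalman filter that fuses noisy retinal-slip measurements with an internal prediction of target velocity. This is a common simplification of pursuit models, not the specific model of the reviewed studies; all parameter values are illustrative.

```python
import numpy as np

def kalman_pursuit(slips, q=0.01, r=1.0, v0=0.0, p0=1.0):
    """Scalar Kalman filter tracking target velocity from retinal slip.

    slips : noisy velocity measurements (retinal slip signals)
    q, r  : process and measurement noise variances -- illustrative
            values, not parameters from the reviewed studies
    """
    v, p = v0, p0
    estimates = []
    for z in slips:
        p = p + q                    # predict: uncertainty grows over time
        k = p / (p + r)              # Kalman gain: trust in the new sample
        v = v + k * (z - v)          # correct the prediction
        p = (1.0 - k) * p
        estimates.append(v)
    return estimates

rng = np.random.default_rng(0)
measurements = 10.0 + rng.normal(0.0, 1.0, size=200)   # target at 10 deg/s
est = kalman_pursuit(measurements)
```

The predictive step is what lets such a filter keep tracking through brief occlusions: the velocity estimate persists while the measurement gain adapts to the reliability of the retinal input.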

    Motion integration modulated by form information

    ISBN: 978-2-9532965-0-1
    We propose a model of motion integration modulated by form information, inspired by neurobiological data. Our dynamical system models several key features of the motion processing stream in the primate visual cortex. Thanks to a multi-layer architecture incorporating both feedforward-feedback and inhibitory lateral connections, our model is able to solve local motion ambiguities. One important feature of our model is to propose an anisotropic integration of motion based on form information. Our model can be implemented efficiently on GPU, and we show its properties on classical psychophysical examples. First, a simple read-out allows us to reproduce the dynamics of ocular following for a moving bar stimulus. Second, we show how our model is able to discriminate between extrinsic and intrinsic junctions present in the chopstick illusion. Finally, we show some promising results on real videos.

    Modelling the dynamics of motion integration with a new luminance-gated diffusion mechanism

    The dynamics of motion integration show striking similarities when observed at the neuronal, psychophysical, and oculomotor levels. Based on the inter-relation and complementary insights given by those dynamics, our goal was to test how basic mechanisms of dynamical cortical processing can be incorporated in a dynamical model to solve several aspects of 2D motion integration and segmentation. Our model is inspired by the hierarchical processing stages of the primate visual cortex: we describe the interactions between several layers processing local motion and form information through feedforward, feedback, and inhibitory lateral connections. Also, following perceptual studies concerning contour integration and physiological studies of receptive fields, we postulate that motion estimation takes advantage of another low-level cue, namely luminance smoothness along edges or surfaces, in order to gate recurrent motion diffusion. With such a model, we successfully reproduced the temporal dynamics of motion integration on a wide range of simple motion stimuli: line segments, rotating ellipses, plaids, and barber poles. Furthermore, we showed that the proposed computational rule of luminance-gated diffusion of motion information is sufficient to explain a large set of contextual modulations of motion integration and segmentation in more elaborate stimuli such as chopstick illusions, simulated aperture problems, or rotating diamonds. As a whole, in this paper we proposed a new basal luminance-driven motion integration mechanism as an alternative to less parsimonious models, we carefully investigated the dynamics of motion integration, and we established a distinction between simple and complex stimuli according to the kind of information required to solve their ambiguities.
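The luminance-gated diffusion rule can be caricatured in one dimension: local motion estimates diffuse between neighbors, but each diffusion step is gated by luminance similarity, so motion information stops propagating at a luminance edge. This is only a toy sketch of the gating idea, not the paper's multi-layer cortical model; the Gaussian gate and all constants are assumptions.

```python
import numpy as np

def gated_diffusion(motion, luminance, steps=100, rate=0.2, sigma_l=0.1):
    """Diffuse 1D motion estimates, gated by luminance smoothness.

    A diffusion step only mixes neighboring estimates when their
    luminance values are similar (gate -> 1); across a luminance
    edge the gate -> 0, so motion stops diffusing there.
    """
    m = motion.astype(float).copy()
    # gate between samples i and i+1, from the luminance difference
    gate = np.exp(-np.diff(luminance) ** 2 / (2 * sigma_l ** 2))
    for _ in range(steps):
        flux = gate * np.diff(m)          # exchange between neighbors
        m[:-1] += rate * flux
        m[1:] -= rate * flux
    return m

# two surfaces separated by a luminance edge at index 50
lum = np.concatenate([np.zeros(50), np.ones(50)])
mot = np.concatenate([np.full(50, 2.0), np.full(50, -1.0)])
mot[10] = 0.0                             # one noisy local estimate
out = gated_diffusion(mot, lum)
```

After diffusion, the noisy estimate at index 10 is pulled toward its own surface's motion, while the luminance edge keeps the two surfaces' motions segmented, which is the qualitative behavior the gating rule is meant to capture.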

    Motion clouds: model-based stimulus synthesis of natural-like random textures for the study of motion perception

    Choosing an appropriate set of stimuli is essential to characterize the response of a sensory system to a particular functional dimension, such as the eye movements following the motion of a visual scene. Here, we describe a framework to generate random texture movies with controlled information content: Motion Clouds. These stimuli are defined using a generative model that is based on a controlled experimental parametrization. We show that Motion Clouds correspond to a dense mixing of localized moving gratings with random positions. Their global envelope is similar to natural-like stimulation, with an approximate full-field translation corresponding to a retinal slip. We describe the construction of these stimuli mathematically and propose an open-source Python-based implementation. Examples of the use of this framework are shown. We also propose extensions to other modalities such as color vision, touch, and audition.
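A reduced (x, t) version of the synthesis idea can be sketched as random-phase noise shaped by a Gaussian spectral envelope centered on a mean spatial frequency and on the temporal frequencies implied by a translation at speed v. The published implementation works in full (x, y, t) space; the function and parameter names below are illustrative, not the package's actual API.

```python
import numpy as np

def motion_cloud_1d(nx=128, nt=128, fx0=0.1, v=1.0, b_f=0.03, seed=0):
    """Band-pass random-phase noise around a mean full-field translation.

    The envelope is a Gaussian centered on spatial frequency fx0 and on
    the temporal frequency ft = -v * fx implied by translation at speed v,
    i.e. mass concentrated along a constant-speed line in Fourier space.
    """
    fx = np.fft.fftfreq(nx)[:, None]
    ft = np.fft.fftfreq(nt)[None, :]
    env = (np.exp(-((np.abs(fx) - fx0) ** 2) / (2 * b_f ** 2))
           * np.exp(-((ft + v * fx) ** 2) / (2 * b_f ** 2)))
    env[0, 0] = 0.0                       # remove the DC component
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(nx, nt))
    movie = np.real(np.fft.ifft2(np.fft.fft2(noise) * env))
    return movie / movie.std()            # normalize contrast

mc = motion_cloud_1d()
```

Widening the bandwidth b_f trades the frequency precision of a single grating for the dense, natural-like mixture of components that defines a Motion Cloud.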

    Functional properties of feed-forward inhibition

    ISBN: 978-2-9532965-0-1
    Neurons receive a large number of excitatory and inhibitory synaptic inputs whose temporal interplay determines the spiking behavior. On average, excitation and inhibition balance each other, such that spikes are elicited by fluctuations. In addition, it has been shown in vivo that excitation and inhibition are correlated, with inhibition lagging excitation by only a few milliseconds (~6 ms), creating a small temporal integration window. This correlation structure could be induced by feed-forward inhibition (FFI), which has been shown to be present at many sites in the central nervous system. To characterize the functional properties of feed-forward inhibition, we constructed a simple circuit using spiking neurons with conductance-based synapses and applied spike pulse packets of defined strength and width. We found that the small temporal integration window induced by the FFI changes the integrative properties of the neuron: only transient stimuli could produce a response when the FFI was active, whereas without FFI the neuron responded to both steady and transient stimuli. In addition, the FFI increased the trial-by-trial precision.
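The circuit's key effect can be caricatured with a current-based leaky integrate-and-fire neuron whose inhibitory input is a delayed copy of the excitatory drive (the delay of ~6 ms follows the abstract; the study itself used conductance-based synapses, and all amplitudes and time constants here are illustrative).

```python
import numpy as np

def lif_response(drive, ffi=True, delay=6, g=1.0, tau=10.0, dt=1.0,
                 v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with optional feed-forward inhibition.

    Inhibition is a delayed (~6 ms), sign-flipped copy of the excitatory
    drive, opening only a brief temporal integration window.
    Returns the number of spikes emitted.
    """
    inh = np.zeros_like(drive)
    if ffi:
        inh[delay:] = g * drive[:-delay]
    v, spikes = 0.0, 0
    for e, i in zip(drive, inh):
        v += dt * (-v / tau + e - i)      # leaky integration of net input
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes

t = np.arange(200)
transient = np.where((t >= 50) & (t < 53), 1.0, 0.0)   # brief pulse packet
steady = np.full(200, 0.15)                            # sustained drive
```

With FFI active, the steady drive is cancelled after the inhibitory delay and never reaches threshold, while the brief packet slips through the integration window; without FFI, both inputs evoke spikes.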

    Towards a bio-inspired evaluation methodology for motion estimation models

    Offering a proper evaluation methodology is essential to continued progress in modelling the neural mechanisms of visual information processing. Currently, the evaluation of motion estimation models lacks a proper methodology for comparing their performance against the visual system. Here, we set the basis for such a new benchmark methodology, based on human visual performance as measured in psychophysics, ocular following, and neurobiology. This benchmark will enable comparisons between different kinds of models, but it will also challenge current motion estimation models and better characterize their properties with respect to visual cortex performance. To do so, we propose a database of image sequences taken from the neuroscience and psychophysics literature. In this article, we focus on two aspects of motion estimation: the dynamics of motion integration and the respective influence of 1D versus 2D cues. Then, since motion models may deal with different kinds of motion representations and scales, we define two general readouts based on a global motion estimation. These readouts, namely eye movements and perceived motion, will serve as a reference to compare simulated and experimental data. We evaluate the performance of several models on these data to establish the current state of the art. The models chosen for comparison have very different properties and internal mechanisms, such as feedforward normalisation of V1 and MT processing and recurrent feedback. As a whole, we provide the basis for a valuable evaluation methodology to unravel the fundamental mechanisms of the visual cortex in motion perception. Our database is freely available on the web, together with scoring instructions and results, at http://www-sop.inria.fr/neuromathcomp/software/motionpsychobench
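A minimal sketch of how a global-motion readout and a score against reference data might be computed, assuming a model outputs a dense motion field per frame. The array layout and the RMSE score are illustrative choices, not the benchmark's actual scoring code.

```python
import numpy as np

def global_readout(flow):
    """Collapse a (T, H, W, 2) motion field to one velocity per frame,
    a stand-in for a global readout such as eye movements."""
    return flow.reshape(flow.shape[0], -1, 2).mean(axis=1)

def score(model_flow, reference):
    """Root-mean-square error between model and reference readouts."""
    diff = global_readout(model_flow) - reference
    return float(np.sqrt((diff ** 2).mean()))

# toy check: a uniform rightward field scores 0 against a rightward reference
T, H, W = 5, 8, 8
flow = np.zeros((T, H, W, 2))
flow[..., 0] = 1.0                         # vx = 1, vy = 0 everywhere
ref = np.tile([1.0, 0.0], (T, 1))          # reference readout per frame
```

Reducing every model to a common readout of this kind is what makes models with very different internal representations (feedforward normalisation, recurrent feedback) comparable on the same stimuli.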

    A Simple Mechanism to Reproduce the Neural Solution of the Aperture Problem in Monkey Area MT [RR-6579]

    We propose a simple mechanism to reproduce the neural solution of the aperture problem in monkey area MT. More precisely, our goal is to propose a model able to reproduce the dynamical change of the preferred direction (PD) of an MT cell depending on the motion information contained in the input stimulus. The PD of an MT cell measured with drifting gratings differs from the one measured with a barber pole, which is highly dependent on its aspect ratio. For a barber pole, the PD evolves from the direction perpendicular to the drifting grating to a PD shifted according to the aspect ratio of the barber pole. The mechanisms underlying this dynamic are unknown (lateral connections, surround suppression, feedback from higher layers). Here, we show that a simple mechanism such as surround inhibition in V1 neurons can produce a significant shift in the PD of MT neurons, as observed with barber poles of different aspect ratios.
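One way to caricature the final state of such a mechanism is a vector average over 1D (edge) and 2D (line-ending) motion signals, with surround suppression setting the weight of the ending signal. This is a hypothetical toy readout invented for illustration, not the model of the report: angles, weights, and the linear scaling with aspect ratio are all assumptions.

```python
import numpy as np

def mt_preferred_direction(aspect_ratio, suppression=0.8):
    """Toy vector-average readout of the preferred-direction (PD) shift.

    Edge cells signal motion perpendicular to the grating (the 1D cue,
    here 90 deg); surround-suppressed cells at the line endings signal
    the 2D direction along the aperture's long axis (here 0 deg), with
    evidence growing with the number of terminators, i.e. the aspect
    ratio. All values are illustrative.
    """
    cue_1d = np.array([0.0, 1.0])         # unit vector at 90 degrees
    cue_2d = np.array([1.0, 0.0])         # unit vector at 0 degrees
    w_2d = suppression * aspect_ratio     # weighted terminator evidence
    v = cue_1d + w_2d * cue_2d            # pooled MT vector average
    return float(np.degrees(np.arctan2(v[1], v[0])))

pd_square = mt_preferred_direction(1.0)   # low aspect ratio
pd_long = mt_preferred_direction(3.0)     # elongated barber pole
```

Under these assumptions the PD rotates from near the grating-perpendicular direction toward the long axis as the aspect ratio grows, qualitatively matching the aspect-ratio dependence described in the abstract.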

    Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions

    Sensing the movement of fast objects within our visual environment is essential for controlling actions; it requires the online estimation of motion direction and speed. We probed human speed representation using ocular tracking of stimuli with different statistics. First, we compared ocular responses to single drifting gratings (DGs) with a given set of spatiotemporal frequencies to broadband motion clouds (MCs) of matched mean frequencies. The motion energy distributions of gratings and clouds are, respectively, point-like and ellipses oriented along the constant-speed axis. Sampling frequency space, MCs elicited stronger, less variable, and speed-tuned responses, whereas DGs yielded weaker and more frequency-tuned responses. Second, we measured responses to patterns made of two or three components covering a range of orientations within Fourier space. Early tracking initiation of the patterns was best predicted by a linear combination of components, before nonlinear interactions emerged to shape the later dynamics. Inputs are supralinearly integrated along an iso-velocity line and sublinearly integrated away from it. A dynamical probabilistic model characterizes these interactions as excitatory pooling along the iso-velocity line and inhibition along the orthogonal “scale” axis. Such crossed patterns of interaction would appropriately integrate or segment moving objects. This study supports the novel idea that speed estimation is better framed as a dynamic channel interaction organized along speed and scale axes.
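The crossed excitation/inhibition pattern can be sketched as a pairwise interaction rule over component speeds: components lying on the same iso-velocity line (ft = -v * fx for a shared v) excite each other, while components at different speeds, i.e. separated along the scale axis, inhibit. This toy gain rule is a caricature of the dynamical probabilistic model, with all constants invented for illustration.

```python
import numpy as np

def combined_gain(components, sigma_v=0.2, w_exc=0.5, w_inh=0.5):
    """Gain applied on top of linear summation of grating components.

    components : list of (fx, ft) spatiotemporal frequencies
    Pairs sharing a speed add excitation (supralinear pooling along the
    iso-velocity line); pairs at different speeds add inhibition
    (sublinear integration along the orthogonal scale axis).
    """
    speeds = [-ft / fx for fx, ft in components]
    gain = 1.0
    for i in range(len(speeds)):
        for j in range(i + 1, len(speeds)):
            dv = speeds[i] - speeds[j]
            same_speed = np.exp(-dv ** 2 / (2 * sigma_v ** 2))
            gain += w_exc * same_speed - w_inh * (1.0 - same_speed)
    return gain

# two gratings at the same speed pool supralinearly (gain > 1) ...
g_same = combined_gain([(0.1, -0.1), (0.2, -0.2)])     # both speed 1
# ... while gratings at different speeds interact sublinearly (gain < 1)
g_diff = combined_gain([(0.1, -0.1), (0.1, -0.3)])     # speeds 1 and 3
```

Such a rule integrates components consistent with a single translating object and suppresses combinations that would be better segmented as separate motions.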
