24 research outputs found

    Model investigation on contribution of feedback in distortion induced motion adaptation

    Motion information is processed in a neural circuit formed by the synaptic organization of feedforward (FF) and feedback (FB) connections between different cortical areas. However, the contribution of recurrent FB information to the adaptation process is not well explored. Here, we suggest a biologically plausible neural model that predicts the motion adaptation aftereffect (MAE) induced by exposure to geometrically skewed natural image sequences. The model implements two-stage recurrent motion processing within cortical areas V1 and MT [1]. It comprises FF excitatory, FB modulatory, and lateral inhibitory connections, and introduces plasticity through a fast adaptive synapse in the FF stream and a slow adaptive synapse in the FB stream. Simulation results show the following main contributions of FB to distortion-induced motion adaptation. First, FB disambiguates the main signal from a noisy natural stimulus input, resulting in adaptation to globally consistent salient information. Second, a model with distinct adaptive mechanisms in the FF and FB streams predicts MAE at different time scales of exposure to skewed natural stimuli more accurately than model variants with a single adaptive mechanism, suggesting that multiple adaptive mechanisms might be implemented via FB pathways. Third, FB yields similar response tuning in model areas V1 and MT during adaptation, in line with physiological findings [2].
    [1] Bayerl, P. and H. Neumann, Disambiguating visual motion through contextual feedback modulation. Neural Computation, 2004. 16(10): p. 2041-2066.
    [2] Patterson, C.A., et al., Similar adaptation effects in primary visual cortex and area MT of the macaque monkey under matched stimulus conditions. Journal of Neurophysiology, 2013. 111(6): p. 1203-1213.
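The fast/slow two-time-scale adaptation described above can be illustrated with a minimal sketch. Everything here is a hedged placeholder: the gain dynamics, time constants, and multiplicative form are illustrative assumptions, not the model's actual FF/FB equations.

```python
import numpy as np

def adapt_response(stimulus, tau_fast=5.0, tau_slow=50.0, dt=1.0):
    """Two adaptation variables with different time constants each
    track the response and reduce the gain, standing in for the fast
    (FF) and slow (FB) adaptive synapses."""
    a_fast, a_slow = 0.0, 0.0
    responses = []
    for s in stimulus:
        r = s * (1.0 - a_fast) * (1.0 - a_slow)  # gain-reduced response
        a_fast += dt / tau_fast * (r - a_fast)   # fast synapse (FF stream)
        a_slow += dt / tau_slow * (r - a_slow)   # slow synapse (FB stream)
        responses.append(r)
    return np.array(responses)
```

With a sustained constant stimulus the response decays on two time scales, the property that lets a two-mechanism model account for aftereffects after both short and long exposures.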

    Reconciling Predictive Coding and Biased Competition Models of Cortical Function

    A simple variation of the standard biased competition model is shown, via some trivial mathematical manipulations, to be identical to predictive coding. Specifically, a particular implementation of the biased competition model, in which nodes compete via inhibition that targets the inputs to a cortical region, is shown to be mathematically equivalent to the linear predictive coding model. This observation demonstrates that these two important and influential rival theories of cortical function are minor variations on the same underlying mathematical model.
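The equivalence can be checked numerically. In this minimal sketch (arbitrary weights and learning rate, assumed for illustration), node activities integrate the residual that remains after their own activity inhibits the region's inputs; the very same update, read as subtracting a top-down prediction, is linear predictive coding:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(8)              # bottom-up input to the cortical region
W = 0.3 * rng.random((4, 8))   # weights of 4 competing prediction nodes
y = np.zeros(4)                # node activities

for _ in range(200):
    # biased competition reading: node activity inhibits the inputs;
    # predictive coding reading: the prediction W.T @ y is subtracted
    e = x - W.T @ y
    y += 0.1 * (W @ e)  # nodes integrate the residual they account for
```

At convergence the residual is the part of the input the nodes cannot represent, so its norm is smaller than that of the raw input.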

    Data modelling and data processing generated by human eye movements

    Data modeling and data processing are important activities in any scientific research. This research focuses on the modeling and processing of data generated by a saccadometer. The approach used is based on the relational data model, but the processing and storage of the data is done with client datasets. The experiments were performed with 26 randomly selected files from a total of 264 experimental sessions. The data from each experimental session was stored in three different formats: text, binary, and extensible markup language (XML). The results showed that the text and binary formats were the most compact. Several data-processing actions were analyzed. Based on the results obtained, the two fastest actions were loading data from a binary file and storing data into a binary file, respectively. In contrast, the two slowest actions were storing the data in XML format and loading the data from a text file, respectively. One of the most time-consuming operations turned out to be the conversion of data from text format to binary format; moreover, the time required for this conversion does not grow in proportion to the number of records processed.
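A toy version of the three serializations conveys the size comparison. The record layout here (a timestamp plus one eye-position value) is a simplifying assumption, not the saccadometer's actual schema:

```python
import struct
import xml.etree.ElementTree as ET

# hypothetical saccade samples: (timestamp_ms, eye_position_deg)
records = [(i, 0.1 * i) for i in range(10000)]

def to_text(recs):
    # one tab-separated line per sample
    return "\n".join(f"{t}\t{p}" for t, p in recs).encode()

def to_binary(recs):
    # fixed 12 bytes per record: int32 timestamp + float64 position
    return b"".join(struct.pack("<id", t, p) for t, p in recs)

def to_xml(recs):
    root = ET.Element("session")
    for t, p in recs:
        ET.SubElement(root, "sample", t=str(t), p=str(p))
    return ET.tostring(root)

sizes = {f.__name__: len(f(records)) for f in (to_text, to_binary, to_xml)}
```

As in the study, both compact encodings come out far smaller than XML, whose per-record markup dominates the payload.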

    Towards a bio-inspired evaluation methodology for motion estimation models

    Offering a proper evaluation methodology is essential to continued progress in modelling the neural mechanisms of visual information processing. Currently, the evaluation of motion estimation models lacks a proper methodology for comparing their performance against the visual system. Here, we set the basis for such a new benchmark methodology, based on human visual performance as measured in psychophysics, ocular following and neurobiology. This benchmark will enable comparisons between different kinds of models, but it will also challenge current motion estimation models and better characterize their properties with respect to visual cortex performance. To do so, we propose a database of image sequences taken from the neuroscience and psychophysics literature. In this article, we focus on two aspects of motion estimation: the dynamics of motion integration, and the respective influence of 1D versus 2D cues. Then, since motion models may deal with different kinds of motion representations and scales, we define two general readouts based on a global motion estimation. These readouts, namely eye movements and perceived motion, will serve as a reference to compare simulated and experimental data. We evaluate the performance of several models on this data to establish the current state of the art. The models chosen for comparison have very different properties and internal mechanisms, such as feedforward normalisation of V1 and MT processing and recurrent feedback. As a whole, we provide here the basis for a valuable evaluation methodology to unravel the fundamental mechanisms of the visual cortex in motion perception. Our database is freely available on the web, together with scoring instructions and results, at http://www-sop.inria.fr/neuromathcomp/software/motionpsychobench
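As a concrete example of the kind of global readout such a benchmark relies on, a dense flow field can be collapsed into a single perceived-motion vector by vector averaging. This is a deliberately minimal stand-in; an actual eye-movement readout would add oculomotor dynamics:

```python
import numpy as np

def global_motion_readout(flow):
    """Collapse a dense flow field of shape (H, W, 2) into one global
    motion vector by averaging all local motion estimates."""
    return flow.reshape(-1, 2).mean(axis=0)

# e.g. a field translating rightward at 1 px/frame
flow = np.zeros((32, 32, 2))
flow[..., 0] = 1.0
```

Because the readout is model-agnostic, it lets feedforward and recurrent models be scored on the same footing regardless of their internal motion representation.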

    Visual motion processing and human tracking behavior

    The accurate visual tracking of a moving object is a fundamental human skill: it reduces the relative slip and instability of the object's image on the retina, thus granting stable, high-quality vision. In order to optimize tracking performance across time, a quick estimate of the object's global motion properties needs to be fed to the oculomotor system and dynamically updated. Concurrently, performance can be greatly improved in terms of latency and accuracy by taking into account predictive cues, especially under variable conditions of visibility and in the presence of ambiguous retinal information. Here, we review several recent studies focusing on the integration of retinal and extra-retinal information for the control of human smooth pursuit. By dynamically probing tracking performance with well-established paradigms from the visual perception and oculomotor literature, we provide the basis for testing theoretical hypotheses within the framework of dynamic probabilistic inference. In particular, we present applications of these results in light of state-of-the-art computer vision algorithms.
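The dynamic probabilistic inference framing can be sketched with a one-dimensional constant-velocity Kalman filter: the retinal measurement is fused with an internal prediction, so the velocity estimate can bridge brief losses of visibility. The state model and noise parameters are illustrative assumptions, not a fitted pursuit model:

```python
import numpy as np

def pursuit_kalman(measurements, q=0.01, r=1.0):
    """Fuse noisy retinal position measurements with an internal
    constant-velocity prediction. Entries of `measurements` may be
    None to mimic brief target occlusion: the filter then coasts on
    its prediction alone."""
    x = np.zeros(2)                          # state: [position, velocity]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])               # only position is measured
    Q = q * np.eye(2)                        # process noise (assumed)
    estimates = []
    for z in measurements:
        x = F @ x                            # predict forward one step
        P = F @ P @ F.T + Q
        if z is not None:                    # retinal input available
            S = H @ P @ H.T + r              # innovation variance
            K = P @ H.T / S                  # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return estimates
```

Fed a target ramping at constant speed, the velocity estimate converges toward the true speed, mimicking how extra-retinal prediction stabilizes pursuit gain.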

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
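The attractor/repeller steering law can be sketched in a few lines. The functional form follows the general behavioral-dynamics idea of goals attracting and obstacles repelling heading; the constants and decay shape are illustrative, not ViSTARS parameters:

```python
import math

def steering_rate(heading, goal_angle, obstacle_angles,
                  k_goal=1.0, k_obs=1.5, decay=0.5):
    """Rate of heading change: the goal attracts heading while each
    obstacle repels it, with repulsion fading as the obstacle lies
    further from the current heading (all angles in radians)."""
    turn = -k_goal * (heading - goal_angle)  # goal acts as an attractor
    for obs in obstacle_angles:
        delta = heading - obs
        turn += k_obs * delta * math.exp(-decay * abs(delta))  # repeller
    return turn
```

With no obstacles the heading relaxes toward the goal direction; an obstacle just off the current path pushes the turn rate away from it.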

    A Neural Model of How the Brain Computes Heading from Optic Flow in Realistic Scenes

    Animals avoid obstacles and approach goals in novel cluttered environments using visual information, notably optic flow, to compute heading, or direction of travel, with respect to objects in the environment. We present a neural model of how heading is computed that describes interactions among neurons in several visual areas of the primate magnocellular pathway, from retina through V1, MT+, and MSTd. The model produces outputs which are qualitatively and quantitatively similar to human heading estimation data in response to complex natural scenes. The model estimates heading to within 1.5° in random dot or photo-realistically rendered scenes and within 3° in video streams from driving in real-world environments. Simulated rotations of less than 1 degree per second do not affect model performance, but faster simulated rotation rates degrade performance, as in humans. The model is part of a larger navigational system that identifies and tracks objects while navigating in cluttered environments.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA201-01-1-2016)
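A textbook baseline for heading from purely translational flow is a least-squares estimate of the focus of expansion (FOE): every flow vector points radially away from the FOE, so each sample constrains it to lie on a line. This is a standard geometric baseline for comparison, not the neural model's mechanism:

```python
import numpy as np

def estimate_foe(points, flow):
    """Least-squares focus of expansion: each flow vector (u, v) at
    (x, y) must be parallel to the line from the FOE to (x, y), giving
    one linear constraint v*xe - u*ye = v*x - u*y per sample."""
    u, v = flow[:, 0], flow[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic expansion flow about a FOE at (2, -1)
pts = np.random.default_rng(1).uniform(-10.0, 10.0, (100, 2))
flw = pts - np.array([2.0, -1.0])
```

On noise-free translational flow this recovers the FOE exactly; the interesting regime for neural models is precisely where rotation and noise break this simple geometry.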

    Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    BACKGROUND: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when no other shape cues are provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. METHODOLOGY/PRINCIPAL FINDINGS: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task, because the flow detected along such boundaries is generally not reliable. We propose a model, derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex, that achieves robust detection along motion boundaries. It includes two separate mechanisms, one for detecting motion discontinuities and one for detecting occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different components of visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of kinetic boundary detection. CONCLUSIONS/SIGNIFICANCE: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that combining these results for motion discontinuities and object occlusion improves object segmentation within the model, an idea that could also be applied in other models for object segmentation. In addition, we discuss how this model relates to neurophysiological findings. The model was successfully tested with both artificial and real sequences including self- and object motion.
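The spatial-contrast cue for kinetic boundaries can be illustrated directly on a flow field: motion discontinuities are where the spatial derivatives of the flow are large. This finite-difference sketch is a bare-bones stand-in for the model's neural discontinuity detectors:

```python
import numpy as np

def motion_discontinuities(flow, thresh=0.25):
    """Mark kinetic boundaries as points of high spatial contrast in a
    flow field of shape (H, W, 2), using the gradient magnitude over
    both flow components."""
    du_y, du_x = np.gradient(flow[..., 0])
    dv_y, dv_x = np.gradient(flow[..., 1])
    contrast = np.sqrt(du_y**2 + du_x**2 + dv_y**2 + dv_x**2)
    return contrast > thresh

# a square object moving rightward over a static background
flow = np.zeros((20, 20, 2))
flow[5:15, 5:15, 0] = 1.0
```

The map fires only along the object's outline, where flow changes abruptly; inside the object and in the static background the spatial contrast is zero, which is why the full model must add occlusion cues where the flow itself is unreliable.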