518 research outputs found

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    Full text link
    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
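
    The attractor/repeller interaction described above can be illustrated with a minimal numerical sketch. These are not the published model's equations: the gains, the Gaussian falloff of obstacle influence, and the function name below are assumptions made for illustration only.

        import numpy as np

        def heading_update(heading, goal_angle, obstacle_angles, dt=0.05,
                           k_goal=2.0, k_obs=1.5, sigma=0.5):
            """One Euler step of illustrative attractor/repeller steering dynamics.

            The goal direction pulls heading toward it; each obstacle pushes
            heading away, with influence falling off with angular distance.
            Gains k_goal, k_obs and width sigma are made-up values.
            """
            d_heading = -k_goal * (heading - goal_angle)   # goal attracts heading
            for obs in obstacle_angles:
                offset = heading - obs
                # Obstacle repels, strongest when it lies nearly dead ahead.
                d_heading += k_obs * offset * np.exp(-(offset / sigma) ** 2)
            return heading + dt * d_heading

        # Goal straight ahead (0 rad), one obstacle slightly to the left.
        h = 0.1
        for _ in range(200):
            h = heading_update(h, goal_angle=0.0, obstacle_angles=[-0.2])
        print(f"steady-state heading: {h:+.3f} rad")  # settles right of the obstacle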

    A Neural Model of Motion Processing and Visual Navigation by Cortical Area MST

    Full text link
    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually-guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals, and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves, and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
    Defense Research Projects Agency (N00014-92-J-4015); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0409, N00014-95-1-0657, N00014-91-J-4100, N0014-94-I-0597); Air Force Office of Scientific Research (F49620-92-J-0334)
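
    As a rough illustration of reading heading out of the most active cells, the sketch below scores a grid of model cells against a radial flow field and picks the peak. Note that this stand-in uses explicit expansion matching, which is cruder than the emergent, template-free mechanism the abstract describes; the grid spacing, receptive-field width, and scoring rule are all assumptions.

        import numpy as np

        grid = np.linspace(-30, 30, 31)                  # visual field, degrees
        X, Y = np.meshgrid(grid, grid)

        def radial_flow(foe_x, foe_y):
            """Unit flow vectors radiating from a focus of expansion (FOE)."""
            dx, dy = X - foe_x, Y - foe_y
            mag = np.hypot(dx, dy) + 1e-9
            return dx / mag, dy / mag

        def mstd_response(fx, fy, cx, cy, sigma=15.0):
            """Gaussian-weighted agreement between the observed flow and an
            expansion pattern centered on the cell's receptive field (cx, cy)."""
            tx, ty = radial_flow(cx, cy)
            weight = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))
            return ((fx * tx + fy * ty) * weight).sum()

        fx, fy = radial_flow(4.0, 0.0)                   # true heading 4 deg right
        centers = [(cx, cy) for cx in grid for cy in grid]
        best = max(centers, key=lambda c: mstd_response(fx, fy, *c))
        print("most active cell's preferred center:", best)  # (4.0, 0.0)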

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    Full text link
    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
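
    The extraretinal correction step can be sketched with the standard pinhole flow decomposition: retinal flow is the sum of a translational component radiating from the heading and a rotational component caused by pursuit, so subtracting the rotation predicted from an eye-velocity signal recovers a field that vanishes at the true heading. The geometry and numbers below are illustrative, not the model's implementation.

        import numpy as np

        x = np.linspace(-0.5, 0.5, 11)                  # normalized image coordinates
        X, Y = np.meshgrid(x, x)

        def translational_flow(hx, hy, inv_depth=1.0):
            """Expansion flow radiating from the heading (hx, hy)."""
            return inv_depth * (X - hx), inv_depth * (Y - hy)

        def rotational_flow(wy):
            """Flow produced by eye rotation wy (rad/s) about the vertical axis."""
            return -wy * (1.0 + X ** 2), -wy * X * Y

        wy = 0.2                                        # pursuit speed, rad/s
        ut, vt = translational_flow(0.1, 0.0)           # true heading 0.1 right
        ur, vr = rotational_flow(wy)
        u_ret, v_ret = ut + ur, vt + vr                 # what the retina sees

        # Extraretinal correction: subtract the flow predicted from eye velocity.
        ur_hat, vr_hat = rotational_flow(wy)
        u_cor, v_cor = u_ret - ur_hat, v_ret - vr_hat

        # The corrected field vanishes at the true heading; find that point.
        speed = np.hypot(u_cor, v_cor)
        iy, ix = np.unravel_index(np.argmin(speed), speed.shape)
        print(f"recovered heading: ({X[iy, ix]:+.1f}, {Y[iy, ix]:+.1f})")  # (+0.1, +0.0)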

    The role of direction-selective visual interneurons T4 and T5 in Drosophila orientation behavior

    Get PDF
    In order to safely move through the environment, visually-guided animals use several types of visual cues for orientation. Optic flow provides faithful information about ego-motion and can thus be used to maintain a straight course. Additionally, local motion cues or landmarks indicate potentially interesting targets or signal danger, triggering approach or avoidance, respectively. The visual system must reliably and quickly evaluate these cues and integrate this information in order to orchestrate behavior. The underlying neuronal computations for this remain largely inaccessible in higher organisms, such as humans, but can be studied experimentally in simpler model species. The fly Drosophila, for example, heavily relies on such visual cues during its impressive flight maneuvers. Additionally, it is genetically and physiologically accessible. Hence, it can be regarded as an ideal model organism for exploring neuronal computations during visual processing.

    In my PhD studies, I have designed and built several autonomous virtual reality setups to precisely measure the visual behavior of walking flies. The setups run in open-loop and in closed-loop configuration. In an open-loop experiment, the visual stimulus is clearly defined and does not depend on the behavioral response. Hence, it allows mapping of how specific features of simple visual stimuli are translated into behavioral output, which can guide the creation of computational models of visual processing. In closed-loop experiments, the behavioral response is fed back onto the visual stimulus, which permits characterization of the behavior under more realistic conditions and, thus, allows for testing of the predictive power of the computational models. In addition, Drosophila’s genetic toolbox provides various strategies for targeting and silencing specific neuron types, which helps identify which cells are needed for a specific behavior.

    We have focused on visual interneuron types T4 and T5 and assessed their role in visual orientation behavior. These neurons build up a retinotopic array and cover the whole visual field of the fly. They constitute major output elements from the medulla and have long been speculated to be involved in motion processing. This cumulative thesis consists of three published studies.

    In the first study, we silenced both T4 and T5 neurons together and found that such flies were completely blind to any kind of motion. In particular, these flies could no longer perform an optomotor response, which means that they lost their normally innate following responses to motion of large-field moving patterns. This was an important finding, as it ruled out the contribution of another system for motion vision-based behaviors. However, these flies were still able to fixate a black bar. We could show that this behavior is mediated by a T4/T5-independent flicker detection circuitry which exists in parallel to the motion system.

    In the second study, T4 and T5 neurons were characterized via two-photon imaging, revealing that these cells are directionally selective and have temporal and orientation tuning properties very similar to those of direction-selective neurons in the lobula plate. T4 and T5 cells responded in a contrast polarity-specific manner: T4 neurons responded selectively to ON edge motion, while T5 neurons responded only to OFF edge motion. When we blocked T4 neurons, behavioral responses to moving ON edges were more impaired than those to moving OFF edges, and the opposite was true for the T5 block. Hence, these findings confirmed that the contrast polarity-specific visual motion pathways, which start at the level of L1 (ON) and L2 (OFF), are maintained within the medulla, and that motion information is computed twice independently within each of these pathways.

    Finally, in the third study, we used the virtual reality setups to probe the performance of an artificial microcircuit. The system was equipped with a camera and a spherical fisheye lens. Images were processed by an array of Reichardt detectors whose outputs were integrated in a similar way to what is found in the lobula plate of flies. We provided the system with several rotating natural environments and found that the fly-inspired artificial system could accurately predict the axes of rotation.
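
    A minimal sketch of the Reichardt (Hassenstein-Reichardt) correlator behind the third study's artificial system may help: each detector multiplies the delayed signal from one photoreceptor with the undelayed signal from its neighbor, in two mirror-symmetric arms, and takes the difference, which is direction selective. Replacing the low-pass delay filter with a plain sample shift and using a drifting sinusoid as the stimulus are simplifying assumptions, not the study's actual implementation.

        import numpy as np

        def reichardt(left, right, tau=5):
            """Direction-selective output for two neighboring luminance signals.

            A shift by `tau` samples stands in for the low-pass delay filter.
            A positive mean indicates motion from `left` toward `right`.
            """
            return np.roll(left, tau) * right - np.roll(right, tau) * left

        t = np.arange(1000)
        at_left = np.sin(0.1 * t)              # grating passes the left input...
        at_right = np.sin(0.1 * (t - 5))       # ...then the right, 5 samples later
        print("rightward:", reichardt(at_left, at_right).mean())   # > 0
        print("leftward: ", reichardt(at_right, at_left).mean())   # < 0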

    A multidisciplinary approach to the study of shape and motion processing and representation in rats

    Get PDF
    During my PhD I investigated how shape and motion information are processed by the rat visual system, so as to establish how advanced the representation of higher-order visual information is in this species and, ultimately, to understand to what extent rats can present a valuable alternative to monkeys, as experimental models, in vision studies. Specifically, in my thesis work, I have investigated: 1) the possible visual strategies underlying shape recognition; and 2) the ability of rat visual cortical areas to represent motion and shape information. My work comprised two different but complementary experimental approaches: psychophysical measurements of the rat's recognition ability and strategy, and in vivo extracellular recordings in anaesthetized animals passively exposed to various (static and moving) visual stimuli. The first approach entailed training the rats on an invariant object recognition task, i.e. to tolerate different ranges of transformations in the object's appearance, and the application of an image classification technique known as Bubbles to reveal the visual strategy the animals were able to adopt, under different conditions of stimulus discriminability, in order to perform the task. The second approach involved electrophysiological exploration of different visual areas in the rat's cortex, in order to investigate putative functional hierarchies (or streams of processing) in the computation of motion and shape information. Results show, on one hand, that rats are able, under conditions of high stimulus discriminability, to adopt a shape-based, view-invariant, multi-featural recognition strategy; on the other hand, the functional properties of neurons recorded from different visual areas suggest the presence of a putative shape-based, ventral-like stream of processing in the rat's visual cortex. The general purpose of my work has been to unveil the neural mechanisms that make object recognition happen, with the goal of eventually 1) relating my findings on rats to those on more visually-advanced species, such as human and non-human primates; and 2) collecting enough biological data to support the artificial simulation of visual recognition processes, which still presents an important scientific challenge.
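
    The Bubbles technique mentioned above can be illustrated with a toy simulation: each trial reveals the stimulus through random Gaussian apertures, and masks from correct trials, contrasted with masks from incorrect trials, accumulate into a map of the diagnostic image regions. The simulated observer, image size, and bubble parameters below are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        SIZE, N_BUBBLES, SIGMA = 64, 10, 4.0
        yy, xx = np.mgrid[0:SIZE, 0:SIZE]

        def bubble_mask():
            """Random transparency mask made of N_BUBBLES Gaussian apertures."""
            mask = np.zeros((SIZE, SIZE))
            for cx, cy in rng.integers(0, SIZE, size=(N_BUBBLES, 2)):
                mask += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * SIGMA ** 2))
            return np.clip(mask, 0, 1)

        diagnostic = np.hypot(xx - 16, yy - 16) < 6      # the only informative patch
        correct_sum = np.zeros((SIZE, SIZE))
        incorrect_sum = np.zeros((SIZE, SIZE))

        for _ in range(1000):
            mask = bubble_mask()
            visible = mask[diagnostic].mean()            # how well the patch shows
            if rng.random() < 0.5 + 0.5 * visible:       # more visible -> more correct
                correct_sum += mask
            else:
                incorrect_sum += mask

        classification_image = (correct_sum / correct_sum.sum()
                                - incorrect_sum / incorrect_sum.sum())
        peak = np.unravel_index(np.argmax(classification_image), (SIZE, SIZE))
        print("recovered diagnostic region near:", peak)  # close to (16, 16)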

    Towards building a more complex view of the lateral geniculate nucleus: Recent advances in understanding its role

    Get PDF
    The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of the LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as involvement of the LGN in higher-level cognitive processing. Although recent studies have provided valuable insights into early visual processing, including the role of the LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of the LGN deserves more careful consideration in developing models of high-level visual processing.
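
    For concreteness, the "linear filter" baseline this review pushes back on is commonly sketched as a center-surround difference-of-Gaussians applied to the image; a minimal version follows, with illustrative rather than fitted parameters.

        import numpy as np

        size = 21
        yy, xx = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]

        def dog_kernel(sigma_c=1.0, sigma_s=3.0):
            """Excitatory center minus a broader, balanced inhibitory surround."""
            center = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_c ** 2))
            surround = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))
            return center / center.sum() - surround / surround.sum()

        kernel = dog_kernel()
        uniform = np.ones((size, size))                  # no contrast
        spot = np.zeros((size, size)); spot[10, 10] = 1  # bright point at center
        print("uniform field:", round(float((kernel * uniform).sum()), 6))  # ~ 0
        print("central spot: ", round(float((kernel * spot).sum()), 6))     # > 0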