
    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually guided steering, obstacle avoidance, and route selection.

    Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
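    The attractor/repeller interaction described above can be illustrated with a minimal first-order sketch: heading is pulled toward the goal direction and pushed away from each obstacle direction, with repulsion decaying with angular distance. All parameter values and the exact functional form below are illustrative assumptions, not the published model.

```python
import math

def heading_rate(phi, goal_dir, obstacles, k_g=1.0, k_o=2.0, c=4.0):
    """First-order sketch of attractor/repeller heading dynamics.

    phi       : current heading (radians)
    goal_dir  : direction of the goal (radians); acts as an attractor
    obstacles : list of obstacle directions (radians); act as repellers
    The goal term pulls heading toward the goal; each obstacle term
    pushes heading away, with influence decaying with angular distance.
    """
    dphi = -k_g * (phi - goal_dir)                       # attraction to goal
    for obs_dir in obstacles:
        delta = phi - obs_dir
        dphi += k_o * delta * math.exp(-c * abs(delta))  # repulsion
    return dphi

# Euler-integrate a short trajectory: goal ahead-left, obstacle ahead-right.
# The heading settles near the goal, deflected slightly away from the obstacle.
phi, dt = 0.0, 0.05
for _ in range(200):
    phi += dt * heading_rate(phi, goal_dir=0.3, obstacles=[-0.2])
```

    Because obstacle repulsion vanishes for directions far from the obstacle, routes emerge from the competition between the two terms rather than from explicit path planning.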

    Temporal Dynamics of Decision-Making during Motion Perception in the Visual Cortex

    How does the brain make decisions? Speed and accuracy of perceptual decisions covary with certainty in the input, and correlate with the rate of evidence accumulation in parietal and frontal cortical "decision neurons." A biophysically realistic model of interactions within and between the Retina/LGN and cortical areas V1, MT, MST, and LIP, gated by the basal ganglia, simulates dynamic properties of decision-making in response to the ambiguous visual motion stimuli used by Newsome, Shadlen, and colleagues in their neurophysiological experiments. The model clarifies how brain circuits that solve the aperture problem interact with a recurrent competitive network with self-normalizing choice properties to carry out probabilistic decisions in real time. Some scientists claim that perception and decision-making can be described using Bayesian inference or related general statistical ideas, which estimate the optimal interpretation of the stimulus given priors and likelihoods. However, such concepts do not propose the neocortical mechanisms that enable perception and decision-making. The present model explains behavioral and neurophysiological decision-making data without an appeal to Bayesian concepts and, unlike other existing models of these data, generates perceptual representations and choice dynamics in response to the experimental visual stimuli. Quantitative model simulations include the time course of LIP neuronal dynamics, as well as behavioral accuracy and reaction-time properties, during both correct and error trials at different levels of input ambiguity in both fixed-duration and reaction-time tasks. Model MT/MST interactions compute the global direction of random-dot motion stimuli, while model LIP computes the stochastic perceptual decision that leads to a saccadic eye movement.

    National Science Foundation (SBE-0354378, IIS-02-05271); Office of Naval Research (N00014-01-1-0624); National Institutes of Health (R01-DC-02852)
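    The core idea of a recurrent competitive network racing to a decision bound can be sketched with a toy leaky competing accumulator; this is a minimal stand-in for the biophysically detailed circuit described above, with all parameter values chosen for illustration only.

```python
import random

def race_to_bound(coherence, bound=1.0, leak=0.1, inhib=0.3,
                  noise=0.3, dt=0.01, max_steps=10000, rng=None):
    """Minimal leaky competing accumulator for a 2-choice motion decision.

    Two units accumulate noisy evidence for the two motion directions;
    mutual inhibition makes the network competitive, and the first unit
    to reach `bound` determines the choice and the reaction time. This
    is a toy illustration of competitive choice dynamics, not the
    biophysical circuit model described in the abstract.
    """
    rng = rng or random.Random()
    x = [0.0, 0.0]
    drift = [0.5 * (1 + coherence), 0.5 * (1 - coherence)]
    for step in range(max_steps):
        for i in (0, 1):
            dx = (drift[i] - leak * x[i] - inhib * x[1 - i]) * dt
            x[i] = max(0.0, x[i] + dx + noise * rng.gauss(0, 1) * dt ** 0.5)
        if max(x) >= bound:
            return x.index(max(x)), (step + 1) * dt  # (choice, RT)
    return None, max_steps * dt

# Higher coherence yields faster, more accurate decisions.
rng = random.Random(1)
choices = [race_to_bound(0.5, rng=rng)[0] for _ in range(200)]
accuracy = choices.count(0) / len(choices)
```

    The same mechanism naturally produces slower, less accurate decisions as `coherence` shrinks, qualitatively matching the behavioral trends the abstract refers to.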

    Combining Head-Mounted and Projector-Based Displays for Surgical Training

    We introduce and present preliminary results for a hybrid display system combining head-mounted and projector-based displays. Our work is motivated by a surgical training application, where it is necessary to simultaneously provide both a high-fidelity view of a central close-up task (the surgery) and visual awareness of objects and events in the surrounding environment. In particular, for trauma surgeons it would be valuable to learn to work in an environment that is realistically filled with both necessary and distracting objects and events.

    Processing of natural temporal stimuli by macaque retinal ganglion cells

    This study quantifies the performance of primate retinal ganglion cells in response to natural stimuli. Stimuli were confined to the temporal and chromatic domains and were derived from two contrasting environments, one typically northern European and the other a flower show. The performance of the cells was evaluated by investigating variability of cell responses to repeated stimulus presentations and by comparing measured to model responses. Both analyses yielded a quantity called the coherence rate (in bits per second), which is related to the information rate. Magnocellular (MC) cells yielded coherence rates of up to 100 bits/sec, rates of parvocellular (PC) cells were much lower, and short wavelength (S)-cone-driven ganglion cells yielded intermediate rates. The modeling approach showed that for MC cells, coherence rates were generated almost exclusively by the luminance content of the stimulus. Coherence rates of PC cells were also dominated by achromatic content. This is a consequence of the stimulus structure; luminance varied much more in the natural environment than chromaticity. Only approximately one-sixth of the coherence rate of the PC cells derived from chromatic content, and it was dominated by frequencies below 10 Hz. S-cone-driven ganglion cells also yielded coherence rates dominated by low frequencies. Below 2–3 Hz, PC cell signals contained more power than those of MC cells. Response variation between individual ganglion cells of a particular class was analyzed by constructing generic cells, the properties of which may be relevant for performance higher in the visual system. The approach used here helps define retinal modules useful for studies of higher visual processing of natural stimuli.
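    A coherence-based rate like the one described can be computed from a coherence spectrum with the standard lower-bound relation R = -Σ log2(1 - γ²(f)) Δf. The abstract does not give the exact formula, so assuming this conventional definition, a small sketch (with hypothetical coherence values):

```python
import math

def coherence_rate(coherence_sq, df):
    """Lower-bound information rate (bits/s) from a coherence spectrum.

    coherence_sq : squared coherence values gamma^2(f) in [0, 1) at
                   evenly spaced frequencies
    df           : frequency bin width in Hz
    Uses R = -sum_f log2(1 - gamma^2(f)) * df; assumed here to be the
    definition behind the abstract's "coherence rate".
    """
    return -sum(math.log2(1.0 - g) for g in coherence_sq) * df

# A cell whose response coheres strongly with the stimulus at low
# frequencies and weakly at high ones (illustrative values, 2 Hz bins):
gamma_sq = [0.9, 0.8, 0.6, 0.3, 0.1, 0.05]
rate = coherence_rate(gamma_sq, df=2.0)
```

    Because the summand diverges as γ² approaches 1, near-deterministic responses in even a narrow band can dominate the rate, which is why the low-frequency coherence of PC and S-cone-driven cells matters so much.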

    Characteristics of flight simulator visual systems

    The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.

    Fast and interactive ray-based rendering

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

    Despite their age, ray-based rendering methods are still a very active field of research with many challenges when it comes to interactive visualization. In this thesis, we present our work on Guided High-Quality Rendering, Foveated Ray Tracing for Head-Mounted Displays, and Hash-based Hierarchical Caching and Layered Filtering. Our system for Guided High-Quality Rendering allows for guiding the sampling rate of ray-based rendering methods by a user-specified Region of Interest (RoI). We propose two interaction methods for setting such an RoI when using a large display system and a desktop display, respectively. This makes it possible to compute images with a heterogeneous sample distribution across the image plane. Using such a non-uniform sample distribution, the rendering performance inside the RoI can be significantly improved in order to judge specific image features. However, a modified scheduling method is required to achieve sufficient performance. To solve this issue, we developed a scheduling method based on sparse matrix compression, which has shown significant improvements in our benchmarks. By filtering the sparsely sampled image appropriately, large brightness variations in areas outside the RoI are avoided and the overall image brightness is similar to the ground truth early in the rendering process. When using ray-based methods in a VR environment on head-mounted display devices, it is crucial to provide sufficient frame rates in order to reduce motion sickness. This is a challenging task when moving through highly complex environments and the full image has to be rendered for each frame. With our foveated rendering system, we provide a perception-based method for adjusting the sample density to the user's gaze, measured with an eye tracker integrated into the HMD.
In order to avoid disturbances through visual artifacts from low sampling rates, we introduce a reprojection-based rendering pipeline that allows for fast rendering and temporal accumulation of the sparsely placed samples. In our user study, we analyse the impact our system has on visual quality. We then take a closer look at the recorded eye tracking data in order to determine tracking accuracy and connections between different fixation modes and perceived quality, leading to surprising insights. For previewing global illumination of a scene interactively by allowing for free scene exploration, we present a hash-based caching system. Building upon the concept of linkless octrees, which allow for constant-time queries of spatial data, our framework is suited for rendering such previews of static scenes. Non-diffuse surfaces are supported by our hybrid reconstruction approach that allows for the visualization of view-dependent effects. In addition to our caching and reconstruction technique, we introduce a novel layered filtering framework, acting as a hybrid method between path space and image space filtering, that allows for the high-quality denoising of non-diffuse materials. Also, being designed as a framework instead of a concrete filtering method, it is possible to adapt most available denoising methods to our layered approach instead of relying only on the filtering of primary hitpoints.
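    The gaze-dependent sample density at the heart of foveated ray tracing can be sketched as a per-pixel sampling probability that is full at the tracked gaze point and falls off toward the periphery. The falloff shape and all parameter values below are illustrative assumptions, not taken from the thesis.

```python
import math

def sample_probability(px, py, gaze, sigma=120.0, floor=0.05):
    """Probability of tracing a primary ray for pixel (px, py), given the
    tracked gaze point. Full density at the fovea, decaying with a
    Gaussian of width `sigma` pixels toward the periphery, with a small
    floor so peripheral changes are still picked up over time.
    """
    d2 = (px - gaze[0]) ** 2 + (py - gaze[1]) ** 2
    return max(floor, math.exp(-d2 / (2.0 * sigma * sigma)))

# At the gaze point every ray is traced; far in the periphery only the
# floor fraction of rays is, and reprojection/temporal accumulation
# fills in the rest across frames.
p_fovea = sample_probability(400, 300, gaze=(400, 300))
p_periphery = sample_probability(0, 0, gaze=(400, 300))
```

    Pixels skipped in a frame are the ones the reprojection pipeline must cover, which is why sparse sampling and temporal accumulation are presented together above.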

    Guiding Attention in Controlled Real-World Environments

    The ability to direct a viewer's attention has important applications in computer graphics, data visualization, image analysis, and training. Existing computer-based gaze manipulation techniques, which direct a viewer's attention about a display, have been shown to be effective for spatial learning, search-task completion, and medical training applications. This work extends the concept of gaze manipulation beyond digital imagery to include controlled, real-world environments. This work addresses the main challenges in guiding attention to real-world objects: determining what object the viewer is currently paying attention to, and providing (projecting) a visual cue on a different part of the scene in order to draw the viewer's attention there. The developed system consists of a pair of eye-tracking glasses to determine the viewer's gaze location, and a projector to create the visual cue in the physical environment. The results of a user study show that the system is effective for directing a viewer's gaze in the real world. The successful implementation has applicability in a wide range of instructional environments, including pilot training and driving simulators.
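    The two challenges named above form a simple closed loop: read gaze, decide which object is attended, and project a cue if it is not the target. A sketch of one iteration, where every name and interface is hypothetical:

```python
def guidance_step(gaze_point, target_name, regions, project_cue):
    """One iteration of an attention-guidance loop: map the tracked gaze
    point to the scene object it falls on, and if that object is not the
    target, project a visual cue onto the target's region. `regions`
    maps object names to axis-aligned boxes (x0, y0, x1, y1) in scene
    coordinates; all names and parameters here are illustrative.
    """
    def contains(box, pt):
        x0, y0, x1, y1 = box
        return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1

    attended = next((name for name, box in regions.items()
                     if contains(box, gaze_point)), None)
    if attended != target_name:
        project_cue(regions[target_name])   # draw the cue on the target
        return attended, True
    return attended, False                  # viewer is already there

# Example: the viewer fixates the 'dial', so the system cues the 'lever'.
regions = {"dial": (0, 0, 10, 10), "lever": (20, 0, 30, 10)}
cues = []
state = guidance_step((5, 5), "lever", regions, cues.append)
```

    Running such a step continuously lets the projector-based cue appear only while the viewer's gaze is elsewhere, which is the behavior the abstract describes.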

    Recasting covert visual attention effects from the perspective of fixational oculomotor dynamics: Theory and experiments

    Traditionally, a great many studies of visual attention have used reaction time measures (either with manual button presses or saccadic eye movements) to make inferences about the locus and time course of attentional allocation. One classic example of such studies is the Posner cueing paradigm (Posner 1980), in which subjects maintain fixation and a cue is presented on one side or the other of space; a post-cue target appearing at different times and locations is used to elicit a reaction time and map the spatial and temporal development of cue-induced changes in internal brain state. However, tasks with prolonged fixation inevitably involve fixational eye movements, like microsaccades. Since microsaccades are simply small saccades, and are therefore associated with peri-movement changes in internal brain state, an imperative question is: how much of the performance change in tasks like Posner cueing may actually be attributable to peri-movement changes in vision associated with microsaccades? And, if this turns out to be a real, plausible possibility, can we predict, on a trial-by-trial basis, when and where microsaccades occur, and therefore when and where performance changes in Posner cueing might be expected to take place? To investigate these questions, we conducted Study I, which combines modeling simulations with behavioral psychophysics. Based on a minimalist model of oculomotor (microsaccade) generation without any other factors (e.g. knowledge about where attention is "supposed" to be allocated), we successfully simulated attentional effects and replicated all detailed observations in the classic Posner cueing paradigm. This means that, from a theoretical perspective, classic concepts in cognitive neuroscience like "attentional capture (AC)" and "inhibition of return (IOR)" become the outcomes of peri-microsaccadic enhancement or suppression of neural visual sensitivity.
We next turned to the question of why microsaccades might be modulated in Posner cueing at all; can we predict when and where microsaccades should be seen? In Study II, we experimentally controlled instantaneous foveal motor error during the presentation of peripheral cues. Post-cue microsaccadic oscillations were severely disrupted, suggesting that microsaccades in Posner cueing occur for oculomotor control over foveal motor error and not necessarily because they form a "dirty" read-out of covert attention, as commonly assumed. We then went one step further in Study III, in which we delved deeper into the mechanisms of fixational eye position dynamics, and how they dictate when microsaccades occur (and therefore when performance changes in Posner cueing might be expected). We discovered a new phenomenon of "express microsaccades" that were highly precise in time and direction. We used this discovery to refine our understanding of why microsaccades might be triggered during Posner cueing, showing that there is an oculomotor "set point" that is very systematically modulated at different times after cue onset, and that the instantaneous relationship between eye position and this set point is sufficient to explain when and where microsaccades would be observed. Overall, our work takes a classic phenomenon in cognitive neuroscience, covert attention as studied with Posner cueing, and significantly recasts it from a completely different perspective related to the highly detailed workings of the oculomotor system during the simple act of gaze fixation. Our work has significant implications for potential neural correlates of covert visual attention and fixational eye position dynamics in the brain.
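    The set-point account sketched above can be caricatured in a few lines: the eye drifts slowly, and a microsaccade is triggered whenever instantaneous foveal motor error (eye position minus the current set point) exceeds a threshold. The threshold, drift values, and the idea that the movement lands exactly on the set point are all simplifying assumptions for illustration.

```python
def simulate_microsaccades(eye_drift, set_point, threshold=0.1):
    """Toy version of the set-point account: a microsaccade fires when
    the instantaneous error between eye position and the time-varying
    oculomotor set point exceeds `threshold`, and it lands on the set
    point. Positions in degrees; all values are illustrative.
    """
    eye = 0.0
    events = []  # (time index, direction) of triggered microsaccades
    for t, (drift, target) in enumerate(zip(eye_drift, set_point)):
        eye += drift                      # slow fixational drift
        error = target - eye
        if abs(error) > threshold:
            events.append((t, 1 if error > 0 else -1))
            eye = target                  # microsaccade corrects the error
    return events

# A cue-induced shift of the set point at t = 50 reliably triggers a
# corrective microsaccade toward the new set point at that time.
drift = [0.005] * 100
sp = [0.0] * 50 + [0.3] * 50
events = simulate_microsaccades(drift, sp)
```

    Under this caricature, the timing and direction of microsaccades follow entirely from eye position relative to the set point, with no appeal to an attentional read-out, which is the thrust of the argument above.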