How Does the Cerebral Cortex Work? Development, Learning, Attention, and 3D Vision by Laminar Circuits of Visual Cortex
A key goal of behavioral and cognitive neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress toward explaining how the visual cortex sees. Visual cortex, like many parts of perceptual and cognitive neocortex, is organized into six main layers of cells, as well as characteristic sub-laminae. It is proposed here how these layered circuits help to realize the processes of development, learning, perceptual grouping, attention, and 3D vision through a combination of bottom-up, horizontal, and top-down interactions. A key theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. These results thus begin to unify three fields: infant cortical development, adult cortical neurophysiology and anatomy, and adult visual perception. The identified cortical mechanisms promise to generalize to explain how other perceptual and cognitive processes work. Air Force Office of Scientific Research (F49620-01-1-0397); Office of Naval Research (N00014-01-1-0624)
Binocular integration using stereo motion cues to drive behavior in mice
The visual system presents an opportunity to study how two signals converge to generate a novel representation of the world: depth. The slight difference in position between the two eyes means that different images are encoded by the left and right eyes, generating disparity signals. Another way to generate depth signals is to present different motion signals to the two eyes. Even though the binocular visual system has been studied for a long time, the mechanisms behind binocular integration when objects move in depth are largely unknown. In this dissertation, I demonstrate a new model for studying motion-in-depth signals using mice. Mice are an attractive animal model for studying the binocular visual system, not only because they share common visual pathways with primates and other mammals, but also because genetic tools are available to probe the circuitry underlying binocular integration of motion-in-depth cues. Thus far there have been very few studies of binocularity in mice. This dissertation focuses on the behavioral output driven by stereoscopic motion-in-depth signals in mice and investigates the visual areas involved in these behaviors. In the first section, I investigate whether mice discriminate motion-in-depth signals like primates, using disparity and motion signals presented to each eye. I find that mice are able to discriminate towards and away stimuli and that binocular neurons in the visual cortex are critical for the computation of this signal. In the second section, we measured optokinetic eye movements generated by motion-in-depth stimuli. I found that vergence eye movement in mice is driven primarily by the motion signals presented to each eye. This phenomenon can be explained largely by subcortical summation of the monocular motor signals of the two eyes. These two experiments both show clear behavioral outputs that can be generated only when binocular motion-in-depth signals are presented.
I find both cortical and subcortical components of binocular integration that are responsible for generating these behavioral outputs, which demonstrates the complicated nature of binocular integration associated with motion-in-depth signals. My work in this dissertation provides the foundation for studying binocular integration in rodents.
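The two depth cues discussed in the abstract above — static disparity and motion in depth from a change of disparity — can be illustrated with textbook pinhole stereo geometry. This is a minimal sketch under idealized assumptions; the function names and all numeric values (a reduced-eye focal length of 17 mm, a 65 mm interocular baseline) are illustrative, not taken from the dissertation.

```python
# Minimal sketch of two binocular depth cues under an idealized
# pinhole geometry. All parameter values are hypothetical.

def depth_from_disparity(focal_mm, baseline_mm, disparity_mm):
    """Classic stereo triangulation: Z = f * B / d."""
    return focal_mm * baseline_mm / disparity_mm

def motion_in_depth(z_t0, z_t1, dt):
    """Motion in depth estimated from the change of disparity-defined depth."""
    return (z_t1 - z_t0) / dt

f, b = 17.0, 65.0                        # focal length and interocular baseline (mm)
z0 = depth_from_disparity(f, b, 1.105)   # depth at time t
z1 = depth_from_disparity(f, b, 1.0)     # at t + dt; smaller disparity = farther
print(z0, z1, motion_in_depth(z0, z1, 0.1))  # positive rate -> receding target
```

Note that disparity falls off as 1/Z, so the same change of disparity over time maps to very different speeds in depth depending on viewing distance — one reason behavioral readouts of motion in depth must control stimulus geometry carefully.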
Objective evaluation criteria for stereo camera shooting quality under different shooting parameters and shooting distances
The vigorous development of 3D technology has steadily improved stereo camera photography. However, there are no widely recognized objective evaluation criteria for stereo camera shooting quality under different shooting parameters and shooting distances, and no shooting guideline is available for reference when people take stereoscopic images. To solve this problem, we propose objective evaluation criteria for the shooting quality of two types of stereo cameras (parallel and toed-in camera configurations) under three shooting conditions (macro, short-distance, and long-distance shooting). In our work, several prominent evaluation factors are built by analyzing the characteristics of each shooting condition. Based on the five-point scale used in our subjective experiments, the relationships between shooting factors and shooting quality are obtained and then effectively integrated to build the overall evaluation criteria. Finally, extensive experiments have been conducted, and the results demonstrate that the proposed approach can effectively evaluate the shooting quality of stereo cameras.
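A hedged sketch of why the three shooting distances above behave so differently for a parallel rig: on-sensor disparity shrinks as 1/distance, so macro, short-distance, and long-distance shots occupy very different disparity regimes. The function name and the focal length, baseline, and distance values below are illustrative assumptions, not parameters from the paper.

```python
# Hedged sketch: on-sensor disparity vs. shooting distance for a
# parallel stereo rig (all names and values are illustrative).

def sensor_disparity_mm(focal_mm, baseline_mm, distance_mm):
    """Horizontal disparity on the sensor for a point at the given distance."""
    return focal_mm * baseline_mm / distance_mm

f, b = 35.0, 65.0  # hypothetical focal length and inter-camera baseline (mm)
for label, z in [("macro", 300.0), ("short", 2_000.0), ("long", 20_000.0)]:
    print(f"{label:>5}: {sensor_disparity_mm(f, b, z):.3f} mm")
# Disparity falls as 1/distance, which is one reason each shooting
# condition warrants its own evaluation factors.
```

A toed-in configuration additionally introduces vertical (keystone) disparity that this parallel-rig model ignores, which is consistent with the paper treating the two configurations separately.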
Engineering data compendium. Human perception and performance. User's guide
The concept underlying the Engineering Data Compendium was the product of a research and development program (the Integrated Perceptual Information for Designers project) aimed at facilitating the application of basic research findings in human performance to the design of military crew systems. The principal objective was to develop a workable strategy for: (1) identifying and distilling information of potential value to system design from the existing research literature, and (2) presenting this technical information in a way that would aid its accessibility, interpretability, and applicability by systems designers. The present four volumes of the Engineering Data Compendium represent the first implementation of this strategy. This is the first volume, the User's Guide, containing a description of the program and instructions for its use.
D-SAV360: A Dataset of Gaze Scanpaths on 360° Ambisonic Videos
Understanding human visual behavior within virtual reality environments is crucial to fully leverage their potential. While previous research has provided rich visual data from human observers, existing gaze datasets often suffer from the absence of multimodal stimuli. Moreover, no dataset has yet gathered eye gaze trajectories (i.e., scanpaths) for dynamic content with directional ambisonic sound, which is a critical aspect of sound perception by humans. To address this gap, we introduce D-SAV360, a dataset of 4,609 head and eye scanpaths for 360° videos with first-order ambisonics. This dataset enables a more comprehensive study of multimodal interaction on visual behavior in virtual reality environments. We analyze our collected scanpaths from a total of 87 participants viewing 85 different videos and show that various factors such as viewing mode, content type, and gender significantly impact eye movement statistics. We demonstrate the potential of D-SAV360 as a benchmarking resource for state-of-the-art attention prediction models and discuss its possible applications in further research. By providing a comprehensive dataset of eye movement data for dynamic, multimodal virtual environments, our work can facilitate future investigations of visual behavior and attention in virtual reality
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 338)
This bibliography lists 139 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during June 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance
First and Second Order Stereoscopic Processing of Fused and Diplopic Targets
Depth from stereopsis is due to the positional difference between the two eyes, which results in each eye receiving a different view of the world. Although progress has been made in understanding how the visual system processes stereoscopic stimuli, a number of questions remain. The goal of this work was to assess the relationships between the perceptual, temporal, and 1st-/2nd-order dichotomies of stereopsis and, in doing so, determine an appropriate method for measuring depth from large disparities. To this end, stereosensitivity and perceived depth were assessed using 1st- and 2nd-order stimuli over a range of test disparities and conditions. The main contributions of this research are as follows: 1) the sustained/transient dichotomy proposed by Edwards, Pope and Schor (2000) is best considered in terms of the spatial dichotomy proposed by Hess and Wilcox (1994); at large disparities it is not possible to categorize performance based on exposure duration alone; 2) there is not a simple correspondence between Ogle's (1952) patent/qualitative perceptual categories and the 1st-/2nd-order dichotomy proposed by Hess and Wilcox (1994); 3) quantitative depth is provided by both 1st- and 2nd-order mechanisms in the fused range, but only the 2nd-order signal is used when stimuli are diplopic; 4) the quantitative depth provided by a 2nd-order stimulus scales with envelope size; and 5) the monoptic depth phenomenon may be related to depth from diplopic stimuli, but for the conditions tested here, when both monoptic depth and 2nd-order stereopsis are available, the latter is used to encode depth percepts. The results reported here expand on earlier work on 1st- and 2nd-order stereopsis and address issues in the methodologies used to study depth from large disparities.
These results are consistent with the widely accepted filter-rectify-filter model of 2nd-order processing, and 1st- and 2nd-order stimuli are likely encoded by disparity-sensitive neurons via a two-stream model (see Wilson, Ferrera, and Yo (1992); Zhou and Baker (1993)).
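The filter-rectify-filter cascade mentioned above can be sketched in one dimension: a contrast-modulated (2nd-order) signal carries almost no energy at the envelope frequency, so a single linear filter misses it, but a rectifying nonlinearity between two filtering stages exposes the envelope. The FFT-based filters and all frequencies here are illustrative assumptions standing in for early cortical filters, not the models from the cited papers.

```python
import numpy as np

# Hedged 1D sketch of the filter-rectify-filter (FRF) cascade.
# Filter shapes and frequencies are illustrative assumptions.

fs = 2000                                           # samples over a 1 s window
x = np.linspace(0, 1, fs)
envelope = 0.5 * (1 + np.sin(2 * np.pi * 3 * x))    # coarse 2nd-order modulation
carrier = np.sin(2 * np.pi * 80 * x)                # fine 1st-order carrier
stimulus = envelope * carrier                       # contrast-modulated stimulus

def bandpass(signal, low_hz, high_hz, fs):
    """Crude FFT-domain bandpass; stands in for a linear cortical filter."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

stage1 = bandpass(stimulus, 60, 100, fs)   # filter: tuned to the carrier
rectified = np.abs(stage1)                 # rectify: the crucial nonlinearity
stage2 = bandpass(rectified, 1, 6, fs)     # filter: tuned to the envelope

# stage2 now tracks the envelope, whereas a single linear filter at
# envelope frequencies sees almost nothing in the raw stimulus.
```

In the two-stream picture, the 1st-order (luminance) stream would read disparity from signals like `stage1`, while the 2nd-order stream reads it from envelope responses like `stage2`, which is why envelope size can govern perceived depth for diplopic stimuli.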