The stroboscopic human vision
When the frequency of seeing light from a pair of point flashes is beyond the probability summation of the separate flashes, the surplus is due to the successful interaction of subliminal responses from the different flashes. Experiments with various distances and various periods of the pair show that successful interaction occurs when in each of two successive time-quanta of 0.04 seconds and in each of two adjacent distinct receptor groups at least one subliminal receptor response occurs. An autonomous source produces the time-quanta. It serves the time-processing of the central nervous system and of the motor system. Possibly, action potentials from the Purkinje cells of the myocardium play a role. Hyperacuity in direction and in depth, flicker fusion, perceptual rivalry and other phenomena follow from the quantized spatiotemporal signal processing.
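The probability-summation baseline the abstract refers to can be made concrete. If each flash alone would be seen with independent probabilities p1 and p2, independent detection predicts P = 1 - (1 - p1)(1 - p2); any measured frequency of seeing above that baseline is the interaction surplus. A minimal sketch, with illustrative numbers that are not taken from the paper:

```python
def probability_summation(p1: float, p2: float) -> float:
    """Predicted frequency of seeing if the two flashes are detected
    independently: P = 1 - (1 - p1) * (1 - p2)."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# Hypothetical example values (for illustration only).
baseline = probability_summation(0.3, 0.3)   # independent-detection prediction
measured = 0.60                              # hypothetical observed frequency
surplus = measured - baseline                # attributed to subliminal interaction
print(f"baseline={baseline:.2f}, surplus={surplus:.2f}")
```

Any positive surplus is what the abstract attributes to successful interaction of subliminal responses across the two time-quanta and receptor groups.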
An inverse oblique effect in human vision
In the classic oblique effect, contrast detection thresholds, orientation discrimination thresholds, and other psychophysical measures are found to be smallest for vertical or horizontal stimuli and significantly higher for stimuli near the ±45° obliques. Here we report a novel inverse oblique effect in which thresholds for detecting translational structure in random dot patterns [Glass, L. (1969). Moiré effect from random dots. Nature, 223, 578–580] are lowest for obliquely oriented structure and higher for either horizontal or vertical structure. Area summation experiments provide evidence that this results from larger pooling areas for oblique orientations in these patterns. The results can be explained quantitatively by a model for complex cells in which the final filtering stage in a filter–rectify–filter sequence is of significantly larger area for oblique orientations.
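The filter–rectify–filter (FRF) sequence mentioned above can be sketched in a few lines: an oriented linear filter, full-wave rectification, then a second-stage pooling filter whose area is made larger for oblique orientations. This is a generic illustration of the FRF idea, not the paper's fitted model; all kernel sizes and the `pooling_sigma` orientation dependence are assumptions:

```python
import numpy as np

def fft_convolve(img, kernel):
    """Circular 2-D convolution via FFT (adequate for this sketch)."""
    kh, kw = kernel.shape
    pad = np.zeros_like(img)
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def gabor(size, sigma, wavelength, theta):
    """Oriented Gabor kernel: the first-stage linear filter."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def gaussian(size, sigma):
    """Normalized Gaussian kernel: the second-stage pooling filter."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def pooling_sigma(theta, base=4.0, oblique_gain=1.5):
    """Assumed orientation dependence: pooling area grows smoothly toward
    the 45-degree obliques, where |sin(2*theta)| peaks at 1."""
    return base * (1.0 + (oblique_gain - 1.0) * abs(np.sin(2 * theta)))

def frf_response(image, theta):
    """Filter -> rectify -> filter, with oblique-dependent pooling area."""
    first = fft_convolve(image, gabor(15, 2.0, 6.0, theta))
    rectified = np.abs(first)  # full-wave rectification
    return fft_convolve(rectified, gaussian(25, pooling_sigma(theta)))

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))          # stand-in for a random dot pattern
resp = frf_response(img, np.pi / 4)          # oblique orientation
```

The larger pooling area at oblique orientations is what, in the authors' account, lowers thresholds for oblique Glass-pattern structure.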
Exploring Human Vision Driven Features for Pedestrian Detection
Motivated by the center-surround mechanism in the human visual attention system, we propose to use average contrast maps for the challenge of pedestrian detection in street scenes, based on the observation that pedestrians indeed exhibit discriminative contrast texture. Our main contributions are, first, the design of a local, statistical multi-channel descriptor that incorporates both color and gradient information. Second, we introduce a multi-direction and multi-scale contrast scheme based on grid cells in order to integrate expressive local variations. To address the selection of the most discriminative features for assessment and classification, we perform extensive comparisons w.r.t. statistical descriptors, contrast measurements, and scale structures. In this way, we obtain reasonable results under various configurations. Empirical findings from applying our optimized detector on the INRIA and Caltech pedestrian datasets show that our features yield state-of-the-art performance in pedestrian detection.
Comment: Accepted for publication in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT).
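The gist of a grid-cell contrast descriptor can be illustrated as follows: partition a detection window into cells, compute the average absolute deviation from the cell mean (a simple contrast statistic) per cell and per channel, and concatenate. This is a deliberately simplified stand-in for the paper's multi-direction, multi-scale scheme; the grid size, contrast statistic, and channel choice are assumptions:

```python
import numpy as np

def cell_contrast(channel, grid=(4, 4)):
    """Average absolute contrast (deviation from the cell mean) per grid
    cell. A simplified contrast statistic, not the paper's exact measure."""
    h, w = channel.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = channel[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
            feats.append(np.abs(cell - cell.mean()).mean())
    return np.array(feats)

def descriptor(window):
    """Concatenate per-cell contrast over color and gradient-magnitude
    channels, echoing the multi-channel idea in the abstract."""
    gy, gx = np.gradient(window.mean(axis=2))      # gradients of luminance
    channels = [window[..., c] for c in range(3)]  # color channels
    channels.append(np.hypot(gx, gy))              # gradient magnitude
    return np.concatenate([cell_contrast(ch) for ch in channels])

rng = np.random.default_rng(0)
window = rng.random((64, 32, 3))   # a hypothetical pedestrian window
feat = descriptor(window)          # 4 channels x 16 cells = 64 features
```

In a full detector, such features would feed a classifier (e.g. boosting or an SVM) scanned over the image; the abstract's comparisons concern which statistics, contrast measures, and scales work best at that stage.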
Computational models of human vision with applications
The research program supported by this grant was initiated in 1977 by the Joint Institute for Aeronautics and Acoustics of the Department of Aeronautics and Astronautics at Stanford University. The purpose of the research was to study human performance with the goal of improving the design of flight instrumentation. By mutual agreement between the scientists at NASA-Ames and Stanford, all research activities in this area were consolidated into a single funding mechanism, NCC 2-307 (Center of Excellence Grant, 7/1/84 - present). This is the final report on this research grant.
View-based approaches to spatial representation in human vision
In an immersive virtual environment, observers fail to notice the expansion of a room around them and consequently make gross errors when comparing the size of objects. This result is difficult to explain if the visual system continuously generates a 3-D model of the scene based on known baseline information from interocular separation or proprioception as the observer walks. An alternative is that observers use view-based methods to guide their actions and to represent the spatial layout of the scene. In this case, they may have an expectation of the images they will receive but be insensitive to the rate at which images arrive as they walk. We describe the way in which the eye movement strategy of animals simplifies motion processing if their goal is to move towards a desired image, and discuss dorsal and ventral stream processing of moving images in that context. Although many questions about view-based approaches to scene representation remain unanswered, the solutions are likely to be highly relevant to understanding biological 3-D vision.