187 research outputs found
Toward High-Precision Measures of Large-Scale Structure
I review some results of estimation of the power spectrum of density
fluctuations from galaxy redshift surveys and discuss advances that may be
possible with the Sloan Digital Sky Survey. I then examine the realities of
power spectrum estimation in the presence of Galactic extinction, photometric
errors, galaxy evolution, clustering evolution, and uncertainty about the
background cosmology.
Comment: 24 pages, including 11 postscript figures. Uses crckapb.sty (included in submission). To appear in "Ringberg Workshop on Large-Scale Structure," ed. D. Hamilton (Kluwer, Amsterdam), p. 39.
Optimal measurement of visual motion across spatial and temporal scales
Sensory systems use limited resources to mediate the perception of a great
variety of objects and events. Here a normative framework is presented for
exploring how the problem of efficient allocation of resources can be solved in
visual perception. Starting with a basic property of every measurement,
captured by Gabor's uncertainty relation about the location and frequency
content of signals, prescriptions are developed for optimal allocation of
sensors for reliable perception of visual motion. This study reveals that a
large-scale characteristic of human vision (the spatiotemporal contrast
sensitivity function) is similar to the optimal prescription, and it suggests
that some previously puzzling phenomena of visual sensitivity, adaptation, and
perceptual organization have simple principled explanations.
Comment: 28 pages, 10 figures, 2 appendices; in press in Favorskaya MN and Jain LC (Eds), Computer Vision in Advanced Control Systems using Conventional and Intelligent Paradigms, Intelligent Systems Reference Library, Springer-Verlag, Berlin.
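Gabor's uncertainty relation, which anchors the framework above, can be checked numerically: for a Gaussian window, the product of the time spread and frequency spread of a signal's energy attains the lower bound 1/(4π). A minimal sketch (the window width and sampling step are arbitrary choices, not values from the paper):

```python
# Numerical check of Gabor's uncertainty relation: for a Gaussian window,
# sigma_t * sigma_f = 1/(4*pi), the minimum allowed for any signal.
import numpy as np

sigma = 0.05                         # window width in seconds (arbitrary)
dt = 1e-4
t = np.arange(-0.5, 0.5, dt)
g = np.exp(-t**2 / (2 * sigma**2))   # Gaussian envelope

def energy_std(x, axis_vals):
    """Standard deviation of the energy distribution |x|^2 along axis_vals."""
    p = np.abs(x)**2
    p /= p.sum()
    mean = (axis_vals * p).sum()
    return np.sqrt(((axis_vals - mean)**2 * p).sum())

G = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(t.size, dt))

sigma_t = energy_std(g, t)           # spread of energy in time
sigma_f = energy_std(G, f)           # spread of energy in frequency (Hz)
print(sigma_t * sigma_f, 1 / (4 * np.pi))  # both ~0.0796
```

Any non-Gaussian window gives a strictly larger product, which is why Gabor functions are the natural starting point for modelling sensors with jointly limited spatial and temporal resolution.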
Past Achievements and Future Challenges in 3D Photonic Metamaterials
Photonic metamaterials are man-made structures composed of tailored micro- or
nanostructured metallo-dielectric sub-wavelength building blocks that are
densely packed into an effective material. This deceptively simple, yet
powerful, truly revolutionary concept allows for achieving novel, unusual, and
sometimes even unheard-of optical properties, such as magnetism at optical
frequencies, negative refractive indices, large positive refractive indices,
zero reflection via impedance matching, perfect absorption, giant circular
dichroism, or enhanced nonlinear optical properties. Possible applications of
metamaterials comprise ultrahigh-resolution imaging systems, compact
polarization optics, and cloaking devices. This review describes the
experimental progress recently made fabricating three-dimensional metamaterial
structures and discusses some remaining future challenges.
Combining Path Integration and Remembered Landmarks When Navigating without Vision
This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision rely only on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.
Funding: National Institutes of Health (U.S.) (Grants T32 HD007151, T32 EY07133, F32EY019622, EY02857, EY017835-01, EY015616-03); United States. Department of Education (H133A011903).
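The gated averaging described in this abstract can be sketched as reliability-weighted cue combination behind a congruency gate. The noise levels and gate threshold below are illustrative assumptions, not the study's fitted values:

```python
# Toy model: fuse a remembered-landmark cue and a path-integration cue by
# inverse-variance weighting, but only when the two cues are judged congruent.
# All parameter values here are illustrative assumptions.
import numpy as np

def combine(landmark, path_int, sd_landmark, sd_path, gate_sd=3.0):
    """Return a fused location estimate and its standard deviation."""
    # Congruency gate: if the cues disagree by too much, discard the
    # remembered landmark and fall back on path integration alone.
    if abs(landmark - path_int) > gate_sd * max(sd_landmark, sd_path):
        return path_int, sd_path
    # Inverse-variance ("optimal") weighting of the two cues.
    w = sd_path**2 / (sd_landmark**2 + sd_path**2)   # weight on landmark cue
    fused = w * landmark + (1 - w) * path_int
    fused_sd = np.sqrt((sd_landmark**2 * sd_path**2)
                       / (sd_landmark**2 + sd_path**2))
    return fused, fused_sd
```

For example, `combine(5.0, 5.4, 0.3, 0.6)` weights the more reliable landmark cue 4:1 and returns a fused estimate whose standard deviation is smaller than either cue's alone, mirroring the precision gain the study reports for congruent trials.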
Modelling human visual navigation using multi-view scene reconstruction
It is often assumed that humans generate a 3D reconstruction of the environment, either in egocentric or world-based coordinates, but the steps involved are unknown. Here, we propose two reconstruction-based models, evaluated using data from two tasks in immersive virtual reality. We model the observer’s prediction of landmark location based on standard photogrammetric methods and then combine location predictions to compute likelihood maps of navigation behaviour. In one model, each scene point is treated independently in the reconstruction; in the other, the pertinent variable is the spatial relationship between pairs of points. Participants viewed a simple environment from one location, were transported (virtually) to another part of the scene and were asked to navigate back. Error distributions varied substantially with changes in scene layout; we compared these directly with the likelihood maps to quantify the success of the models. We also measured error distributions when participants manipulated the location of a landmark to match the preceding interval, providing a direct test of the landmark-location stage of the navigation models. Models such as this, which start with scenes and end with a probabilistic prediction of behaviour, are likely to be increasingly useful for understanding 3D vision.
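The landmark-location stage of such reconstruction-based models rests on standard photogrammetric triangulation: a scene point is recovered as the point closest, in a least-squares sense, to the viewing rays from two vantage points. A minimal sketch, assuming known camera positions and ray directions (this is not the authors' full pipeline):

```python
# Least-squares triangulation of a 3D point from two viewing rays.
# Each ray is given by a camera position p and a unit direction d;
# we minimise the summed squared perpendicular distance to both rays.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to the two rays p_i + t * d_i (d_i unit vectors)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p1, d1), (p2, d2)):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

For exactly intersecting rays this returns their intersection; for skew rays, the least-squares midpoint. Nearly parallel rays make the system ill-conditioned, mirroring the geometric difficulty of localising distant landmarks from small vantage-point changes.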
A nonlinear updating algorithm captures suboptimal inference in the presence of signal-dependent noise
Bayesian models have advanced the idea that humans combine prior beliefs and sensory observations to optimize behavior. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property takes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects’ behavior was consistent with an algorithm that predicts upcoming sensory signals, and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies employed by humans may deviate from Bayes-optimal integration when the computational demands are high.
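Bayes-optimal integration under signal-dependent (scalar) noise has no simple closed form, which is what puts it beyond the reach of simple algorithms; a grid-based posterior makes the ideal-observer benchmark concrete. The Weber fraction and prior below are assumed values, not those of the study:

```python
# Grid-based Bayesian estimate of a time interval from one or two noisy
# measurements whose noise grows with duration (scalar variability,
# sigma = w * t). Illustrative sketch; w and the prior are assumptions.
import numpy as np

w = 0.15                                   # assumed Weber fraction
ts = np.linspace(0.4, 1.6, 1201)           # candidate intervals (s)
prior = np.where((ts >= 0.6) & (ts <= 1.4), 1.0, 0.0)  # uniform prior

def posterior_mean(measurements):
    """Posterior mean over the grid, accumulating one likelihood per measurement."""
    logp = np.log(prior + 1e-300)
    for m in measurements:
        sd = w * ts                        # signal-dependent noise
        logp += -0.5 * ((m - ts) / sd)**2 - np.log(sd)
    p = np.exp(logp - logp.max())          # stabilise before normalising
    p /= p.sum()
    return (ts * p).sum()

print(posterior_mean([0.9]), posterior_mean([0.9, 1.0]))
```

Because the likelihood width changes with the hypothesised interval, adding a second measurement reshapes the whole posterior rather than simply averaging two point estimates, which is the step the abstract argues simple update rules cannot reproduce.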
First- and second-order contributions to depth perception in anti-correlated random dot stereograms.
The binocular energy model of neural responses predicts that depth from binocular disparity might be perceived in the reversed direction when the contrast of dots presented to one eye is reversed. While reversed depth has been found using anti-correlated random-dot stereograms (ACRDS), the findings are inconsistent across studies. The mixed findings may be accounted for by the presence of a gap between the target and surround, or as a result of overlap of dots around the vertical edges of the stimuli. To test this, we assessed whether (1) the gap size (0, 19.2 or 38.4 arc min), (2) the correlation of dots, or (3) the border orientation (circular target, or horizontal or vertical edge) affected the perception of depth. Reversed depth from ACRDS (circular no-gap condition) was seen by a minority of participants, but this effect reduced as the gap size increased. Depth was mostly perceived in the correct direction for ACRDS edge stimuli, with the effect increasing with the gap size. The inconsistency across conditions can be accounted for by the relative reliability of first- and second-order depth detection mechanisms, and the coarse spatial resolution of the latter.
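The reversed-depth prediction of the energy model can be illustrated with a toy 1D simulation: inverting the contrast of one eye's image flips the sign of the binocular cross term, so the disparity tuning of an energy unit inverts. All filter and stimulus parameters below are illustrative assumptions, not those of the study:

```python
# Toy 1D binocular energy model: averaged over random-dot patterns, the
# disparity tuning curve for anti-correlated stimuli (one eye's contrast
# inverted) is the mirror image of the correlated tuning curve.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(-64, 64)
sigma, f0 = 8.0, 1 / 16          # Gabor envelope width and carrier frequency

def gabor(phase):
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * x + phase)

def energy(left, right, pref_disp):
    """Energy-unit response; preferred disparity realised as a filter shift."""
    resp = 0.0
    for ph in (0.0, np.pi / 2):  # quadrature pair
        fL = gabor(ph)
        fR = np.roll(gabor(ph), pref_disp)
        resp += (fL @ left + fR @ right) ** 2
    return resp

disps = list(range(-8, 9))
tuning_corr, tuning_anti = [], []
for d in disps:
    tc = ta = 0.0
    for _ in range(200):                              # average over patterns
        dots = rng.choice([-1.0, 1.0], size=x.size)   # random-dot line
        left = dots
        right = np.roll(dots, 2)                      # stimulus disparity of 2
        tc += energy(left, right, d)
        ta += energy(left, -right, d)                 # anti-correlated
    tuning_corr.append(tc / 200)
    tuning_anti.append(ta / 200)
```

Plotting (or correlating) the two curves shows the inversion: the correlated curve peaks at the stimulus disparity while the anti-correlated curve dips there, the signature underlying the "reversed depth" percept.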
Gaze-grasp coordination in obstacle avoidance: differences between binocular and monocular viewing
Most adults can skillfully avoid potential obstacles when acting in everyday cluttered scenes. We examined how gaze and hand movements are normally coordinated for obstacle avoidance and whether these are altered when binocular depth information is unavailable. Visual fixations and hand movement kinematics were simultaneously recorded while 13 right-handed subjects reached to precision-grasp a cylindrical household object presented alone or with a potential obstacle (wine glass) located to its left (thumb's grasp side), right or just behind it (both closer to the finger's grasp side), using binocular or monocular vision. Gaze and hand movement strategies differed significantly by view and obstacle location. With binocular vision, initial fixations were near the target's centre of mass (COM) around the time of hand movement onset, but usually shifted to end just above the thumb's grasp site at initial object contact, this mainly being made by the thumb, consistent with selecting this digit for guiding the grasp. This strategy was associated with faster hand movements and improved end-point grip precision across all trials compared with monocular viewing, during which subjects usually continued to fixate the target closer to its COM despite a similar prevalence of thumb-first contacts. While subjects looked directly at the obstacle at each location on a minority of trials, and their overall fixations on the target were somewhat biased towards the grasp side nearest to it, these gaze behaviours were particularly marked on monocular-vision, obstacle-behind trials, which also commonly ended in finger-first contact. Subjects avoided colliding with the wine glass under both views when it was on the right (finger side) of the workspace by producing slower and straighter reaches, with this and the behind-obstacle location also resulting in 'safer' (i.e. narrower) peak grip apertures and longer deceleration times than when the goal object was alone or the obstacle was on its thumb side. But monocular reach paths were more variable, and deceleration times were selectively prolonged on finger-side and behind-obstacle trials, with this latter condition further resulting in selectively increased grip closure times and corrections. Binocular vision thus provided added advantages for collision avoidance, known to require intact dorsal cortical stream processing mechanisms, particularly when the target of the grasp and the potential obstacle to it were fairly closely separated in depth. Different accounts of the altered monocular gaze behaviour converged on the conclusion that additional perceptual and/or attentional resources are likely engaged compared to when continuous binocular depth information is available. Implications for people lacking binocular stereopsis are briefly considered.
The integration of occlusion and disparity information for judging depth in autism spectrum disorder
In autism spectrum disorder (ASD), atypical integration of visual depth cues may be due to flattened perceptual priors or selective fusion. The current study attempts to disentangle these explanations by psychophysically assessing within-modality integration of ordinal (occlusion) and metric (disparity) depth cues while accounting for sensitivity to stereoscopic information. Participants included 22 individuals with ASD and 23 typically developing (TD) matched controls. Although adults with ASD were found to have significantly poorer stereoacuity, they were still able to automatically integrate conflicting depth cues, lending support to the idea that priors are intact in ASD. However, dissimilarities in response speed variability between the ASD and TD groups suggest that there may be differences in the perceptual decision-making aspect of the task.
Perceived Surface Slant Is Systematically Biased in the Actively-Generated Optic Flow
Humans make systematic errors in the 3D interpretation of the optic flow in both passive and active vision. These systematic distortions can be predicted by a biologically-inspired model which disregards self-motion information resulting from head movements (Caudek, Fantoni, & Domini, 2011). Here, we tested two predictions of this model: (1) a plane that is stationary in an earth-fixed reference frame will be perceived as changing its slant if the movement of the observer's head causes a variation of the optic flow; (2) a surface that rotates in an earth-fixed reference frame will be perceived to be stationary, if the surface rotation is appropriately yoked to the head movement so as to generate a variation of the surface slant but not of the optic flow. Both predictions were corroborated by two experiments in which observers judged the perceived slant of a random-dot planar surface during egomotion. We found qualitatively similar biases for monocular and binocular viewing of the simulated surfaces, although, in principle, the simultaneous presence of disparity and motion cues allows for a veridical recovery of surface slant.
