A First Order Analysis of Lighting, Shading, and Shadows
The shading in a scene depends on a combination of many factors: how the lighting varies spatially across a surface, how it varies along different directions, the geometric curvature and reflectance properties of objects, and the locations of soft shadows. In this paper, we conduct a complete first-order, or gradient, analysis of lighting, shading and shadows, showing how each factor separately contributes to scene appearance, and when it is important. Gradients are well suited to analyzing the intricate combination of appearance effects, since each gradient term corresponds directly to variation in a specific factor. First, we show how the spatial and directional gradients of the light field change as light interacts with curved objects. This extends the recent frequency analysis of Durand et al. to gradients, and has many advantages for operations, like bump mapping, that are difficult to analyze in the Fourier domain. Second, we consider the individual terms responsible for shading gradients, such as lighting variation, convolution with the surface BRDF, and the object's curvature. This analysis indicates the relative importance of the various terms and shows precisely how they combine in shading. As one practical application, our theoretical framework can be used to adaptively sample images in high-gradient regions for efficient rendering. Third, we analyze the effects of soft shadows, computing accurate visibility gradients. We generalize previous work to arbitrary curved occluders, and develop a local framework that is easy to integrate with conventional ray-tracing methods. Our visibility gradients can be used directly in practical gradient interpolation methods for efficient rendering.
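The adaptive-sampling application mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code; the function `sample_budget` and its interface are assumptions. The idea is simply to allocate each pixel's share of a fixed sample budget in proportion to the local shading-gradient magnitude:

```python
import numpy as np

def sample_budget(shading, total_samples):
    """Distribute a sample budget over pixels by local gradient magnitude
    (hypothetical sketch of gradient-driven adaptive sampling)."""
    gy, gx = np.gradient(shading.astype(float))
    mag = np.hypot(gx, gy) + 1e-8          # tiny floor avoids a zero total
    weights = mag / mag.sum()
    return np.round(weights * total_samples).astype(int)

# A step edge concentrates nearly the whole budget along the discontinuity.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
budget = sample_budget(img, 1000)
```

Here a sharp shading edge receives almost all samples, while the flat regions receive essentially none, which is the behaviour a gradient-based sampler is after.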
Is countershading camouflage robust to lighting change due to weather?
Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering ‘optimal’ camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a ‘generic’ predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target ‘prey’. We set these items in two light environments: strongly directional ‘sunny’ and more diffuse ‘cloudy’. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and the scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in the light environment, such as those caused by differences in weather, do change the effectiveness of countershading camouflage.
Variational Uncalibrated Photometric Stereo under General Lighting
Photometric stereo (PS) techniques nowadays remain constrained to an ideal laboratory setup where modeling and calibration of the lighting is amenable. To eliminate such restrictions, we propose an efficient, principled variational approach to uncalibrated PS under general illumination. To this end, the Lambertian reflectance model is approximated through a spherical harmonic expansion, which preserves the spatial invariance of the lighting. The joint recovery of shape, reflectance and illumination is then formulated as a single variational problem, where the shape estimation is carried out directly in terms of the underlying perspective depth map, thus implicitly ensuring integrability and bypassing the need for a subsequent normal integration. To tackle the resulting nonconvex problem numerically, we undertake a two-phase procedure that initializes a balloon-like perspective depth map and then applies a "lagged" block coordinate descent scheme. Experiments validate the efficiency and robustness of this approach: across a variety of evaluations, we are able to reduce the mean angular error consistently by a factor of 2-3 compared to the state of the art.
Comment: Haefner and Ye contributed equally
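The first-order spherical-harmonic (SH) approximation of Lambertian shading that this abstract builds on can be sketched compactly. This is an illustrative toy under assumed names (the function and the lighting vector are not the authors' notation): each pixel's intensity is modeled as albedo times the inner product of an order-1 SH basis of the surface normal, h(n) = [1, nx, ny, nz], with a 4-vector of lighting coefficients.

```python
import numpy as np

def sh_shading(normals, albedo, l):
    """First-order SH Lambertian shading sketch.
    normals: (N, 3) unit normals; albedo: (N,); l: (4,) lighting coefficients."""
    h = np.hstack([np.ones((normals.shape[0], 1)), normals])  # order-1 SH basis
    return albedo * (h @ l)

n = np.array([[0.0, 0.0, 1.0],    # normal facing the camera
              [0.0, 1.0, 0.0]])   # normal facing "up"
l = np.array([0.2, 0.0, 0.0, 1.0])  # ambient term plus a frontal directional term
vals = sh_shading(n, np.ones(2), l)  # → [1.2, 0.2]
```

Because the 4 lighting coefficients are the same at every pixel, this captures the "spatial invariance of the lighting" the abstract mentions, while still representing general low-frequency illumination.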
Photo-Realistic Scenes with Cast Shadows Show No Above/Below Search Asymmetries for Illumination Direction
Visual search is extended from the domain of polygonal figures presented on a uniform field to photo-realistic scenes containing target objects in dense, naturalistic backgrounds. The target in a trial is a computer-rendered rock protruding in depth from a "wall" of rocks of roughly similar size but different shapes. Subjects responded "present" when one rock appeared closer than the rest, owing to occlusions or cast shadows, and "absent" when all rocks appeared to be at the same depth. Results showed that cast shadows can significantly decrease reaction times compared to scenes with no cast shadows, in which the target was revealed only by occlusions of rocks behind it. A control experiment showed that cast shadows can be utilized even for displays involving rocks of several achromatic surface colors (dark through light), in which the shadow cast by the target rock was not the darkest region in the scene. Finally, in contrast with reports of experiments by others involving polygonal figures, we found no evidence for an effect of illumination direction (above vs. below) on search times.
Office of Naval Research (N00014-94-1-0597, N00014-95-1-0409)
Consensus Message Passing for Layered Graphical Models
Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason, bottom-up conditional models have traditionally dominated in such domains. We find that widely used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.
Comment: Appearing in Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS) 201
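The flavour of the approach, letting a predictor trained on samples from the generative model supply an initial estimate that guides inference, can be caricatured in a few lines. Everything below (the toy one-layer model, `generate`, and the least-squares predictor) is a loose hypothetical analogy, not the paper's algorithm or its message derivations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: a scalar latent a scales a fixed pixel pattern.
pattern = np.array([1.0, 2.0, 3.0])

def generate(a):
    """Sample an observation from the toy generative model."""
    return a * pattern + 0.1 * rng.normal(size=3)

# "Bottom-up" predictor: fit least squares on samples drawn from the model
# itself, so the predictor is consistent with the generative assumptions.
A = rng.uniform(0, 1, size=200)
X = np.stack([generate(a) for a in A])
w, *_ = np.linalg.lstsq(X, A, rcond=None)

# The prediction y @ w plays the role of a consensus estimate that would
# initialize (and steer) iterative message passing in the full model.
y = generate(0.5)
a0 = y @ w
```

In the paper's setting the predictors emit messages inside EP/VMP on layered vision models rather than a single point estimate, but the division of labour is the same: a learned bottom-up component keeps model-based inference away from poor local solutions.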