Surface reflectance recognition and real-world illumination statistics
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2003. Includes bibliographical references (p. 141-150). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.
Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting it could mimic the appearance of a matte ping-pong ball. Yet humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
by Ron O. Dror. Ph.D.
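The kind of image statistic such a classifier can rely on is illustrated by a toy example: the kurtosis of band-pass (wavelet-like) filter outputs, which tends to be higher for sharp, mirror-like reflections of a sparse, high-contrast world than for smoothly shaded matte surfaces. The sketch below uses a simple finite-difference high-pass filter in place of a full wavelet decomposition, and synthetic patches rather than the thesis's data; it is illustrative, not the thesis's algorithm.

```python
import numpy as np

def subband_kurtosis(image):
    """Kurtosis of horizontal finite-difference (high-pass) coefficients,
    a simple stand-in for a wavelet-subband distribution statistic."""
    coeffs = np.diff(np.asarray(image, dtype=float), axis=1).ravel()
    c = coeffs - coeffs.mean()
    var = np.mean(c**2)
    if var == 0.0:
        return 0.0
    return float(np.mean(c**4) / var**2)

rng = np.random.default_rng(0)
# "Mirror-like" patch: reflects a sparse world of bright points, so its
# gradient distribution is heavy-tailed (high kurtosis).
mirror = np.where(rng.random((64, 64)) < 0.02, 1.0, 0.0)
# "Matte" patch: smooth shading, whose gradients are near-Gaussian
# (kurtosis near 3).
matte = np.cumsum(rng.normal(size=(64, 64)), axis=1) / 8.0
```

On these patches the statistic separates the two materials: `subband_kurtosis(mirror)` comes out far larger than `subband_kurtosis(matte)`.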
Computational Model for Human 3D Shape Perception From a Single Specular Image
In natural conditions the human visual system can estimate the 3D shape of specular objects even from a single image. Although previous studies suggested that the orientation field plays a key role in 3D shape perception from specular reflections, its computational plausibility and possible mechanisms have not been investigated. In this study, to complement the orientation field information, we first add the prior knowledge that objects are illuminated from above and utilize the vertical polarity of the intensity gradient. We then construct an algorithm that incorporates these two image cues to estimate 3D shape from a single specular image. We evaluated the algorithm on glossy and mirrored surfaces and found that 3D shapes can be recovered with a high correlation coefficient of around 0.8 with the true surface shapes. Moreover, under a specific condition, the algorithm's errors resembled those made by human observers. These findings show that the combination of the orientation field and the vertical polarity of the intensity gradient is computationally sufficient and probably reproduces essential representations used in human shape perception from specular reflections.
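The orientation field referred to above can be computed from image gradients. A minimal sketch using the structure tensor's double-angle formula follows; it illustrates only the orientation-field cue, not the vertical-polarity cue or the authors' full pipeline.

```python
import numpy as np

def orientation_field(image):
    """Per-pixel dominant orientation (radians, mod pi) of the intensity
    gradient, via the double-angle formula: orientations that differ by
    180 degrees (opposite contrast polarity) map to the same value."""
    gy, gx = np.gradient(np.asarray(image, dtype=float))
    return 0.5 * np.arctan2(2.0 * gx * gy, gx**2 - gy**2)

# Sanity check on stripe patterns with known gradient directions.
x = np.sin(np.linspace(0.0, 6.0 * np.pi, 64))
vertical_stripes = np.tile(x, (64, 1))             # varies left-to-right
horizontal_stripes = np.tile(x[:, None], (1, 64))  # varies top-to-bottom
theta_v = orientation_field(vertical_stripes)      # gradients horizontal: 0
theta_h = orientation_field(horizontal_stripes)    # gradients vertical: +/- pi/2
```

On a specular surface, this field varies with surface curvature, which is what makes it informative about 3D shape.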
On-site surface reflectometry
The rapid development of Augmented Reality (AR) and Virtual Reality (VR)
applications over the past years has created the need to quickly and accurately scan
the real world to populate immersive, realistic virtual environments for the end
user to enjoy. While geometry processing has already gone a long way towards that
goal, with self-contained solutions commercially available for on-site acquisition of
large scale 3D models, capturing the appearance of the materials that compose
those models remains an open problem in general uncontrolled environments.
The appearance of a material is indeed a complex function of its geometry and
intrinsic physical properties, and it further depends on the illumination conditions
in which it is observed, traditionally limiting the scope of reflectometry
to highly controlled lighting conditions in a laboratory setup. With the rapid development
of digital photography, especially on mobile devices, a new trend has emerged in
the appearance modelling community, investigating novel acquisition
methods and algorithms to relax the hard constraints imposed by laboratory-like
setups, for easy use by digital artists. While arguably not as accurate, we
demonstrate the ability of such self-contained methods to enable quick and easy
on-site reflectometry that produces compelling, photo-realistic imagery.
In particular, this dissertation investigates novel methods for on-site acquisition
of surface reflectance based on off-the-shelf, commodity hardware. We successfully
demonstrate how a mobile device can be utilised to capture high quality
reflectance maps of spatially-varying planar surfaces in general indoor lighting
conditions. We further present a novel methodology for the acquisition of highly
detailed reflectance maps of permanent on-site, outdoor surfaces by exploiting
polarisation from reflection under natural illumination.
We demonstrate the versatility of the presented approaches by scanning various
surfaces from the real world and show good qualitative and quantitative agreement
with existing methods for appearance acquisition employing controlled or
semi-controlled illumination setups.
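The polarisation cue that outdoor methods of this kind exploit can be sketched with the standard Stokes-parameter estimate of the degree of linear polarisation from images taken through a linear polariser at four orientations. This is a generic textbook computation, not the dissertation's pipeline; the angles and values below are illustrative.

```python
import numpy as np

def degree_of_polarization(i0, i45, i90, i135):
    """Degree of linear polarisation from four measurements through a
    linear polariser at 0, 45, 90, and 135 degrees, via the first three
    Stokes parameters."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # 0-vs-90 degree component
    s2 = i45 - i135                     # 45-vs-135 degree component
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

# Synthetic check: build the four measurements from known Stokes
# parameters (S0, S1, S2) = (2.0, 0.6, 0.8), i.e. true DoLP = 0.5.
S0, S1, S2 = 2.0, 0.6, 0.8
meas = {a: 0.5 * (S0 + S1 * np.cos(2 * np.radians(a))
                     + S2 * np.sin(2 * np.radians(a)))
        for a in (0, 45, 90, 135)}
rho = degree_of_polarization(meas[0], meas[45], meas[90], meas[135])
```

Because specular reflection under natural light is partially polarised while diffuse reflection is largely unpolarised, per-pixel maps of this quantity help separate the two components.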
Geometry and Photometry in 3D Visual Recognition
The report addresses the problem of visual recognition under two sources of variability: geometric and photometric. The geometric source concerns the relation between 3D objects and their views under orthographic and perspective projection. The photometric source concerns the relation between 3D matte objects and their images under changing illumination conditions. Combining the two, the report presents an alignment-based method for recognizing objects viewed from arbitrary viewing positions and illuminated by arbitrary settings of light sources.
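The photometric side rests on a classical fact: for a shadow-free Lambertian (matte) surface under distant lighting, image intensity is linear in the light-source direction, so all such images lie in the span of three basis images. The sketch below is a minimal numerical check of that linearity on synthetic normals, not the report's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random unit normals and albedos for a synthetic matte surface patch.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=100)

def render(light):
    """Shadow-free Lambertian shading: I = albedo * (n . l)."""
    return albedo * (normals @ light)

# Three basis images rendered under axis-aligned light directions.
basis = np.stack([render(l) for l in np.eye(3)], axis=1)
novel_light = np.array([0.3, 0.5, 0.8])
novel_image = render(novel_light)
# The novel image is an exact linear combination of the basis images,
# with coefficients equal to the novel light direction itself.
coeffs, *_ = np.linalg.lstsq(basis, novel_image, rcond=None)
```

This low-dimensional structure is what allows illumination change to be handled by alignment rather than by exhaustive modelling of light sources.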