DeLight-Net: Decomposing Reflectance Maps into Specular Materials and Natural Illumination
In this paper we extract surface reflectance and natural environmental
illumination from a reflectance map, i.e. from a single 2D image of a sphere of
one material under one illumination. This is a notoriously difficult problem,
yet key to various re-rendering applications. With the recent advances in
estimating reflectance maps from 2D images, their further decomposition has
become increasingly relevant.
To this end, we propose a Convolutional Neural Network (CNN) architecture that
reconstructs both material parameters (i.e. Phong) and illumination (i.e.
high-resolution spherical illumination maps), and that is trained solely on
synthetic data. We demonstrate the decomposition of both synthetic and real
photographs of reflectance maps, in High Dynamic Range (HDR) and, for the
first time, in Low Dynamic Range (LDR) as well. Results are compared to
previous approaches quantitatively as well as qualitatively in terms of
re-renderings where illumination, material, view or shape are changed.
Comment: Stamatios Georgoulis and Konstantinos Rematas contributed equally to this work
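The abstract above trains on synthetic data, i.e. reflectance maps rendered from known Phong parameters and illumination. A minimal sketch of that forward model, assuming a single directional light and an orthographic view (both assumptions for illustration, not details from the paper):

```python
import math

def phong_reflectance_map(kd, ks, shininess, light, size=65):
    """Toy reflectance map: an orthographically viewed unit sphere shaded
    with the Phong model under one directional light. Parameter names
    (kd, ks, shininess) are illustrative, not taken from the paper."""
    view = (0.0, 0.0, 1.0)                        # camera looks down +z
    lx, ly, lz = light
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm  # normalise light direction
    img = [[0.0] * size for _ in range(size)]
    for row in range(size):
        for col in range(size):
            # map pixel coordinates to [-1, 1]^2; keep points on the sphere
            x = 2.0 * col / (size - 1) - 1.0
            y = 2.0 * row / (size - 1) - 1.0
            r2 = x * x + y * y
            if r2 > 1.0:
                continue                          # outside the sphere
            z = math.sqrt(1.0 - r2)               # sphere normal = (x, y, z)
            ndotl = max(0.0, x * lx + y * ly + z * lz)
            # reflect the light about the normal: r = 2(n.l)n - l
            rx, ry, rz = 2 * ndotl * x - lx, 2 * ndotl * y - ly, 2 * ndotl * z - lz
            rdotv = max(0.0, rx * view[0] + ry * view[1] + rz * view[2])
            img[row][col] = kd * ndotl + ks * (rdotv ** shininess)
    return img
```

With the light along the view axis, the centre pixel receives the full diffuse plus specular contribution (kd + ks); decomposing such maps back into (kd, ks, shininess) and an illumination map is the inverse problem the network solves.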
A framework for digital sunken relief generation based on 3D geometric models
Sunken relief is a special art form of sculpture whereby the depicted shapes are sunk into a given surface. It is traditionally created by laboriously carving materials such as stone. Sunken reliefs often utilize engraved lines or strokes to strengthen the impression of a 3D presence and to highlight features which would otherwise remain unrevealed. In other types of relief, smooth surfaces and their shadows convey such information in a coherent manner. Existing methods for relief generation focus on forming a smooth surface with a shallow depth that conveys the presence of 3D figures. Such methods unfortunately do not suit the art form of sunken relief, as they omit the presence of feature lines. We propose a framework to produce sunken reliefs from a known 3D geometry, which transforms the 3D objects into three layers of input to incorporate the contour lines seamlessly with the smooth surfaces. The three input layers take advantage of the geometric information and the visual cues to assist the relief generation. This framework alters existing techniques in line drawing and relief generation, and then combines them organically for this particular purpose.
A new technique based on mini-UAS for estimating water and bottom radiance contributions in optically shallow waters
The mapping of nearshore bathymetry based on spaceborne radiometers is commonly used for quality control of ocean colour products in littoral waters. However, the accuracy of these estimates is relatively poor with respect to those derived from Lidar systems, due in part to the large uncertainties of bottom depth retrievals caused by changes in bottom reflectivity. Here, we present a method based on images from mini unmanned aerial systems (UAS) for discriminating bottom-reflected and water radiance components by taking advantage of shadows created by different structures sitting on the bottom boundary. Aerial surveys were done with a Draganfly X4P drone on October 1, 2013, during low tide, in optically shallow waters of the Saint Lawrence Estuary. Colour images with a spatial resolution of 3 mm were obtained with an Olympus EPM-1 camera at 10 m height. Preliminary results showed an increase of the relative difference between bright and dark pixels (dP) toward the red wavelengths of the camera's receiver. This suggests that dP values can potentially be used as a quantitative proxy of bottom reflectivity after removing artefacts related to Fresnel reflection and bottom adjacency effects.
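The dP quantity above can be sketched as a per-channel relative difference between a sunlit and a shadowed pixel of the same bottom patch. The exact normalisation and the digital counts below are assumptions for illustration; the abstract only reports that dP increases toward the red band:

```python
def relative_difference(bright, dark):
    """Per-channel relative difference dP between a bright (sunlit) and a
    dark (shadowed) pixel of the same bottom patch. Normalising by the
    bright value is an assumption, not a detail from the paper."""
    return {ch: (bright[ch] - dark[ch]) / bright[ch] for ch in bright}

# illustrative (made-up) mean digital counts for sunlit vs shadowed sand
bright = {"R": 180.0, "G": 160.0, "B": 140.0}
dark   = {"R":  90.0, "G": 110.0, "B": 112.0}
dP = relative_difference(bright, dark)
# with these numbers dP grows toward the red channel, the trend the
# abstract reports as a proxy for bottom reflectivity
```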
Dynamic Illumination for Augmented Reality with Real-Time Interaction
Current augmented and mixed reality systems suffer from a lack of correct illumination modeling, whereby virtual objects would be rendered under the same lighting conditions as the real environment. While the entertainment industry achieves astonishing results in multiple media forms, the procedure is mostly accomplished offline. In our approach, the illumination information extracted from the physical scene is used to interactively render the virtual objects, which results in a more realistic output in real time. In this paper, we present a method that detects the physical illumination of a dynamic scene, then uses the extracted illumination to render the virtual objects added to the scene. The method has three steps that are assumed to work concurrently in real time. The first is the estimation of the direct illumination (incident light) from the physical scene using computer vision techniques through a 360° live-feed camera connected to the AR device. The second is the simulation of indirect illumination (light reflected from real-world surfaces onto the virtual objects) using region capture of 2D texture from the AR camera view. The third is rendering the virtual objects with proper lighting and shadowing characteristics using a shader language through multiple passes. Finally, we tested our work under multiple lighting conditions to evaluate the accuracy of the results, based on whether the shadows cast by the virtual objects are consistent with those cast by the real objects, at a reduced performance cost.
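The first step above, estimating direct light from a 360° feed, can be sketched very crudely: in an equirectangular luminance grid, take the brightest cell and convert its position to a direction. This single-argmax version is purely a sketch under that assumption; the paper's computer-vision pipeline is not specified at this level of detail:

```python
def dominant_light_direction(intensity):
    """Crude direct-light estimate from an equirectangular 360-degree
    luminance grid: pick the brightest cell and convert its coordinates to
    azimuth/elevation in degrees. A real-time system would cluster bright
    regions and track them across frames; this is only an illustration."""
    rows, cols = len(intensity), len(intensity[0])
    _, r, c = max((intensity[r][c], r, c)
                  for r in range(rows) for c in range(cols))
    azimuth = (c + 0.5) / cols * 360.0 - 180.0    # [-180, 180) degrees
    elevation = 90.0 - (r + 0.5) / rows * 180.0   # +90 at top, -90 at bottom
    return azimuth, elevation
```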
Basic research planning in mathematical pattern recognition and image analysis
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
Natural images from the birthplace of the human eye
Here we introduce a database of calibrated natural images publicly available
through an easy-to-use web interface. Using a Nikon D70 digital SLR camera, we
acquired about 5000 six-megapixel images of the Okavango Delta of Botswana, a
tropical savanna habitat similar to where the human eye is thought to have
evolved. Some sequences of images were captured unsystematically while
following a baboon troop, while others were designed to vary a single parameter
such as aperture, object distance, time of day or position on the horizon.
Images are available in the raw RGB format and in grayscale. Images are also
available in units relevant to the physiology of human cone photoreceptors,
where pixel values represent the expected number of photoisomerizations per
second for cones sensitive to long (L), medium (M) and short (S) wavelengths.
This database is distributed under a Creative Commons Attribution-Noncommercial
Unported license to facilitate research in computer vision, psychophysics of
perception, and visual neuroscience.
Comment: Submitted to PLoS ONE
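Since the database's cone-referred pixel values are rates (photoisomerizations per second), the expected event count over an exposure is simply rate times duration; the actual count would be Poisson-distributed around that mean. A minimal sketch, with made-up pixel values that are not taken from the database:

```python
def expected_isomerizations(rates_lms, duration_s):
    """Expected photoisomerization counts per cone class over an exposure.
    Pixel values in the database are rates (events/s) for L, M and S
    cones, so the expected count is rate * duration; the realised count
    would be Poisson-distributed with this mean."""
    return {cone: rate * duration_s for cone, rate in rates_lms.items()}

# a hypothetical daylight pixel (illustrative numbers only)
pixel = {"L": 12000.0, "M": 9500.0, "S": 2100.0}
counts = expected_isomerizations(pixel, 0.1)   # 100 ms integration time
```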
Joint Material and Illumination Estimation from Photo Sets in the Wild
Faithful manipulation of shape, material, and illumination in 2D Internet
images would greatly benefit from a reliable factorization of appearance into
material (i.e., diffuse and specular) and illumination (i.e., environment
maps). On the one hand, current methods that produce very high fidelity
results typically require controlled settings, expensive devices, or
significant manual effort. On the other hand, methods that are automatic and
work on 'in the wild' Internet images often extract only low-frequency
lighting or diffuse materials. In this work, we propose to make use of a set of
photographs in order to jointly estimate the non-diffuse materials and sharp
lighting in an uncontrolled setting. Our key observation is that seeing
multiple instances of the same material under different illumination (i.e.,
environment), and different materials under the same illumination, provides
valuable constraints that can be exploited to yield a high-quality solution
(i.e., specular materials and environment illumination) for all the observed
materials and environments. Similar constraints also arise when observing
multiple materials in a single environment, or a single material across
multiple environments. The core of this approach is an optimization procedure
that uses two neural networks that are trained on synthetic images to predict
good gradients in parametric space given observations of reflected light. We
evaluate our method on a range of synthetic and real examples to generate
high-quality estimates, qualitatively compare our results against
state-of-the-art alternatives via a user study, and demonstrate
photo-consistent image manipulation that is otherwise very challenging to
achieve.
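The core optimization above descends on appearance parameters using directions supplied by trained networks rather than by differentiating a renderer. A schematic of that loop, where the paper's CNN gradient predictor is replaced by a plain stand-in function (an assumption purely for illustration, here the analytic gradient of a toy quadratic objective):

```python
def optimize_with_predicted_gradients(predict_grad, x0, lr=0.1, steps=200):
    """Schematic of the paper's core loop: instead of differentiating the
    image-formation model, query a gradient predictor for a descent
    direction in parameter space and take a step. In the paper the
    predictor is a pair of CNNs trained on synthetic images; here it is
    an ordinary function, which is a stand-in for illustration."""
    x = list(x0)
    for _ in range(steps):
        g = predict_grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# stand-in predictor: gradient of f(x) = sum((x_i - target_i)^2),
# where `target` plays the role of the unknown material/illumination
# parameters that explain the observed reflected light
target = [0.3, 0.7]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, target)]
params = optimize_with_predicted_gradients(grad, [0.0, 0.0])
```

As long as the predicted directions are descent directions for the true objective, the loop converges toward the parameters that explain the observations, which is what makes learned gradients usable in place of analytic ones.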