Analysis and approximation of some Shape-from-Shading models for non-Lambertian surfaces
The reconstruction of a 3D object or a scene is a classical inverse problem
in Computer Vision. In the case of a single image this is called the
Shape-from-Shading (SfS) problem and it is known to be ill-posed even in a
simplified version such as the vertical light source case. A huge number of works
deal with the orthographic SfS problem based on the Lambertian reflectance
model, the most common and simplest model, which leads to an eikonal-type
equation when the light source is on the vertical axis. In this paper we want
to study non-Lambertian models, since they are more realistic and suitable
whenever one has to deal with different kinds of surfaces, rough or specular. We
will present a unified mathematical formulation of some popular orthographic
non-Lambertian models, considering vertical and oblique light directions as
well as different viewer positions. These models lead to more complex
stationary nonlinear partial differential equations of Hamilton-Jacobi type
which can be regarded as the generalization of the classical eikonal equation
corresponding to the Lambertian case. However, all the equations corresponding
to the models considered here (Oren-Nayar and Phong) have a similar structure,
so we can look for weak solutions to this class of equations in the viscosity solution
framework. Via this unified approach, we are able to develop a semi-Lagrangian
approximation scheme for the Oren-Nayar and the Phong model and to prove a
general convergence result. Numerical simulations on synthetic and real images
will illustrate the effectiveness of this approach and the main features of the
scheme, also comparing the results with previous results in the literature.Comment: Accepted version to Journal of Mathematical Imaging and Vision, 57
page
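For the Lambertian, vertical-light case mentioned above, the SfS problem reduces to an eikonal equation |∇u(x)| = f(x) with f(x) = sqrt(1/I(x)² − 1). The flavour of a semi-Lagrangian fixed-point scheme for this class of equations can be sketched as follows (a toy illustration of the approach, not the paper's scheme; the grid handling, direction sampling, and parameters are our own):

```python
import numpy as np

def sl_eikonal(f, h=1.0, n_dirs=16, n_iter=300):
    """Fixed-point semi-Lagrangian iteration for |grad u| = f on a grid,
    with u = 0 on the boundary (a toy sketch, not the paper's scheme)."""
    ny, nx = f.shape
    u = np.zeros((ny, nx))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    for _ in range(n_iter):
        best = np.full((ny, nx), np.inf)
        for t in thetas:
            # foot of the characteristic x - h*a, evaluated by bilinear interpolation
            px = np.clip(xx - h * np.cos(t), 0, nx - 1)
            py = np.clip(yy - h * np.sin(t), 0, ny - 1)
            x0 = px.astype(int); y0 = py.astype(int)
            x1 = np.minimum(x0 + 1, nx - 1); y1 = np.minimum(y0 + 1, ny - 1)
            wx = px - x0; wy = py - y0
            uk = (u[y0, x0] * (1 - wx) * (1 - wy) + u[y0, x1] * wx * (1 - wy)
                  + u[y1, x0] * (1 - wx) * wy + u[y1, x1] * wx * wy)
            # Bellman-type update: value at the foot plus the running cost
            best = np.minimum(best, uk + h * f)
        # homogeneous Dirichlet condition on the boundary of the grid
        best[0, :] = best[-1, :] = 0.0
        best[:, 0] = best[:, -1] = 0.0
        if np.max(np.abs(best - u)) < 1e-9:
            u = best
            break
        u = best
    return u
```

With f ≡ 1 and zero boundary data the fixed point approximates the distance to the boundary of the grid, a standard sanity check for schemes of this type.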
A unified approach to the well-posedness of some non-Lambertian models in Shape-from-Shading theory
In this paper we show that the introduction of an attenuation factor in the
brightness equations relative to various perspective Shape-from-Shading models
makes the corresponding differential problems well-posed. We propose a unified
approach based on the theory of viscosity solutions and show that the brightness
equations with the attenuation term admit a unique viscosity solution. We also
discuss in detail the possible boundary conditions that can be used for the
Hamilton-Jacobi equations associated with these models.
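The mechanism behind this kind of well-posedness result can be sketched in general terms (our notation, not the paper's exact equations): after a suitable change of variable, the attenuated brightness equation becomes a Hamilton-Jacobi equation with an explicit, strictly monotone zeroth-order dependence on the unknown,

```latex
H\bigl(x, v(x), \nabla v(x)\bigr) = 0 \quad \text{in } \Omega,
\qquad
\frac{\partial H}{\partial v}(x, v, p) \;\ge\; \lambda \;>\; 0 .
```

Strict monotonicity of H in v is precisely what the comparison principle for viscosity solutions requires: any subsolution lies below any supersolution once they are ordered on the boundary, and uniqueness of the viscosity solution follows.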
Stereoscopic viewing, roughness and gloss perception
This thesis presents a novel investigation into the effect stereoscopic vision has upon the
strength of perceived gloss on rough surfaces. We demonstrate that in certain cases
disparity is necessary for accurate judgements of gloss strength.
We first detail the process we used to create a two-level taxonomy of property terms,
which helped to inform the early direction of this work, before presenting the eleven
words which we found categorised the property space. This shaped careful examination
of the relevant literature, leading us to conclude that most studies into roughness, gloss,
and stereoscopic vision have been performed with unrealistic surfaces and physically
inaccurate lighting models.
To improve on the stimuli used in these earlier studies, advanced offline rendering
techniques were employed to create images of complex, naturalistic, and realistically
glossy 1/fβ noise surfaces. These images were rendered using multi-bounce path tracing
to account for interreflections and soft shadows, with a reflectance model that
captures all common light-transport phenomena. Using these images in a series of
psychophysical experiments, we first show that random phase spectra can alter the
strength of perceived gloss. These results are presented alongside pairs of the surfaces
tested which have similar levels of perceptual gloss. These surface pairs are then used to
conclude that naïve observers consistently underestimate how glossy a surface is
without the correct surface and highlight disparity, but only for the rougher
surfaces presented.
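The stimulus family described above can be illustrated with a short synthesis routine: a 1/f^β amplitude spectrum paired with a uniformly random phase spectrum, inverted with an FFT (a minimal sketch of such surfaces, not the thesis' rendering pipeline; the normalisation is our own choice):

```python
import numpy as np

def one_over_f_surface(n=256, beta=1.5, seed=0):
    """Synthesize an n x n height map with a 1/f^beta amplitude spectrum
    and uniformly random phase (a sketch of the stimulus family only)."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(n)[:, None]
    fx = np.fft.fftfreq(n)[None, :]
    freq = np.sqrt(fx**2 + fy**2)
    freq[0, 0] = np.inf                      # suppress the DC component
    amplitude = 1.0 / freq**beta             # 1/f^beta falloff
    phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))
    spectrum = amplitude * np.exp(1j * phase)
    height = np.real(np.fft.ifft2(spectrum))
    # zero-mean, unit-RMS roughness, so beta alone controls the spectrum shape
    return (height - height.mean()) / height.std()
```

Drawing two surfaces with the same β but different seeds gives the "same amplitude spectrum, different random phase" manipulation that the gloss experiments rely on.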
Detailed and Practical 3D Reconstruction with Advanced Photometric Stereo Modelling
Object 3D reconstruction has always been one of the main objectives of computer vision. After many decades of research, most techniques are still unsuccessful at recovering high resolution surfaces, especially for objects with limited surface texture. Moreover, most shiny materials are particularly hard to reconstruct.
Photometric Stereo (PS), which operates by capturing multiple images under changing illumination, has traditionally been one of the most successful techniques at recovering a large amount of surface detail, by exploiting the relationship between shading and local shape. However, using PS has been highly impractical because most approaches are only applicable in a very controlled lab setting and limited to objects exhibiting diffuse reflection.
Nevertheless, recent advances in differential modelling have made complicated Photometric Stereo models possible and variational optimisations for these kinds of models show remarkable resilience to real world imperfections such as non-Gaussian noise and other outliers. Thus, a highly accurate, photometric-based reconstruction system is now possible.
The contribution of this thesis is threefold. First of all, the Photometric Stereo model is extended in order to be able to deal with arbitrary ambient lighting. This is a step towards acquisition in a non-fully controlled lab setting. Secondly, the need for a priori knowledge of the light source brightness and attenuation characteristics is relaxed as an alternating optimisation procedure is proposed which is able to estimate these parameters. This extension allows for quick acquisition with inexpensive LEDs that exhibit unpredictable illumination characteristics (flickering etc). Finally, a volumetric parameterisation is proposed which allows one to tackle the multi-view Photometric Stereo problem in a similar manner, in a simple unified differential model. This final extension allows for complete object reconstruction merging information from multiple images taken from multiple viewpoints and variable illumination.
The theoretical work in this thesis is experimentally evaluated in a number of challenging real-world experiments, with data captured by custom-made hardware. In addition, the generality of the proposed models is demonstrated by presenting a differential model for the shape-from-polarisation problem, which leads to a unified optimisation problem fusing information from both methods. This allows for the acquisition of geometrical information about objects such as semi-transparent glass, hitherto hard to deal with.
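As background, the classical Lambertian PS baseline that the thesis builds on fits in a few lines: each pixel yields a linear system relating the known light directions to the observed intensities, and its least-squares solution is the albedo-scaled normal (the textbook model only, not the thesis' differential or multi-view formulation; names and shapes are our own conventions):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical Lambertian photometric stereo: per pixel, solve
    lights @ (rho * n) = intensities in the least-squares sense.
    images: (k, h, w) stack of k grayscale images.
    lights: (k, 3) unit light directions, one row per image."""
    k, h, w = images.shape
    I = images.reshape(k, -1)                        # k x (h*w)
    # solve for the albedo-scaled normal at every pixel at once
    g, *_ = np.linalg.lstsq(lights, I, rcond=None)   # 3 x (h*w)
    rho = np.linalg.norm(g, axis=0)                  # albedo = |g|
    n = np.divide(g, rho, out=np.zeros_like(g), where=rho > 0)
    return n.reshape(3, h, w), rho.reshape(h, w)
```

With three or more non-coplanar lights the system is well-posed per pixel; the thesis' extensions relax exactly the assumptions hidden here (no ambient term, known light brightness, single viewpoint).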
Modeling of SHF/EHF Radio-Wave Scattering for Curved Surfaces with Voxel Cone Tracing
Efficient and accurate radio propagation modeling is essential for optimization of both radio sensing and communication systems. However, highly accurate full-wave methods remain inefficient at high frequencies, as the unit of computation (typically, a voxel) has to be made much smaller than the wavelength. On the other hand, ray-based approaches offer the desired speed, but the surface element (typically, a triangle) must be made much larger than the wavelength, making it difficult to represent complex curved surfaces of common objects such as cars or unmanned aerial vehicles. As a result, for SHF/EHF bands, it is challenging to select a method that is both fast and capable of capturing curved surfaces correctly. To address this matter, we present a method that offers a reasonable trade-off between speed and accuracy for radio propagation modeling in the bands of interest. Specifically, we combine an efficient voxel scene representation targeting a cone tracing algorithm with a statistical scattering model. To confirm the validity of our approach, we report the dependence of reflected power on the distance for basic primitives such as the cone and sphere, for which closed-form radar cross-section solutions are known.
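The kind of closed-form reference curve used in such a validation can be sketched with the monostatic radar range equation and the optical-region RCS of a conducting sphere (generic symbols, not the paper's setup or parameters):

```python
import math

def received_power_sphere(pt, gain, wavelength, radius, distance):
    """Monostatic radar range equation with the optical-region RCS of a
    perfectly conducting sphere, sigma = pi * a^2 (valid for a >> lambda).
    A reference curve of the kind a cone tracer can be checked against."""
    sigma = math.pi * radius**2
    return (pt * gain**2 * wavelength**2 * sigma) / ((4.0 * math.pi)**3 * distance**4)
```

Doubling the distance reduces the received power by a factor of 2⁴ = 16, the R⁻⁴ law that the simulated reflected-power curves should reproduce for these primitives.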
Image-based surface reflectance remapping for consistent and tool-independent material appearance
Physically-based rendering in Computer Graphics requires knowledge of material properties beyond 3D shapes, textures and colors, in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling
workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through a collaboration among several
departments, which exchange data. Unfortunately, exchanged data often differ from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour and computational cost.
This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy to tackle material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks are proposed, working under these constraints.
The first one, validated by means of perceptual studies, is aimed at the remapping of BRDF parameters and is useful when the parameters used for the reference rendering are available. The second one provides consistent material appearance across different renderers, even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool, and manipulates the output of other renderers in order to be consistent with the reference.
Cross-Spectral Face Recognition Between Near-Infrared and Visible Light Modalities.
In this thesis, improvement of face recognition performance with the use of images from the visible (VIS) and near-infrared (NIR) spectrum is attempted. Face recognition systems can be adversely affected by scenarios which involve a significant amount of illumination variation across images of the same subject. Cross-spectral face recognition systems using images collected across the VIS and NIR spectrum can counter the ill effects of illumination variation by standardising both sets of images. A novel preprocessing technique is proposed, which attempts the transformation of faces across both modalities to a feature space with enhanced correlation. Direct matching across the modalities is not possible due to the inherent spectral differences between NIR and VIS face images. Compared to a VIS light source, NIR radiation has a greater penetrative depth when incident on human skin. This fact, in addition to the greater number of scattering interactions within the skin undergone by rays from the NIR spectrum, can alter the appearance of the human face enough to prevent a direct match with the corresponding VIS face. Several ways to bridge the gap between NIR and VIS faces have been proposed previously. Mostly data-driven, these techniques include standardised photometric normalisation techniques and subspace projections. A generative approach driven by a true physical model has not been investigated until now. In this thesis, it is proposed that a large proportion of the scattering interactions present in the NIR spectrum can be accounted for using a model for subsurface scattering. A novel subsurface scattering inversion (SSI) algorithm is developed that implements an inversion approach based on translucent surface rendering from the computer graphics field, whereby the reversal of the first-order effects of subsurface scattering is attempted.
The SSI algorithm is then evaluated against several preprocessing techniques, using various permutations of feature extraction and subspace projection algorithms. The results of this evaluation show an improvement in cross-spectral face recognition performance using SSI over existing Retinex-based approaches. The combination with an existing photometric normalisation technique, Sequential Chain, performs best, with a Rank-1 recognition rate of 92.5%. In addition, the improvement in performance obtained with non-linear projection models shows that an element of non-linearity exists in the relationship between NIR and VIS faces.