
    Shape reconstruction from shading using linear approximation

    Shape from shading (SFS) deals with the recovery of 3D shape from a single monocular image. This problem was formally introduced by Horn in the early 1970s. Since then it has received considerable attention, and several efforts have been made to improve the shape recovery. In this thesis, we present a fast SFS algorithm, which is a purely local method and is highly parallelizable. In our approach, we first use discrete approximations for the surface gradients, p and q, based on finite differences, and then linearize the reflectance function in the depth, Z(x, y), instead of in p and q. This method is simple and efficient, and yields better results for images with central or low-angle illumination. Furthermore, our method is more general and can be applied to either Lambertian or specular surfaces. The algorithm has been tested on several synthetic and real images of both Lambertian and specular surfaces, and good results have been obtained. However, our method assumes that the input image contains only a single object with uniform albedo, as is commonly assumed in most SFS methods. Our algorithm performs poorly on images with nonuniform albedo and produces incorrect shapes for images containing objects with scale ambiguity, because those images violate the basic assumptions of our SFS method. We therefore extend our method to images with nonuniform albedo. We first estimate the albedo value for each pixel and segment the scene into regions of uniform albedo. We then adjust the intensity of each pixel by dividing it by the corresponding albedo value before applying our linear shape-from-shading method. In this way the modified method is able to deal with nonuniform albedo. When multiple objects differing only in scale are present in a scene, there may be points with the same surface orientation but different depth values; no existing SFS method can resolve this kind of ambiguity directly. We also present a new approach for images containing multiple objects with scale ambiguity. A depth estimate is derived for each patch using a minimum-downhill approach and then re-aligned using background information to obtain the correct depth map. Experimental results are presented for several synthetic and real images. Finally, this thesis investigates the problem of discrete approximation under perspective projection. The straightforward finite-difference approximation for the surface gradients used under orthographic projection is no longer applicable here, because the image position components are in fact functions of the depth. We provide a direct solution for the discrete approximation under perspective projection: the surface gradient is derived mathematically by relating the depth value of a surface point to the depth value of the corresponding image point. We also demonstrate how the new discrete approximation can be applied to a more complicated and realistic reflectance model for the SFS problem.
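    The per-pixel update described above (finite-difference gradients, then a reflectance function linearized in the depth Z) can be sketched as a simple Newton iteration. The following Python fragment is only an illustrative sketch assuming a Lambertian reflectance map, uniform albedo, and a known distant light direction; the function and variable names are not from the thesis.

```python
import numpy as np

def linear_sfs(I, light, n_iter=50, eps=1e-6):
    """Newton iteration on a reflectance map linearized in the depth Z.

    I     -- image intensities normalized to [0, 1], shape (ny, nx)
    light -- unit light direction (lx, ly, lz), assumed known
    Illustrative sketch (Lambertian, uniform albedo), not the thesis code.
    """
    lx, ly, lz = light
    Z = np.zeros_like(I, dtype=float)
    for _ in range(n_iter):
        # Backward finite differences for the surface gradients p = dZ/dx, q = dZ/dy.
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        denom = np.sqrt(1.0 + p**2 + q**2)
        num = -p * lx - q * ly + lz
        R = num / denom                      # Lambertian reflectance map R(p, q)
        f = I - R                            # brightness constraint f(Z) = 0
        # dR/dp and dR/dq; with backward differences dp/dZ = dq/dZ = 1,
        # so df/dZ = -(dR/dp + dR/dq).
        dR_dp = -lx / denom - p * num / denom**3
        dR_dq = -ly / denom - q * num / denom**3
        df_dZ = -(dR_dp + dR_dq)
        df_dZ = np.where(np.abs(df_dZ) < eps, eps, df_dZ)
        Z = Z - f / df_dZ                    # per-pixel Newton step
    return Z
```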

    Analysis and approximation of some Shape-from-Shading models for non-Lambertian surfaces

    The reconstruction of a 3D object or a scene is a classical inverse problem in Computer Vision. In the case of a single image this is called the Shape-from-Shading (SfS) problem, and it is known to be ill-posed even in simplified versions such as the vertical light source case. A huge number of works deal with the orthographic SfS problem based on the Lambertian reflectance model, the most common and simplest model, which leads to an eikonal-type equation when the light source is on the vertical axis. In this paper we study non-Lambertian models, since they are more realistic and suitable whenever one has to deal with different kinds of surfaces, rough or specular. We present a unified mathematical formulation of some popular orthographic non-Lambertian models, considering vertical and oblique light directions as well as different viewer positions. These models lead to more complex stationary nonlinear partial differential equations of Hamilton-Jacobi type, which can be regarded as generalizations of the classical eikonal equation corresponding to the Lambertian case. However, all the equations corresponding to the models considered here (Oren-Nayar and Phong) have a similar structure, so we can look for weak solutions of this class in the viscosity solution framework. Via this unified approach, we are able to develop a semi-Lagrangian approximation scheme for the Oren-Nayar and the Phong models and to prove a general convergence result. Numerical simulations on synthetic and real images illustrate the effectiveness of this approach and the main features of the scheme, also comparing the results with previous results in the literature. Comment: accepted version, Journal of Mathematical Imaging and Vision, 57 pages.
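    For reference, the Lambertian vertical-light case mentioned above reduces to the classical eikonal equation, of which the non-Lambertian models are Hamilton-Jacobi generalizations. The LaTeX snippet below writes that special case in standard notation; the symbols u, I, and Omega are not taken from the paper.

```latex
% Lambertian SfS with a vertical light source: the normalized brightness
% I(x) \in (0, 1] and the surface u satisfy I(x) = 1/\sqrt{1 + |\nabla u(x)|^2},
% i.e. the eikonal equation
\[
  |\nabla u(x)| \;=\; \sqrt{\frac{1}{I(x)^2} - 1}, \qquad x \in \Omega,
\]
% whereas the Oren-Nayar and Phong models studied in the paper lead to more
% general stationary Hamilton-Jacobi equations H(x, \nabla u(x)) = 0.
```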

    An approximation scheme for an Eikonal Equation with discontinuous coefficient

    We consider the stationary Hamilton-Jacobi equation in the case where the dynamics can vanish at some points and the cost function is strictly positive but allowed to be discontinuous. More precisely, we consider a special class of discontinuities for which the notion of viscosity solution is well suited. We propose a semi-Lagrangian scheme for the numerical approximation of the viscosity solution in the sense of Ishii and we study its properties. We also prove an a priori error estimate for the scheme in an integral norm. The last section contains some applications to control and image processing problems.
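    As a rough illustration of the semi-Lagrangian idea (not the paper's scheme or notation), an eikonal-type equation |grad u| = f with a positive, possibly discontinuous cost f can be approximated by a fixed-point iteration in which each grid value is updated from interpolated values a small step back along candidate characteristic directions. All names and parameters below are illustrative.

```python
import numpy as np

def semi_lagrangian_eikonal(f, target_mask, dx=1.0, h=0.5, n_sweeps=200, n_dirs=16):
    """Fixed-point semi-Lagrangian iteration for |grad u| = f on a 2D grid.

    f           -- positive (possibly discontinuous) cost field, shape (ny, nx)
    target_mask -- boolean array, True where u = 0 is prescribed
    h           -- step taken along each candidate characteristic direction
    Illustrative sketch only, not the scheme analysed in the paper.
    """
    ny, nx = f.shape
    big = 1e6
    u = np.where(target_mask, 0.0, big)
    dirs = [(np.cos(t), np.sin(t))
            for t in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)]
    ys, xs = np.mgrid[0:ny, 0:nx].astype(float)

    def interp(u, y, x):
        # Bilinear interpolation of u at (y, x), clamped to the grid.
        y = np.clip(y, 0, ny - 1); x = np.clip(x, 0, nx - 1)
        y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
        y1 = np.minimum(y0 + 1, ny - 1); x1 = np.minimum(x0 + 1, nx - 1)
        wy = y - y0; wx = x - x0
        return ((1 - wy) * (1 - wx) * u[y0, x0] + (1 - wy) * wx * u[y0, x1]
                + wy * (1 - wx) * u[y1, x0] + wy * wx * u[y1, x1])

    for _ in range(n_sweeps):
        best = np.full_like(u, big)
        for ax, ay in dirs:
            # Dynamic-programming update: u(x) <- min_a { u(x - h a) + h f(x) }.
            cand = interp(u, ys - h * ay / dx, xs - h * ax / dx) + h * f
            best = np.minimum(best, cand)
        u = np.where(target_mask, 0.0, np.minimum(u, best))
    return u
```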

    Analysis of surface parametrizations for modern photometric stereo modeling

    Three-dimensional shape recovery based on Photometric Stereo (PS) has recently seen strong improvements thanks to new mathematical models based on ratios of partial differential irradiance equations. This modern approach to PS accounts for more realistic physical effects, among which light attenuation and radial light propagation from a point light source. Since the approximation of the surface is performed with a single-step method, sensitivity to noise prevents accurate reconstruction. In this paper we analyse a well-known parametrization of the three-dimensional surface, extending it to arbitrary auxiliary convex projection functions. Experiments on synthetic data show preliminary results in which more accurate reconstructions can be achieved by using a more suitable parametrization, especially in the case of noisy input images.
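    The irradiance-equation ratios mentioned above can be illustrated in the simplest Lambertian, distant-light, orthographic setting; the models in the paper additionally include point-light attenuation and radial propagation, which are not shown here.

```latex
% Lambertian irradiance equations for two light sources l_1, l_2 and albedo rho:
%   I_k(x, y) = \rho(x, y)\, n(x, y) \cdot l_k , \quad k = 1, 2.
% Taking their ratio eliminates the albedo and yields a linear PDE in the
% gradient of the depth z:
\[
  \frac{I_1}{I_2} \;=\; \frac{n \cdot l_1}{n \cdot l_2},
  \qquad
  n \;=\; \frac{(-z_x,\, -z_y,\, 1)}{\sqrt{1 + z_x^2 + z_y^2}},
\]
\[
  \bigl(I_2\, l_1 - I_1\, l_2\bigr) \cdot (-z_x,\, -z_y,\, 1) \;=\; 0 .
\]
```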

    Shape-from-shading using the heat equation

    This paper offers two new directions to shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning our attention to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady-state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x-y, or azimuthal, component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and hence recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
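    A very rough sketch of the first contribution, under strong simplifying assumptions, is given below: a scalar field is diffused with an explicit finite-difference heat step, its gradient supplies the azimuthal direction, and Lambert's law fixes the z component of the unit normal. The initialization from the image and all names are illustrative and are not the paper's implementation.

```python
import numpy as np

def heat_equation_normals(I, n_iter=500, dt=0.2):
    """Illustrative heat-equation smoothing for SFS surface normals.

    Works in a frame where the light direction is the z axis, so Lambert's
    law fixes n_z to the normalized brightness I in [0, 1]. The azimuthal
    (x-y) component is taken from the gradient of a scalar field phi evolved
    under the heat equation. Sketch only, not the method of the paper.
    """
    phi = I.astype(float).copy()          # assumed initialization (illustrative)
    for _ in range(n_iter):
        # Explicit finite-difference step of the heat equation phi_t = laplacian(phi).
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
        phi = phi + dt * lap
    # Azimuthal direction from the gradient of the smoothed scalar field.
    gy, gx = np.gradient(phi)
    mag = np.sqrt(gx**2 + gy**2) + 1e-12
    # Hard Lambertian constraint: n_z = I, and |n| = 1 fixes the x-y magnitude.
    nz = np.clip(I, 0.0, 1.0)
    r = np.sqrt(np.maximum(1.0 - nz**2, 0.0))
    nx = r * gx / mag
    ny = r * gy / mag
    return np.stack([nx, ny, nz], axis=-1)
```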

    Opt: A Domain Specific Language for Non-linear Least Squares Optimization in Graphics and Imaging

    Many graphics and vision problems can be expressed as non-linear least squares optimizations of objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance on modern GPUs in interactive applications. In this work, we propose a new language, Opt (available at http://optlang.org), for writing these objective functions over image- or graph-structured unknowns concisely and at a high level. Our compiler automatically transforms these specifications into state-of-the-art GPU solvers based on Gauss-Newton or Levenberg-Marquardt methods. Opt can generate different variations of the solver, so users can easily explore tradeoffs in numerical precision, matrix-free methods, and solver approaches. In our results, we implement a variety of real-world graphics and vision applications. Their energy functions are expressible in tens of lines of code and compile to highly optimized GPU solver implementations. These solvers have performance competitive with the best published hand-tuned, application-specific GPU solvers, and are orders of magnitude faster than a general-purpose auto-generated solver.
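    Opt's own domain-specific syntax is not reproduced here; instead, the snippet below sketches the generic dense Gauss-Newton / Levenberg-Marquardt iteration that such solvers automate and specialize for the GPU. All names, and the curve-fitting example, are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20, damping=0.0):
    """Generic Gauss-Newton / Levenberg-Marquardt iteration for min_x 0.5*||r(x)||^2.

    residual -- function x -> r(x), shape (m,)
    jacobian -- function x -> J(x), shape (m, n)
    damping  -- 0 gives Gauss-Newton, > 0 gives a Levenberg-Marquardt step
    Dense and illustrative only; not Opt code.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        # Normal equations (J^T J + lambda I) dx = -J^T r.
        A = J.T @ J + damping * np.eye(x.size)
        g = J.T @ r
        dx = np.linalg.solve(A, -g)
        x = x + dx
    return x

# Example: fit y = a * exp(b * t) to noisy samples (illustrative energy).
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.randn(50)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p_hat = gauss_newton(res, jac, x0=[1.0, -1.0], damping=1e-3)
```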

    Joint Material and Illumination Estimation from Photo Sets in the Wild

    Faithful manipulation of shape, material, and illumination in 2D Internet images would greatly benefit from a reliable factorization of appearance into material (i.e., diffuse and specular) and illumination (i.e., environment maps). On the one hand, current methods that produce very high fidelity results typically require controlled settings, expensive devices, or significant manual effort. On the other hand, methods that are automatic and work on 'in the wild' Internet images often extract only low-frequency lighting or diffuse materials. In this work, we propose to make use of a set of photographs in order to jointly estimate the non-diffuse materials and sharp lighting in an uncontrolled setting. Our key observation is that seeing multiple instances of the same material under different illumination (i.e., environments), and different materials under the same illumination, provides valuable constraints that can be exploited to yield a high-quality solution (i.e., specular materials and environment illumination) for all the observed materials and environments. Similar constraints also arise when observing multiple materials in a single environment, or a single material across multiple environments. The core of this approach is an optimization procedure that uses two neural networks, trained on synthetic images, to predict good gradients in parametric space given an observation of reflected light. We evaluate our method on a range of synthetic and real examples to generate high-quality estimates, qualitatively compare our results against state-of-the-art alternatives via a user study, and demonstrate photo-consistent image manipulation that is otherwise very challenging to achieve.
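    The optimization procedure described above can be caricatured as descent driven by learned gradient predictors. The sketch below is purely illustrative: material_net and light_net are hypothetical stand-ins for the two trained networks, and the alternating update structure is only an assumption, not the paper's algorithm.

```python
import numpy as np

def learned_gradient_descent(observations, material0, envmap0,
                             material_net, light_net,
                             n_iter=100, lr=0.1):
    """Hypothetical optimization loop driven by learned gradient predictors.

    material_net / light_net -- stand-ins for networks trained on synthetic
    data that map an observation and the current parameters to descent
    directions in material and illumination parameter space.
    """
    material, envmap = np.array(material0, dtype=float), np.array(envmap0, dtype=float)
    for _ in range(n_iter):
        for obs in observations:
            # Each photo constrains one (material, environment) pair.
            material = material - lr * material_net(obs, material, envmap)
            envmap = envmap - lr * light_net(obs, material, envmap)
    return material, envmap
```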