
    Shape from Shading through Shape Evolution

    In this paper, we address the shape-from-shading problem by training deep networks with synthetic images. Unlike conventional approaches that combine deep learning and synthetic imagery, we propose an approach that does not need any external shape dataset to render synthetic images. Our approach consists of two synergistic processes: the evolution of complex shapes from simple primitives, and the training of a deep network for shape-from-shading. The evolution generates better shapes guided by the network training, while the training improves by using the evolved shapes. We show that our approach achieves state-of-the-art performance on a shape-from-shading benchmark.
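The abstract above hinges on rendering synthetic shading images from evolved shapes. As an illustrative sketch (not the authors' pipeline), the standard Lambertian rendering step from a height map looks like this; the light direction and clamping convention here are assumptions:

```python
import numpy as np

def render_lambertian(height, light=(0.0, 0.0, 1.0)):
    """Render a shading image from a height map under the Lambertian model.

    The surface normal at each pixel comes from the height gradients,
    and intensity is the clamped dot product with the light direction.
    """
    gy, gx = np.gradient(height.astype(float))
    # Unnormalised normals: (-dz/dx, -dz/dy, 1)
    n = np.dstack([-gx, -gy, np.ones_like(height, dtype=float)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, dtype=float)
    l /= np.linalg.norm(l)
    return np.clip(n @ l, 0.0, 1.0)

# Example: a smooth bump lit from directly above
y, x = np.mgrid[-1:1:64j, -1:1:64j]
height = np.maximum(0.0, 1.0 - x**2 - y**2)
img = render_lambertian(height)
```

Any shape representation that yields a height (or normal) map can be fed through such a renderer to produce training pairs of image and ground-truth shape.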

    New constraints on data-closeness and needle map consistency for shape-from-shading

    This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. First, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This not only improves the data closeness of the recovered needle-map, but also removes the necessity for extensive parameter tuning. Second, we exploit the improved ease of control of the new shape-from-shading process to investigate various types of needle-map consistency constraint. The first set of constraints is based on needle-map smoothness. The second avenue of investigation is to use curvature information to impose topographic constraints. Third, we explore ways in which the needle-map is recovered so as to be consistent with the image gradient field. In each case we explore a variety of robust error measures and consistency weighting schemes that can be used to impose the desired constraints on the recovered needle-map. We provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground-truth surface normals. The main conclusion drawn from our analysis is that the data-closeness constraint improves the efficiency of shape-from-shading, and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
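Enforcing the Lambertian image irradiance equation n · l = I as a hard constraint amounts to moving each estimated normal onto the cone of directions around the light source whose half-angle is arccos(I). A minimal sketch of such a geometric update (the exact procedure in the paper may differ; this version takes the shortest rotation, preserving the normal's azimuth about the light direction):

```python
import numpy as np

def enforce_irradiance(n, l, intensity):
    """Rotate a unit normal onto the cone of directions satisfying
    the Lambertian image irradiance equation n . l = I (hard constraint).

    The normal moves the shortest angular distance: its azimuth about
    the light direction is kept and only its polar angle changes.
    """
    l = l / np.linalg.norm(l)
    n = n / np.linalg.norm(n)
    theta = np.arccos(np.clip(intensity, -1.0, 1.0))  # cone half-angle
    perp = n - (n @ l) * l            # component of n perpendicular to l
    norm = np.linalg.norm(perp)
    if norm < 1e-12:                  # n parallel to l: pick any azimuth
        perp = np.array([1.0, 0.0, 0.0]) if abs(l[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        perp -= (perp @ l) * l
        norm = np.linalg.norm(perp)
    u = perp / norm
    return np.cos(theta) * l + np.sin(theta) * u

l = np.array([0.0, 0.0, 1.0])
n0 = np.array([0.3, 0.1, 0.9]); n0 /= np.linalg.norm(n0)
n1 = enforce_irradiance(n0, l, 0.8)
```

After this projection the updated needle-map reproduces the observed brightness exactly, so the remaining smoothness, topographic, or gradient-consistency terms only choose among normals on the cone.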

    The perception of shape from shading in a new light

    How do humans see three-dimensional shape based on two-dimensional shading? Much research has assumed that a ‘light from above’ bias resolves the ambiguity of shape from shading. Counter to the ‘light from above’ bias, studies of Bayesian priors have found that such a bias can be swayed by other light cues. Despite the persuasive power of the Bayesian models, many new studies and books cite the original ‘light from above’ findings. Here I present a version of the Bayesian result that can be experienced. The perception of shape-from-shading was found here to be influenced by an external light source, even when the light was obstructed and did not directly illuminate a two-dimensional stimulus. The results imply that this effect is robust and not low-level in nature. The perception of shape from shading is not necessarily based on a hard-wired internal representation of lighting direction, but rather assesses the direction of lighting in the scene adaptively. Here, for the first time, is an experiential opportunity to see what the Bayesian models have supported all along.

    Shape from Shading Using MRF Optimization with Gibbs Sampling with Quadruplet Cliques

    This paper extends the MRF formulation approach developed for solving the shape from shading problem. Our method extends the Gibbs sampling approach to solve an MRF formulation which characterizes the Shape from Shading (SFS) problem under Lambertian reflectance conditions (the algorithm is extensible to other lighting models). Our method uses a simpler set of energy functions (on point quadruplets), which is faster to converge, but less accurate.
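The quadruplet-clique idea can be sketched concretely: each 2x2 block of depths determines a facet normal, hence a Lambertian intensity, and a Gibbs sweep resamples each depth from the conditional distribution of the energy over the cliques it touches. This toy version (discrete depth candidates, vertical light, temperature value all assumptions, not the paper's exact energies) illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
LIGHT = np.array([0.0, 0.0, 1.0])

def clique_intensity(z, i, j):
    """Lambertian intensity of the 2x2 depth quadruplet with top-left (i, j):
    finite-difference gradients over the four depths give the facet normal."""
    gx = 0.5 * (z[i, j + 1] - z[i, j] + z[i + 1, j + 1] - z[i + 1, j])
    gy = 0.5 * (z[i + 1, j] - z[i, j] + z[i + 1, j + 1] - z[i, j + 1])
    n = np.array([-gx, -gy, 1.0])
    return max(0.0, n @ LIGHT / np.linalg.norm(n))

def local_energy(z, img, i, j):
    """Squared irradiance error summed over the quadruplet cliques
    that contain pixel (i, j). img is indexed by clique top-left."""
    e = 0.0
    for a in range(max(0, i - 1), min(z.shape[0] - 1, i + 1)):
        for b in range(max(0, j - 1), min(z.shape[1] - 1, j + 1)):
            e += (clique_intensity(z, a, b) - img[a, b]) ** 2
    return e

def gibbs_sweep(z, img, candidates, temp=0.05):
    """One Gibbs sweep: resample every depth from the conditional
    distribution exp(-E / temp) over a discrete candidate set."""
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            energies = []
            for c in candidates:
                z[i, j] = c
                energies.append(local_energy(z, img, i, j))
            p = np.exp(-np.array(energies) / temp)
            p /= p.sum()
            z[i, j] = candidates[rng.choice(len(candidates), p=p)]
    return z
```

Because each conditional only involves the (at most four) cliques containing the resampled pixel, a sweep is cheap; repeated sweeps with a cooling temperature approximate the MAP depth map.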

    Equivalence of oblique and frontal illumination in perspective shape from shading

    In this paper, it is shown that any oblique-illumination shape-from-shading problem under perspective projection for Lambertian reflection and a single distant light source can be converted to an equivalent frontal-illumination problem by a simple nonlinear intensity transformation, which is analogous to a rectification in stereo vision. Remarkably, it involves no approximation of depth. The method is evaluated on perspective shape-from-shading involving a wide range of oblique angles. © 2007 IEEE.

    Linear Differential Constraints for Photo-polarimetric Height Estimation

    In this paper we present a differential approach to photo-polarimetric shape estimation. We propose several alternative differential constraints based on polarisation and photometric shading information and show how to express them in a unified partial differential system. Our method uses the image ratios technique to combine shading and polarisation information in order to directly reconstruct surface height, without first computing surface normal vectors. Moreover, we are able to remove the non-linearities so that the problem reduces to solving a linear differential problem. We also introduce a new method for estimating a polarisation image from multichannel data and, finally, we show it is possible to estimate the illumination directions in a two-source setup, extending the method into an uncalibrated scenario. From a numerical point of view, we use a least-squares formulation of the discrete version of the problem. To the best of our knowledge, this is the first work to consider a unified differential approach to solve photo-polarimetric shape estimation directly for height. Numerical results on synthetic and real-world data confirm the effectiveness of our proposed method. Comment: To appear at International Conference on Computer Vision (ICCV), Venice, Italy, October 22-29, 2017.
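The abstract's key numerical idea is solving a discretised linear differential system for height directly by least squares. The paper's specific photo-polarimetric constraints are not given in the abstract, so as a generic stand-in, here is the classic least-squares height-from-gradient solve: stack one finite-difference equation per pixel pair into A z = b and solve, pinning one pixel to fix the additive constant:

```python
import numpy as np

def height_from_gradients(p, q):
    """Least-squares surface height from gradient fields p = dz/dx, q = dz/dy.

    Stacks one forward-difference equation per pixel pair into a dense
    linear system A z = b and solves it with numpy's lstsq. Height is
    recovered up to an additive constant, pinned here at pixel (0, 0).
    """
    h, w = p.shape
    rows, b = [], []

    def eq(i0, j0, i1, j1, g):
        r = np.zeros(h * w)
        r[i1 * w + j1] = 1.0
        r[i0 * w + j0] = -1.0
        rows.append(r); b.append(g)

    for i in range(h):
        for j in range(w - 1):
            eq(i, j, i, j + 1, p[i, j])      # z[i, j+1] - z[i, j] = p
    for i in range(h - 1):
        for j in range(w):
            eq(i, j, i + 1, j, q[i, j])      # z[i+1, j] - z[i, j] = q
    # pin the gauge: z[0, 0] = 0
    r = np.zeros(h * w); r[0] = 1.0
    rows.append(r); b.append(0.0)

    z, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(b), rcond=None)
    return z.reshape(h, w)

# Consistent gradients of the plane z = 2x + 3y
h, w = 6, 6
p = np.full((h, w), 2.0)
q = np.full((h, w), 3.0)
z = height_from_gradients(p, q)
```

In practice such systems are solved with sparse solvers rather than a dense lstsq; the dense version above is only for clarity on a small grid.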