9 research outputs found

    Shape reconstruction from shading using linear approximation

    Shape from shading (SFS) deals with the recovery of 3D shape from a single monocular image. The problem was formally introduced by Horn in the early 1970s and has since received considerable attention, with several efforts made to improve the shape recovery. In this thesis, we present a fast SFS algorithm, which is a purely local method and is highly parallelizable. In our approach, we first use discrete approximations of the surface gradients, p and q, based on finite differences, and then linearize the reflectance function in the depth Z(x, y) instead of in p and q. This method is simple and efficient, and yields better results for images with central illumination or low-angle illumination. Furthermore, our method is more general and can be applied to either Lambertian or specular surfaces. The algorithm has been tested on several synthetic and real images of both Lambertian and specular surfaces, and good results have been obtained. However, our method assumes that the input image contains only a single object with uniform albedo, which is commonly assumed in most SFS methods. Our algorithm performs poorly on images with nonuniform albedo and produces incorrect shape for images containing objects with scale ambiguity, because such images violate the basic assumptions made by our SFS method. Therefore, we extended our method to images with nonuniform albedo. We first estimate the albedo value for each pixel and segment the scene into regions of uniform albedo. We then adjust the intensity of each pixel by dividing it by the corresponding albedo value before applying our linear shape from shading method. In this way, the modified method is able to deal with nonuniform albedo. When multiple objects differing only in scale are present in a scene, there may be points with the same surface orientation but different depth values; no existing SFS method can resolve this kind of ambiguity directly. We also present a new approach for images containing multiple objects with scale ambiguity: a depth estimate is derived for each patch using a minimum-downhill approach and then re-aligned using background information to obtain the correct depth map. Experimental results are presented for several synthetic and real images. Finally, this thesis investigates the problem of discrete approximation under perspective projection. The straightforward finite-difference approximation of the surface gradients used under orthographic projection is no longer applicable here, because the image position components are in fact functions of the depth. We provide a direct solution for the discrete approximation under perspective projection: the surface gradient is derived mathematically by relating the depth value of a surface point to the depth value of the corresponding image point. We also demonstrate how the new discrete approximation can be applied to a more complicated and realistic reflectance model for the SFS problem.
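
    The following is a minimal sketch of the kind of linearized, purely local update described in this abstract, not the thesis's implementation: backward finite differences give p and q, the Lambertian reflectance is linearized in the depth Z, and every pixel is updated independently (hence the method is easy to parallelize). The light direction, iteration count, numerical derivative step, damping constant, and the wrap-around handling at image borders are all illustrative assumptions.

    import numpy as np

    def lambertian_reflectance(p, q, light):
        """Reflectance map R(p, q) for a Lambertian surface lit from `light`."""
        lx, ly, lz = light
        return (lz + p * lx + q * ly) / np.sqrt(1.0 + p ** 2 + q ** 2)

    def linear_sfs(image, light, n_iter=100, eps=1e-6):
        """Recover a relative depth map Z from a single image by iterating a
        per-pixel Newton-style update of the linearized brightness constraint
        f(Z) = E(x, y) - R(p, q) = 0."""
        Z = np.zeros_like(image, dtype=float)
        for _ in range(n_iter):
            # Backward finite-difference approximations of the surface gradients
            # (np.roll wraps at the border; good enough for a sketch).
            p = Z - np.roll(Z, 1, axis=1)
            q = Z - np.roll(Z, 1, axis=0)
            f = image - lambertian_reflectance(p, q, light)
            # dR/dZ estimated numerically; dp/dZ = dq/dZ = 1 for these differences.
            h = 1e-3
            dR = (lambertian_reflectance(p + h, q + h, light)
                  - lambertian_reflectance(p, q, light)) / h
            df = -dR
            # Damped Newton step, safe where df is close to zero.
            Z = Z - f * df / (df ** 2 + eps)
        return Z

    # Usage: intensities scaled to [0, 1]; for nonuniform albedo, divide the
    # image by an estimated albedo map first, as described in the abstract.
    # Z = linear_sfs(image / albedo, light=(0.05, 0.05, 0.997))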

    Reconstruction et lissage de surfaces discrètes

    Reconstructing a surface from the photometric information contained in one or several images is a classical problem in computer vision. In this thesis, carried out in collaboration with archaeologists from the IPGQ (Institut de Préhistoire et de Géologie du Quaternaire), we focus on the reconstruction of discrete surfaces and the extraction of their parameters. First, we consider the problem of surface reconstruction through a discrete approach, combining the geometric information of the discrete surface being reconstructed with the photometric information contained in one or several images. We first defined a method based on the propagation of discrete equal-height contours. Although this approach gives interesting results on synthetic images, we turned to a second, much more robust approach based on the propagation of equal-height regions (called patches), in which the equal-height contour is handled implicitly. The resulting reconstruction is robust with respect to both noise and the photometric information, and through the patches it explicitly resolves the concave/convex ambiguity when a single frontal light source is used for the reconstruction. Moreover, since our approach does not rely solely on the analytical expression of the reflectance function (usually Lambertian), we were able to reconstruct real specular objects using other reflectance models such as Nayar's model. Finally, we present original results showing reconstruction from several drawings associated with several lighting directions; these results suggest an original way of defining shapes from images associated with an imaginary surface. In a second part, we introduce a new reversible method for smoothing discrete surfaces. This method is based on estimating the characteristics of the discrete tangent plane from a statistical criterion and a geometric criterion. The statistical criterion relies on the distribution of the different types of surface elements (surfels) present on the surface, while the geometric criterion is defined from the inequalities of the discrete plane. From these characteristics, we then define a surface by projecting the discrete points onto the tangent plane. This projection maps points of Z3 into R3 while remaining reversible. The resulting Euclidean surface is useful both for extracting geometric parameters and for visualization, with no loss of information with respect to the initial discrete surface.
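
    As a rough illustration of the projection step described above (and only that step; the estimation of tangent-plane parameters from surfel statistics and discrete-plane inequalities is not reproduced), the sketch below orthogonally projects voxel centers from Z3 onto a given tangent plane and checks reversibility by rounding back. The plane parameters and sample voxels are hypothetical.

    import numpy as np

    def project_onto_plane(points, normal, offset):
        """Orthogonally project integer points onto the plane n.x + offset = 0."""
        normal = np.asarray(normal, dtype=float)
        points = np.asarray(points, dtype=float)
        dist = (points @ normal + offset) / (normal @ normal)
        return points - np.outer(dist, normal)

    def recover_voxels(projected):
        """Round projected coordinates back to Z3; the round trip succeeds
        whenever each displacement stays below half a voxel."""
        return np.rint(projected).astype(int)

    # Illustrative round trip on a few voxel centers near a discrete plane.
    voxels = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [-1, 0, 1], [2, 0, -1]])
    smoothed = project_onto_plane(voxels, normal=(1, 2, 3), offset=0)
    assert np.array_equal(recover_voxels(smoothed), voxels)  # reversibility check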

    Shape from photomotion

    No full text

    Shape From Photomotion

    No full text
    We introduce a new technique called shape from photomotion. It uses a series of 2-D Lambertian images, generated by moving a light source around a scene, to recover the depth map. In each of the images, the object in the scene remains at a fixed position and the only variable is the light source direction. The movement of the light source causes a change in the intensity of any given point in the image. The change in intensity is what enables us to recover the unknown parameter, the depth map, since it remains constant in each of the input images. Our method differs from photometric stereo in the sense that the shape estimate is not only computed for each light source orientation, but also gradually refined by photomotion.

    Shape From Photomotion

    No full text
    We introduce a new technique called shape from photomotion. It uses a series of 2-D Lambertian images, generated by moving a light source around a scene, to recover the depth map. In each of the images, the object in the scene remains at a fixed position and the only variable is the light source direction. The movement of the light source causes a change in the intensity of any given point in the image. The change in intensity is what enables us to recover the unknown parameter, the depth map, since it remains constant in each of the input images. Our method differs from photometric stereo in the sense that the shape estimate is not only computed for each light source orientation, but also gradually refined by photomotion. 1 Introduction: Shape from shading uses a single image to recover the shape information. It requires the least amount of input; however, this also introduces disadvantages. One disadvantage is that since it has less image information available, it is less accurate. A..

    ARTICLE NO. 0016

    No full text
    Traditional shape from shading techniques, using a single image, do not reconstruct accurate surfaces and have difficulty with shadow areas. Traditional shape from photometric stereo techniques have the disadvantage that they need all of the input images together at once to minimize the total cost, and this process must be restarted if new images become available. To overcome the shortcomings of the above two techniques, we introduce a new technique called shape from photomotion. Shape from photomotion uses a series of 2-D Lambertian input images, generated by moving a light source around a scene, to recover the depth map. In each of the input images, the object in the scene remains at a fixed position and the only variable is the light source direction. Shape from shading uses a single light source, i.e., one image as input, to recover the shape information [4, 10, 17]. It has the advantage that it requires the least amount of input; however, this also introduces disadvantages. One disadvantage is that since it has less image information

    Photomotion

    No full text
    Traditional shape from shading techniques, using a single image, do not reconstruct accurate surfaces and have difficulty with shadow areas. Traditional shape from photometric stereo techniques have the disadvantage that they need all of the input images together at once to minimize the total cost, and this process must be restarted if new images become available. To overcome the shortcomings of the above two techniques, we introduce a new technique called shape from photomotion. Shape from photomotion uses a series of 2-D Lambertian input images, generated by moving a light source around a scene, to recover the depth map. In each of the input images, the object in the scene remains at a fixed position and the only variable is the light source direction. The movement of the light source causes a change in the intensity of any given point in the image. The change in intensity is what enables us to recover the unknown parameter, the depth map, since it remains constant in each of the input images. This configuration is suitable for iterative refinement through the use of the extended Kalman filter. Our novel method for computing shape is a continuous form of the photometric stereo technique. It significantly differs from photometric stereo in the sense that the shape estimate will not only be computed for each light source orientation, but also gradually be refined by photomotion. Since the camera is fixed, the mapping between the depths at various light source locations is known; therefore, this method has an advantage over those which move the camera (egomotion) and keep the light source fixed. Results of this method are presented for sequences of synthetic and real images. © 1996 Academic Press, Inc.
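
    The abstract describes incremental refinement of the depth map with an extended Kalman filter; the sketch below is only a simplified per-pixel illustration of that incremental flavor, not the paper's formulation. It assumes a Lambertian pixel whose intensity is linear in the scaled normal g = albedo * n, so each new light direction refines a running estimate with a standard linear Kalman (recursive least squares) update. Function names, noise values, and the example lights are hypothetical.

    import numpy as np

    def refine_scaled_normal(lights, intensities, meas_noise=1e-2):
        """Incrementally estimate g = albedo * normal for one pixel from a
        sequence of (light direction, measured intensity) observations."""
        g = np.zeros(3)                          # running estimate of albedo * normal
        P = np.eye(3) * 1e3                      # large initial uncertainty
        for light, inten in zip(lights, intensities):
            light = np.asarray(light, dtype=float)
            S = light @ P @ light + meas_noise   # innovation variance
            K = P @ light / S                    # Kalman gain
            g = g + K * (inten - light @ g)      # fold in the new image's intensity
            P = P - np.outer(K, light @ P)       # shrink the uncertainty
        return g                                 # albedo = |g|, normal = g / |g|

    # Usage with three hypothetical light directions and measured intensities;
    # new images can be folded in later without restarting the estimation.
    lights = [(0, 0, 1), (0.5, 0, 0.866), (0, 0.5, 0.866)]
    g = refine_scaled_normal(lights, intensities=[0.9, 0.6, 0.7])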
