
    Shape from Shading through Shape Evolution

    In this paper, we address the shape-from-shading problem by training deep networks with synthetic images. Unlike conventional approaches that combine deep learning and synthetic imagery, we propose an approach that does not need any external shape dataset to render synthetic images. Our approach consists of two synergistic processes: the evolution of complex shapes from simple primitives, and the training of a deep network for shape-from-shading. The evolution generates better shapes guided by the network training, while the training improves by using the evolved shapes. We show that our approach achieves state-of-the-art performance on a shape-from-shading benchmark.
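
    The co-evolution loop described in this abstract can be illustrated with a small sketch. This is a minimal, hedged illustration rather than the authors' system: the Gaussian-bump primitives, the mutation step, and the variance-based fitness below stand in for the paper's shape evolution and network-guided selection, and no deep network is actually trained here.

    import numpy as np

    def primitive_height(size=64, n_bumps=3, rng=None):
        # Compose a height map from simple Gaussian primitives (an illustrative
        # stand-in for the paper's simple starting shapes).
        rng = rng or np.random.default_rng()
        y, x = np.mgrid[0:size, 0:size]
        z = np.zeros((size, size))
        for _ in range(n_bumps):
            cx, cy = rng.uniform(10, size - 10, 2)
            s, a = rng.uniform(4, 12), rng.uniform(2, 8)
            z += a * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2))
        return z

    def render_lambertian(z, light=(0.0, 0.0, 1.0)):
        # Render the height map as a synthetic training image under Lambert's law.
        gy, gx = np.gradient(z)
        n = np.dstack([-gx, -gy, np.ones_like(z)])
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        l = np.asarray(light) / np.linalg.norm(light)
        return np.clip(n @ l, 0.0, None)

    rng = np.random.default_rng(0)
    population = [primitive_height(rng=rng) for _ in range(8)]
    for generation in range(5):
        images = [render_lambertian(z) for z in population]
        # In the paper, a deep SFS network would be trained on these (image, shape)
        # pairs and its reconstruction error would guide selection; a placeholder
        # image-variance score is used here only to keep the sketch self-contained.
        fitness = [img.var() for img in images]
        ranked = sorted(zip(fitness, population), key=lambda p: -p[0])
        survivors = [z for _, z in ranked[:4]]
        population = survivors + [z + rng.normal(0, 0.5, z.shape) for z in survivors]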

    Object recognition using shape-from-shading

    This paper investigates whether surface topography information extracted from intensity images using a recently reported shape-from-shading (SFS) algorithm can be used for the purposes of 3D object recognition. We consider how curvature and shape-index information delivered by this algorithm can be used to recognize objects based on their surface topography. We explore two contrasting object recognition strategies. The first of these is based on a low-level attribute summary and uses histograms of curvature and orientation measurements. The second approach is based on the structural arrangement of constant shape-index maximal patches and their associated region attributes. We show that region curvedness and a string ordering of the regions according to size provide a recognition accuracy of about 96 percent. By polling various recognition schemes, including a graph matching method, we show that a recognition rate of 98-99 percent is achievable.
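
    As a hedged illustration of the curvature attributes mentioned above, the sketch below computes the Koenderink shape index and curvedness from the Hessian of a height map and builds a low-level histogram summary. The paper derives these quantities from the SFS needle map instead, and the bin count, sign convention, and similarity measure here are illustrative choices, not the authors' exact scheme.

    import numpy as np

    def shape_index_and_curvedness(z):
        # Principal curvatures approximated from the Hessian of a height map z,
        # combined into Koenderink's shape index S in [-1, 1] and curvedness C.
        zy, zx = np.gradient(z)
        zxy, zxx = np.gradient(zx)
        zyy, _ = np.gradient(zy)
        mean = 0.5 * (zxx + zyy)
        diff = np.sqrt(np.maximum((0.5 * (zxx - zyy)) ** 2 + zxy ** 2, 0.0))
        k1, k2 = mean + diff, mean - diff
        S = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)   # one common sign convention
        C = np.sqrt(0.5 * (k1 ** 2 + k2 ** 2))
        return S, C

    def attribute_histogram(S, C, bins=16):
        # Low-level attribute summary: joint histogram over shape index and curvedness.
        h, _, _ = np.histogram2d(S.ravel(), C.ravel(), bins=bins)
        return h / h.sum()

    def similarity(h1, h2):
        # Bhattacharyya coefficient between two normalised attribute histograms.
        return float(np.sum(np.sqrt(h1 * h2)))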

    Terrain analysis using radar shape-from-shading

    This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
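
    The conical-envelope constraint on the surface normals can be sketched roughly as follows. This is a simplified, hedged illustration: it rotates a single normal onto a cone about the illuminant whose half-angle is set directly by the corrected brightness, whereas the paper widens or narrows the envelope according to the variance of the radar signal statistics.

    import numpy as np

    def project_onto_cone(n, light, brightness):
        # Rotate unit normal n, within the plane it spans with the illuminant, so
        # that its angle to the illuminant equals arccos(brightness); the cone axis
        # is the radar illuminant direction.
        l = np.asarray(light, float)
        l = l / np.linalg.norm(l)
        n = np.asarray(n, float)
        n = n / np.linalg.norm(n)
        target = np.arccos(np.clip(brightness, -1.0, 1.0))
        perp = n - (n @ l) * l                      # component of n perpendicular to l
        if np.linalg.norm(perp) < 1e-12:            # degenerate case: n parallel to l
            helper = np.array([1.0, 0.0, 0.0]) if abs(l[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
            perp = helper - (helper @ l) * l
        perp = perp / np.linalg.norm(perp)
        return np.cos(target) * l + np.sin(target) * perp

    n_hat = project_onto_cone([0.2, 0.1, 1.0], light=(0.0, 0.0, 1.0), brightness=0.9)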

    Shape-from-shading using the heat equation

    This paper offers two new directions to shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning our attention to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x-y, or azimuthal, component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and, hence, recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
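
    A much-simplified sketch of the first contribution is given below. It takes the z component of each normal from the normalised brightness, diffuses a scalar field with an explicit finite-difference heat step, and reads the azimuthal component off that field's gradient. The initialisation of the scalar field and the iteration parameters are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    def normals_via_heat_equation(image, n_iter=500, dt=0.2):
        # z component of the normal is the normalised image brightness (Lambert's
        # law on the plane perpendicular to the light source direction).
        E = image / image.max()
        phi = E.copy()                                   # scalar potential (assumed initialisation)
        for _ in range(n_iter):
            lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                   np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
            phi = phi + dt * lap                         # explicit finite-difference heat step
        gy, gx = np.gradient(phi)
        mag = np.hypot(gx, gy) + 1e-12
        r = np.sqrt(np.clip(1.0 - E ** 2, 0.0, 1.0))     # length of the azimuthal component
        n = np.dstack([r * gx / mag, r * gy / mag, E])
        return n                                         # unit normals by construction

    normals = normals_via_heat_equation(np.random.default_rng(0).random((64, 64)))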

    A graph-spectral approach to shape-from-shading

    In this paper, we explore how graph-spectral methods can be used to develop a new shape-from-shading algorithm. We characterize the field of surface normals using a weight matrix whose elements are computed from the sectional curvature between different image locations and penalize large changes in surface normal direction. Modeling the blocks of the weight matrix as distinct surface patches, we use a graph seriation method to find a surface integration path that maximizes the sum of curvature-dependent weights and that can be used for the purposes of height reconstruction. To smooth the reconstructed surface, we fit quadrics to the height data for each patch. The smoothed surface normal directions are updated ensuring compliance with Lambert's law. The processes of height recovery and surface normal adjustment are interleaved and iterated until a stable surface is obtained. We provide results on synthetic and real-world imagery.
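
    The weight matrix and the seriation-style ordering can be sketched roughly as follows. This is a hedged simplification: the weights here penalise changes in surface normal direction (a stand-in for the paper's sectional-curvature weights), and the leading eigenvector of the weight matrix supplies the serial integration path rather than the paper's full graph seriation method.

    import numpy as np

    def integration_order(normals, sigma=0.1):
        # Build a dense weight matrix over pixel sites that penalises large changes
        # in surface normal direction, then order the sites by the leading
        # eigenvector of that matrix (an approximation to graph seriation).
        flat = normals.reshape(-1, 3)
        diff = flat[:, None, :] - flat[None, :, :]
        W = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        _, vecs = np.linalg.eigh(W)
        return np.argsort(vecs[:, -1])        # serial path used for height integration

    # Example on a tiny needle map (the weight matrix is dense, so keep grids small).
    rng = np.random.default_rng(0)
    nm = rng.normal(size=(8, 8, 3))
    nm /= np.linalg.norm(nm, axis=2, keepdims=True)
    order = integration_order(nm)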

    New constraints on data-closeness and needle map consistency for shape-from-shading

    This paper makes two contributions to the problem of needle-map recovery using shape-from-shading. First, we provide a geometric update procedure which allows the image irradiance equation to be satisfied as a hard constraint. This not only improves the data closeness of the recovered needle-map, but also removes the necessity for extensive parameter tuning. Second, we exploit the improved ease of control of the new shape-from-shading process to investigate various types of needle-map consistency constraint. The first set of constraints is based on needle-map smoothness. The second avenue of investigation is to use curvature information to impose topographic constraints. Third, we explore ways in which the needle-map is recovered so as to be consistent with the image gradient field. In each case we explore a variety of robust error measures and consistency weighting schemes that can be used to impose the desired constraints on the recovered needle-map. We provide an experimental assessment of the new shape-from-shading framework on both real-world images and synthetic images with known ground truth surface normals. The main conclusion drawn from our analysis is that the data-closeness constraint improves the efficiency of shape-from-shading and that both the topographic and gradient consistency constraints improve the fidelity of the recovered needle-map.
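
    One of the robust consistency-weighting schemes mentioned above might look roughly like the sketch below: a single smoothing pass over the needle map in which each neighbour's contribution is down-weighted by a Huber-style kernel. The kernel width and the four-neighbour stencil are illustrative assumptions, and the data-closeness hard constraint (re-imposing the image irradiance equation) would be applied after each such pass.

    import numpy as np

    def huber_weight(r, k=0.1):
        # Robust weight: full weight for small residuals, down-weighted beyond k.
        a = np.abs(r)
        return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

    def robust_smooth(normals, k=0.1):
        # One robust smoothing pass: weighted average of the four-neighbour normals,
        # with weights given by the Huber kernel on the normal-direction residual.
        nbrs = [np.roll(normals, s, axis=a) for a in (0, 1) for s in (-1, 1)]
        acc = np.zeros_like(normals)
        wsum = np.zeros(normals.shape[:2])
        for nb in nbrs:
            r = np.linalg.norm(normals - nb, axis=2)
            w = huber_weight(r, k)
            acc += w[..., None] * nb
            wsum += w
        out = acc / wsum[..., None]
        return out / np.linalg.norm(out, axis=2, keepdims=True)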