21 research outputs found

    Shape-from-shading using the heat equation

    This paper offers two new directions for shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady-state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x-y, or azimuthal, component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and, hence, recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
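    As an illustrative sketch of the first contribution: a scalar potential can be relaxed toward the steady-state heat (Laplace) equation with Jacobi finite-difference iterations, and its gradient then gives the azimuthal component of the normals. The boundary handling and iteration count below are assumptions, not the authors' exact scheme:

```python
import numpy as np

def smooth_potential(brightness, n_iter=500):
    """Relax a scalar potential toward the steady-state heat (Laplace)
    equation with Jacobi iterations.  Using the image brightness as the
    initial/boundary values is an illustrative choice."""
    phi = brightness.astype(float).copy()
    for _ in range(n_iter):
        # 5-point finite-difference average of the four neighbours
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
    # azimuthal (x-y) component of the normal from the potential gradient
    gy, gx = np.gradient(phi)
    return phi, gx, gy

img = np.random.rand(32, 32)
phi, gx, gy = smooth_potential(img)
```

    Only interior pixels are updated, so the image border acts as a fixed Dirichlet boundary condition for the relaxation.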

    3D Modeling from Multiple Projections: Parallel-Beam to Helical Cone-Beam Trajectory

    Tomographic imaging is a technique for exploring a cross-section of an inspected object without destroying it. Normally, the input data, known as projections, are gathered by repeatedly radiating a coherent waveform through the object from a number of viewpoints and receiving it with an array of corresponding detectors on the opposite side. In this research, instead of radiographs, a series of photographs taken around the opaque object under ambient light serves as the projections. The proposed technique can be adapted to various beam geometries, including parallel-beam, cone-beam, and spiral cone-beam geometry. The outcome of the tomographic process is a stack of pseudo cross-sectional images. The interior of each cross-section is not authentic, but the edge or contour is valid.
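    The parallel-beam reconstruction step can be illustrated with a minimal unfiltered backprojection, which smears each 1-D projection back across the image plane at its acquisition angle. The use of `scipy.ndimage.rotate` and the normalization are illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles, size):
    """Simple unfiltered parallel-beam backprojection (sketch).
    sinogram: (n_angles, n_detectors) array; angles in degrees."""
    recon = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        # smear the 1-D projection across the image plane...
        smear = np.tile(proj, (size, 1))
        # ...and rotate it back to its acquisition angle
        recon += rotate(smear, theta, reshape=False, order=1)
    return recon / len(angles)
```

    Consistent with the abstract, an unfiltered reconstruction blurs interior values but preserves object contours; a ramp filter would be needed for quantitatively correct interiors.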

    Terrain analysis using radar shape-from-shading

    This paper develops a maximum a posteriori (MAP) probability estimation framework for shape-from-shading (SFS) from synthetic aperture radar (SAR) images. The aim is to use this method to reconstruct surface topography from a single radar image of relatively complex terrain. Our MAP framework makes explicit how the recovery of local surface orientation depends on the whereabouts of terrain edge features and the available radar reflectance information. To apply the resulting process to real-world radar data, we require probabilistic models for the appearance of terrain features and the relationship between the orientation of surface normals and the radar reflectance. We show that the SAR data can be modeled using a Rayleigh-Bessel distribution and use this distribution to develop a maximum likelihood algorithm for detecting and labeling terrain edge features. Moreover, we show how robust statistics can be used to estimate the characteristic parameters of this distribution. We also develop an empirical model for the SAR reflectance function. Using the reflectance model, we perform Lambertian correction so that a conventional SFS algorithm can be applied to the radar data. The initial surface normal direction is constrained to point in the direction of the nearest ridge or ravine feature. Each surface normal must fall within a conical envelope whose axis is in the direction of the radar illuminant. The extent of the envelope depends on the corrected radar reflectance and the variance of the radar signal statistics. We explore various ways of smoothing the field of surface normals using robust statistics. Finally, we show how to reconstruct the terrain surface from the smoothed field of surface normal vectors. The proposed algorithm is applied to various SAR data sets containing relatively complex terrain structure.
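    The conical-envelope constraint can be sketched as follows: a normal lying outside the cone about the illuminant direction is rotated back to the cone boundary. The spherical interpolation and parameter names below are assumptions for illustration:

```python
import numpy as np

def clamp_to_cone(n, axis, half_angle):
    """Constrain a unit surface normal to lie within a cone of the given
    half-angle (radians) about the illuminant axis, by rotating it toward
    the axis just enough to reach the cone boundary."""
    n = n / np.linalg.norm(n)
    axis = axis / np.linalg.norm(axis)
    cos_a = np.clip(np.dot(n, axis), -1.0, 1.0)
    angle = np.arccos(cos_a)
    if angle <= half_angle:
        return n                      # already inside the envelope
    # spherical interpolation from the axis toward n, stopping at the cone
    t = half_angle / angle
    s = np.sin(angle)
    out = (np.sin((1 - t) * angle) * axis + np.sin(t * angle) * n) / s
    return out / np.linalg.norm(out)
```

    In the paper's setting, the half-angle would be set per pixel from the corrected radar reflectance and the signal-variance statistics.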

    Patient-specific bronchoscope simulation with pq-space-based 2D/3D registration

    Objective: The use of patient-specific models for surgical simulation requires photorealistic rendering of 3D structure and surface properties. For bronchoscope simulation, this requires augmenting virtual bronchoscope views generated from 3D tomographic data with patient-specific bronchoscope videos. To facilitate matching of video images to the geometry extracted from 3D tomographic data, this paper presents a new pq-space-based 2D/3D registration method for camera pose estimation in bronchoscope tracking. Methods: The proposed technique involves the extraction of surface normals for each pixel of the video images by using a linear local shape-from-shading algorithm derived from the unique camera/lighting constraints of the endoscopes. The resultant pq-vectors are then matched to those of the 3D model by differentiation of the z-buffer. A similarity measure based on angular deviations of the pq-vectors is used to provide a robust 2D/3D registration framework. Localization of tissue deformation is considered by assessing the temporal variation of the pq-vectors between subsequent frames. Results: The accuracy of the proposed method was assessed by using an electromagnetic tracker and a specially constructed airway phantom. Preliminary in vivo validation of the proposed method was performed on a matched patient bronchoscope video sequence and 3D CT data. Comparison to existing intensity-based techniques was also made. Conclusion: The proposed method does not involve explicit feature extraction and is relatively immune to illumination changes. The temporal variation of the pq distribution also permits the identification of localized deformation, which offers an effective way of excluding such areas from the registration process.
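    A similarity measure based on angular deviations of pq-vectors can be sketched as the mean angle between the surface normals (-p, -q, 1) implied by the two gradient fields. This is an illustrative form, not necessarily the paper's exact measure:

```python
import numpy as np

def pq_angular_similarity(p1, q1, p2, q2):
    """Mean angular deviation (radians) between the surface normals
    (-p, -q, 1) of two pq fields; 0 means a perfect match."""
    n1 = np.stack([-p1, -q1, np.ones_like(p1)], axis=-1)
    n2 = np.stack([-p2, -q2, np.ones_like(p2)], axis=-1)
    n1 = n1 / np.linalg.norm(n1, axis=-1, keepdims=True)
    n2 = n2 / np.linalg.norm(n2, axis=-1, keepdims=True)
    cos = np.clip(np.sum(n1 * n2, axis=-1), -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))
```

    Minimizing this measure over candidate camera poses would drive the 2D/3D registration; per-pixel deviations could also be thresholded to flag deforming tissue between frames.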

    Stanford-ORB: A Real-World 3D Object Inverse Rendering Benchmark

    We introduce Stanford-ORB, a new real-world 3D Object inverse Rendering Benchmark. Recent advances in inverse rendering have enabled a wide range of real-world applications in 3D content generation, moving rapidly from research and commercial use cases to consumer devices. While the results continue to improve, there is no real-world benchmark that can quantitatively assess and compare the performance of various inverse rendering methods. Existing real-world datasets typically only consist of the shape and multi-view images of objects, which are not sufficient for evaluating the quality of material recovery and object relighting. Methods capable of recovering material and lighting often resort to synthetic data for quantitative evaluation, which on the other hand does not guarantee generalization to complex real-world environments. We introduce a new dataset of real-world objects captured under a variety of natural scenes with ground-truth 3D scans, multi-view images, and environment lighting. Using this dataset, we establish the first comprehensive real-world evaluation benchmark for object inverse rendering tasks from in-the-wild scenes, and compare the performance of various existing methods. Comment: NeurIPS 2023 Datasets and Benchmarks Track. The first two authors contributed equally to this work. Project page: https://stanfordorb.github.io
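    Benchmarks of this kind typically score rendered or relit outputs against ground-truth images with image-fidelity metrics such as PSNR; a minimal sketch follows (the metric choice is illustrative, not a claim about Stanford-ORB's exact protocol):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and a
    ground-truth image, both assumed to be in [0, max_val]."""
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```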

    A study on slope estimation of celestial-body surfaces using the Shape from Shading method

    Degree type: Master's. University of Tokyo (東京大学)

    Shape reconstruction from shading using linear approximation

    Shape from shading (SFS) deals with the recovery of 3D shape from a single monocular image. This problem was formally introduced by Horn in the early 1970s. Since then it has received considerable attention, and several efforts have been made to improve the shape recovery. In this thesis, we present a fast SFS algorithm, which is a purely local method and is highly parallelizable. In our approach, we first use discrete approximations for the surface gradients, p and q, using finite differences, then linearize the reflectance function in depth, Z(x, y), instead of in p and q. This method is simple and efficient, and yields better results for images with central illumination or low-angle illumination. Furthermore, our method is more general, and can be applied to either Lambertian surfaces or specular surfaces. The algorithm has been tested on several synthetic and real images of both Lambertian and specular surfaces, and good results have been obtained. However, our method assumes that the input image contains only a single object with uniform albedo values, which is commonly assumed in most SFS methods. Our algorithm performs poorly on images with nonuniform albedo values and produces incorrect shape for images containing objects with scale ambiguity, because those images violate the basic assumptions made by our SFS method. Therefore, we extended our method to images with nonuniform albedo values. We first estimate the albedo value for each pixel and segment the scene into regions with uniform albedo values. Then we adjust the intensity value of each pixel by dividing by the corresponding albedo value before applying our linear shape-from-shading method. This way our modified method is able to deal with nonuniform albedo values. When multiple objects differing only in scale are present in a scene, there may be points with the same surface orientation but different depth values. No existing SFS method can resolve this kind of ambiguity directly.
    We also present a new approach to deal with images containing multiple objects with scale ambiguity. A depth estimate is derived from patches using a minimum-downhill approach and re-aligned based on the background information to get the correct depth map. Experimental results are presented for several synthetic and real images. Finally, this thesis also investigates the problem of discrete approximation under perspective projection. The straightforward finite difference approximation for surface gradients used under orthographic projection is no longer applicable here, because the image position components are in fact functions of the depth. In this thesis, we provide a direct solution for the discrete approximation under perspective projection. The surface gradient is derived mathematically by relating the depth value of a surface point to the depth value of the corresponding image point. We also demonstrate how the new discrete approximation can be applied to a more complicated and realistic reflectance model for the SFS problem.
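    The core idea of linearizing the reflectance in depth can be sketched in the style of Tsai-Shah: with backward differences expressing p and q in terms of Z, the Lambertian image irradiance equation is solved by a damped Newton update per pixel. The light-direction parameters, damping, and iteration count below are illustrative assumptions, not the thesis's exact settings:

```python
import numpy as np

def linear_sfs(I, ps, qs, n_iter=20):
    """Linear SFS sketch: linearize the Lambertian reflectance in depth Z
    and apply a damped Newton update per pixel.  ps, qs are the light
    source gradients (illustrative parameter names)."""
    Z = np.zeros_like(I, dtype=float)
    norm_s = np.sqrt(1 + ps ** 2 + qs ** 2)
    for _ in range(n_iter):
        # backward finite differences for the surface gradients p, q
        p = Z - np.roll(Z, 1, axis=1)
        q = Z - np.roll(Z, 1, axis=0)
        denom = np.sqrt(1 + p ** 2 + q ** 2)
        R = (1 + p * ps + q * qs) / (norm_s * denom)
        f = I - R
        # d f / d Z via the chain rule through p and q (dp/dZ = dq/dZ = 1)
        dR = ((ps + qs) / (norm_s * denom)
              - (p + q) * (1 + p * ps + q * qs) / (norm_s * denom ** 3))
        df = -dR
        # damped Newton step; the clip keeps the sketch numerically stable
        Z = Z - np.clip(f / (df + 1e-8), -1.0, 1.0)
    return Z
```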

    Classification of skin hyper-pigmentation lesions with multi-spectral images

    According to clinical protocols, skin diseases are quantified by dermatologists throughout a treatment period, and a statistical test on these measures then allows the treatment efficacy to be evaluated. The first step of this process is to classify pathological areas of interest. This task is challenging due to the high variability of the images within one clinical data set. In this report, we first review algorithms that exist in the literature and adapt them to our problem. We then choose the most appropriate algorithm to design a classification strategy. In particular, we propose to use data reduction combined with an SVM to perform a first classification of the disease. We then associate the obtained classification map with a segmentation map in an "interactive classification tool" in order to strike a compromise between operator dependency and algorithm robustness.
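    The strategy of data reduction combined with an SVM can be sketched with a standard pipeline; the stand-in feature vectors, component count, and kernel below are illustrative assumptions, not the clinical data or tuned parameters:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative stand-in data: per-pixel multi-spectral feature vectors
# labelled healthy (0) / lesion (1); real features would come from the
# clinical multi-spectral images.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 12)), rng.normal(2, 1, (100, 12))])
y = np.array([0] * 100 + [1] * 100)

# Data reduction (PCA) followed by an SVM classifier, as in the
# strategy described above.
clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
acc = clf.score(X, y)
```

    The resulting per-pixel class predictions would form the classification map that is then combined with a segmentation map in the interactive tool.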