6 research outputs found

    A fast 3D reconstruction system with a low-cost camera accessory

    Photometric stereo is a three-dimensional (3D) imaging technique that uses multiple 2D images, captured from a fixed camera perspective under different illumination directions. Compared to other 3D imaging methods such as geometric modeling and 3D scanning, it offers a number of advantages, including a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory for a commercial digital single-lens reflex (DSLR) camera system that allows fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB-programmable controller board that sequentially controls the illumination. 3D images are derived for objects of varying geometric complexity, and the results show a typical height error of less than 3 mm for a 50 mm sized object.
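The reconstruction routine this abstract describes can be illustrated with classical Lambertian photometric stereo: with four images under four known light directions, the per-pixel normal and albedo follow from a linear least-squares solve. This is a minimal sketch, not the authors' code; the LED directions below are illustrative assumptions.

```python
import numpy as np

# Assumed unit light directions for the four LEDs around the lens.
L = np.array([
    [ 0.5,  0.0, 0.866],   # LED 1
    [-0.5,  0.0, 0.866],   # LED 2
    [ 0.0,  0.5, 0.866],   # LED 3
    [ 0.0, -0.5, 0.866],   # LED 4
])  # shape (4, 3)

def estimate_normals(images):
    """images: (4, H, W) grayscale intensities -> unit normals and albedo.

    Solves L @ g = I per pixel, where g = albedo * normal (Lambertian model).
    """
    h, w = images.shape[1:]
    I = images.reshape(4, -1)                   # (4, H*W)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, H*W), stacked solves
    albedo = np.linalg.norm(G, axis=0)
    n = (G / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return n, albedo.reshape(h, w)

# Synthetic sanity check: a flat surface with normal +z is recovered exactly.
n_true = np.array([0.0, 0.0, 1.0])
imgs = (L @ n_true).reshape(4, 1, 1) * np.ones((4, 8, 8))
n_est, alb = estimate_normals(imgs)
```

A height map like the one evaluated in the paper would then come from integrating these normals (e.g. a Poisson solve), which is the step the reported <3 mm error measures.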

    Sequential Estimation of Entire 3D Shape and Reflectance Property of an Object with Light Field Camera

    We present a method that sequentially estimates the entire 3D shape and the reflectance properties of an object. The novelty of our method is that it uses rays in both the shape and the reflectance estimation. When capturing a light field, we obtain both the rays and a set of images with slightly different viewpoints (sub-aperture images), so we can recover a dense 3D point cloud together with the reflected light. To estimate these quantities, we use a light field camera to capture multi-view light fields. First, we acquire a partial 3D point cloud by linear structure from motion for light field cameras (Johannsen et al., 2015). Second, we integrate the partial shapes into an entire 3D point cloud and generate a mesh from it. Third, we simultaneously estimate normal vectors and a light source from the mesh. Finally, we estimate the parameters of a reflection model using the mesh, the normal vectors, and the light source. We also evaluate the estimation on light field CG images.
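The normal-estimation step in the pipeline above can be sketched generically: given the integrated point cloud, a per-point normal is the direction of least variance among nearby points (PCA plane fitting). This is a stand-in illustration under that assumption, not the paper's mesh-based method.

```python
import numpy as np

def normals_from_points(points, k=8):
    """Estimate a unit normal per point by PCA over its k nearest neighbors.

    points: (N, 3) array. Returns (N, 3) unit normals (sign ambiguous).
    """
    normals = np.zeros_like(points)
    for i in range(len(points)):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(d)[:k]]           # k nearest, incl. the point
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T) # 3x3 local covariance
        w, v = np.linalg.eigh(cov)                 # eigenvalues ascending
        normals[i] = v[:, 0]                       # least-variance direction
    return normals

# Sanity check: for a planar cloud (z = 0), every normal should be +/- z.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
nrm = normals_from_points(pts)
```

The paper goes further by jointly estimating the light source with the normals, which this per-point geometric fit does not attempt.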

    What Camera Motion Reveals About Shape With Unknown BRDF

    Recent works have considered shape recovery for an object of unknown BRDF using light source or object motions. This paper proposes a theory that addresses the remaining problem of determining shape from the (small or differential) motion of the camera, for unknown isotropic BRDFs. Our theory derives a differential stereo relation that relates camera motion to the depth of a surface with unknown isotropic BRDF, which generalizes traditional Lambertian assumptions. Under orthographic projection, we show that shape may not be constrained by differential stereo for general isotropic BRDFs, but two motions suffice to yield an invariant for several restricted (still unknown) BRDFs exhibited by common materials. For the perspective case, we show that three differential motions suffice to yield the surface depth for unknown isotropic BRDF and unknown directional lighting, while additional constraints are obtained with restrictions on the BRDF or lighting. The limits imposed by our theory are intrinsic to the shape recovery problem and independent of the choice of reconstruction method. We outline with experiments how potential reconstruction methods may exploit our theory. We also illustrate trends shared by theories on shape from differential motion of light source, object or camera, relating the hardness of surface reconstruction to the complexity of the imaging setup.
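For context, the Lambertian baseline that this theory generalizes is the classical differential-motion constraint. Under perspective projection with focal length $f$, a small camera motion with translation $\boldsymbol{\tau}=(\tau_x,\tau_y,\tau_z)$ and rotation $\boldsymbol{\omega}=(\omega_x,\omega_y,\omega_z)$ induces, at a pixel $(x,y)$ imaging depth $z$, the motion field (sign conventions vary across texts):

```latex
u = \frac{x\,\tau_z - f\,\tau_x}{z}
    + \frac{xy}{f}\,\omega_x - \Bigl(f + \frac{x^2}{f}\Bigr)\omega_y + y\,\omega_z,
\qquad
v = \frac{y\,\tau_z - f\,\tau_y}{z}
    + \Bigl(f + \frac{y^2}{f}\Bigr)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z.
```

Lambertian brightness constancy, $E_x u + E_y v + E_t = 0$, then yields one equation linear in $1/z$ per differential motion. The paper's differential stereo relation extends this kind of constraint to unknown isotropic BRDFs, which is why multiple motions (two orthographic, three perspective) are needed to pin down depth.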

    Scene Reconstruction Beyond Structure-from-Motion and Multi-View Stereo

    Image-based 3D reconstruction has become a robust technology for recovering accurate and realistic models of real-world objects and scenes. A common pipeline for 3D reconstruction is to first apply Structure-from-Motion (SfM), which recovers relative poses for the input images and sparse geometry for the scene, and then apply Multi-view Stereo (MVS), which estimates a dense depthmap for each image. While this two-stage process is quite effective in many 3D modeling scenarios, there are limits to what can be reconstructed. This dissertation focuses on three particular scenarios where the SfM+MVS pipeline fails and introduces new approaches to accomplish each reconstruction task. First, I introduce a novel method to recover dense surface reconstructions of endoscopic video. In this setting, SfM can generally provide sparse surface structure, but the lack of surface texture as well as complex, changing illumination often causes MVS to fail. To overcome these difficulties, I introduce a method that utilizes SfM both to guide surface reflectance estimation and to regularize shading-based depth reconstruction. I also introduce models of reflectance and illumination that improve the final result. Second, I introduce an approach for augmenting 3D reconstructions from large-scale Internet photo-collections by recovering the 3D position of transient objects --- specifically, people --- in the input imagery. Since no two images can be assumed to capture the same person in the same location, the typical triangulation constraints enjoyed by SfM and MVS cannot be directly applied. I introduce an alternative method to approximately triangulate people who stood in similar locations, aided by a height distribution prior and visibility constraints provided by SfM. The scale of the scene, gravity direction, and per-person ground-surface normals are also recovered. 
Finally, I introduce the concept of using crowd-sourced imagery to create living 3D reconstructions --- visualizations of real places that include dynamic representations of transient objects. A key difficulty here is that SfM+MVS pipelines often poorly reconstruct ground surfaces given Internet images. To address this, I introduce a volumetric reconstruction approach that leverages scene scale and person placements. Crowd simulation is then employed to add virtual pedestrians to the space and bring the reconstruction "to life."
    Doctor of Philosophy
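The height-distribution prior mentioned above can be illustrated in a simplified form: if an upright, roughly fronto-parallel person of real height H projects to h_pix pixels under a pinhole camera with focal length f, then the depth is z = f * H / h_pix, and sampling H from a prior turns one detection into a depth distribution. This is a hedged sketch of the idea only; the dissertation's actual method also uses SfM visibility constraints and ground-surface normals.

```python
import numpy as np

def depth_from_height_prior(f, h_pix, rng, n_samples=10000,
                            mean_h=1.7, std_h=0.1):
    """Sample depths for one person detection from an assumed height prior.

    f: focal length in pixels; h_pix: detected person height in pixels.
    mean_h/std_h: assumed Gaussian prior over human height in meters.
    Returns (n_samples,) depth samples in meters via z = f * H / h_pix.
    """
    H = rng.normal(mean_h, std_h, n_samples)  # sampled real heights (m)
    return f * H / h_pix

rng = np.random.default_rng(1)
z = depth_from_height_prior(f=1000.0, h_pix=200.0, rng=rng)
```

With f = 1000 px and a 200 px detection, the samples concentrate around f * 1.7 / 200 = 8.5 m, and the spread of the prior propagates directly into depth uncertainty.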