
    PAPER: Special Section on Artificial Reality and Telexistence
    Calibration Free Virtual Display System Using Video Projector onto Real Object Surface

    SUMMARY In this paper, we propose a novel virtual display system for a real object surface using a video projector, so that the viewer feels as if digital images are printed on a real surface of arbitrary shape. The system consists of an uncalibrated camera and a video projector connected to the same PC, and creates a virtual object by rendering 2D content, stored beforehand, onto a white object in the real world via the projector. To register the rendered image to the object surface correctly, we regard the surface as a set of small rectangular regions and perform geometry registration by calculating homographies between the projector image plane and each divided region. This homography-based method avoids the camera and projector calibration required by conventional methods. The system performs the following two processes. First, we acquire the geometry of the object surface from images that capture color-coded checker patterns projected onto it, and generate a distortion-free rendered image by calculating the homographies. Once the projection image is generated, it can be updated if the object surface moves, or refined while the surface is stationary, by continuously observing the surface. Through this second process, the system continually improves display accuracy. In our implementation, we demonstrate the system under various conditions; for example, it can project digital content so that it appears printed on the paper surface of a real book. We expect this system to enable applications such as virtual museums and other industrial uses. key words: texture mapping onto arbitrary shaped surface, camera, video projector, homography, calibration free
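The homography step described above can be sketched with the standard DLT (direct linear transform) algorithm: four corner correspondences between one rectangular region of the projector image plane and its observed position on the surface determine a 3x3 homography. This is a minimal NumPy sketch, not the paper's implementation; all corner coordinates below are hypothetical.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src from >= 4 correspondences (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows to the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]                # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Map a 2D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Corners of one small rectangular region in the projector image plane and their
# observed positions on the object surface (hypothetical measured values).
proj_corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
surf_corners = [(10, 5), (115, 12), (108, 118), (2, 110)]
H = estimate_homography(proj_corners, surf_corners)
```

Estimating one such H per divided region, and pre-warping each region of the content through it, is what lets the system skip explicit camera and projector calibration.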

    PAPER: Special Section on Artificial Reality and Telexistence
    Registration of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction

    SUMMARY In this paper, a novel projection-based method is presented for registering partial 3D point clouds, acquired from a multi-view camera, to reconstruct an indoor scene in 3D. Conventional registration methods for partial 3D point clouds generally require high computational complexity and long registration times, and are not robust for point clouds with low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using both temporal and spatial properties: the former excludes 3D points with large temporal variation, while the latter fills holes by referring to the four neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and a two-step integer mapping is applied to search for correspondences using a modified KLT tracker. Fine registration is then carried out by minimizing distance errors over an adaptive search range. Finally, the final color is computed from the colors of corresponding points, and an indoor scene is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for low-precision 3D points. Furthermore, only a few color and depth images are needed to reconstruct an indoor scene. The generated model can be adopted for navigation in, as well as interaction with, a virtual environment. key words: projection-based registration, virtual environment generation, multi-view camera, scene reconstruction
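The key idea above, reducing 3D correspondence search to a 2D problem, rests on projecting camera-frame 3D points onto an image plane with a pinhole model. The sketch below shows that projection step only; the intrinsic matrix values are hypothetical, not taken from the paper.

```python
import numpy as np

def project_points(points, K):
    """Project Nx3 camera-frame 3D points onto the image plane.

    Pinhole model: [u*w, v*w, w]^T = K @ [X, Y, Z]^T, then divide by w (= Z).
    Returns an Nx2 array of pixel coordinates.
    """
    pts = np.asarray(points, dtype=float)
    uvw = (K @ pts.T).T               # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Hypothetical intrinsics: 500 px focal length, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Two sample 3D points in the camera frame (meters).
pts = np.array([[0.0, 0.0, 1.0],
                [0.1, -0.2, 2.0]])
uv = project_points(pts, K)
```

Once both partial clouds are projected onto a common plane this way, correspondence search becomes a 2D image-matching problem (handled in the paper with a modified KLT), which is what yields the reduced computational complexity.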