    Towards L-System Captioning for Tree Reconstruction

    This work proposes a novel concept for tree and plant reconstruction by directly inferring a Lindenmayer-System (L-System) word representation from image data in an image-captioning approach. We train a model end-to-end that translates given images into L-System words describing the displayed tree. To prove this concept, we demonstrate its applicability to 2D tree topologies. Transferred to real image data, this idea could lead to more efficient, accurate, and semantically meaningful tree and plant reconstruction without error-prone point-cloud extraction and the other processes usually employed in tree reconstruction. Furthermore, this approach bypasses the need for a predefined L-System grammar and enables species-specific L-System inference without biological knowledge.
    Comment: Eurographics 202
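The abstract above treats L-System words as the reconstruction target. As background, the conventional turtle interpretation that turns such a word into a 2D tree topology can be sketched as follows; the alphabet (`F`, `+`, `-`, `[`, `]`), step length, and branching angle are the textbook convention, not necessarily the paper's exact grammar:

```python
import math

def interpret_lsystem(word, step=1.0, angle=25.0):
    """Turtle interpretation of a 2D L-System word.

    F: draw a branch segment; +/-: turn left/right by `angle` degrees;
    [ / ]: push/pop the turtle state (position and heading).
    Returns a list of ((x0, y0), (x1, y1)) branch segments.
    """
    x, y, heading = 0.0, 0.0, 90.0  # start at the origin, pointing up
    stack, segments = [], []
    for symbol in word:
        if symbol == "F":
            nx = x + step * math.cos(math.radians(heading))
            ny = y + step * math.sin(math.radians(heading))
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif symbol == "+":
            heading += angle
        elif symbol == "-":
            heading -= angle
        elif symbol == "[":
            stack.append((x, y, heading))
        elif symbol == "]":
            x, y, heading = stack.pop()
    return segments
```

For example, `interpret_lsystem("F[+F][-F]")` yields three segments: a trunk and two branches that both start at the trunk's tip, which is the kind of 2D topology the captioning model would have to describe.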

    Pixel2point: 3D object reconstruction from a single image using CNN and initial sphere

    3D reconstruction from a single image has many useful applications. However, it is a challenging, ill-posed problem, as many candidate shapes are consistent with a single view. In this paper, we propose a simple yet powerful CNN model that generates a point cloud of an object from a single image. 3D data can be represented in different ways; point clouds have proven to be a common and simple representation. The proposed model was trained end-to-end on synthetic data with 3D supervision. It takes a single image of an object and generates a point cloud with a fixed number of points. An initial spherical point cloud is used to improve the generated point cloud. The proposed model was tested on synthetic and real data. Qualitative evaluations demonstrate that it generates point clouds very close to the ground truth. The initial point cloud also improves the final results by distributing points evenly over the object surface. Furthermore, the proposed method outperforms the state of the art on this problem, both quantitatively and qualitatively, on synthetic and real images, and generalizes well to new, unseen images and scenes.
    TU Berlin, Open-Access-Mittel – 202
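The abstract describes deforming an evenly distributed initial sphere with a fixed number of points. A minimal sketch of that idea, assuming a Fibonacci-lattice sphere for the even distribution and hypothetical per-point offsets standing in for the CNN's actual output:

```python
import math

def initial_sphere(n_points=2048, radius=0.5):
    """Fibonacci-lattice sphere: a roughly uniform, fixed-size point
    cloud used as the deformable starting shape (sizes are assumptions)."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden-angle increment
    points = []
    for i in range(n_points):
        z = 1.0 - 2.0 * (i + 0.5) / n_points    # evenly spaced in z
        r = math.sqrt(1.0 - z * z)              # ring radius at that z
        phi = golden * i
        points.append((radius * r * math.cos(phi),
                       radius * r * math.sin(phi),
                       radius * z))
    return points

def deform(sphere, offsets):
    """Add one predicted 3D offset per sphere point, moving the evenly
    spread points toward the object surface (hypothetical interface;
    the paper's CNN produces the offsets from the input image)."""
    return [(x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(sphere, offsets)]
```

Because the lattice spreads points uniformly before any deformation, the final cloud inherits that even coverage of the surface, which matches the improvement the abstract attributes to the initial sphere.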