10 research outputs found

    TRIDIMENSIONAL (3D) MODELING OF TRUNKS AND COMMERCIAL LOGS OF Tectona grandis L.f.

    The development and application of technological innovations are among the main focuses of the productive sectors. One innovation that has been gaining prominence is three-dimensional (3D) modeling applied to forest inventory and planning activities. This study assessed the accuracy of three-dimensional scanning of Tectona grandis L.f. trunks by digital photogrammetry, using photos taken with a smartphone. The study was carried out in a 46.54 ha Teak plantation, using systematic sampling by plots. Based on the diametric distribution of the stand, thirty trees were selected for three-dimensional scanning by the close-range photogrammetry technique. After three-dimensional scanning, each tree was cubed by the Smalian method and sectioned into 2.35 m long logs whose volumes were measured with a xylometer. The thirty trees yielded 121 logs measured by the xylometer. Using three-dimensional modeling, it was possible to model and measure the volume of 71 logs in height classes A (0.10 to 2.45 m), B (2.45 to 4.80 m), and C (4.80 to 7.15 m). The upper portions of the trunks could not be modeled for all trees because of the quality of the point cloud, and logs that could not be modeled over their entire 2.35 m length were discarded. The cubing methods were compared using the paired t-test. The lower logs (0.10 m to 2.45 m) could thus be modeled more accurately than with the traditional Smalian method with a one-meter interval between sections.
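    The Smalian cubing and the paired comparison described above can be sketched as follows; the end diameters in the example are hypothetical illustration values, not data from the study:

```python
import math

def smalian_volume(d1_cm, d2_cm, length_m):
    """Smalian's formula: log volume as the mean of the two end
    cross-sectional areas times the log length (result in m^3)."""
    g1 = math.pi * (d1_cm / 100.0) ** 2 / 4.0  # area at one end (m^2)
    g2 = math.pi * (d2_cm / 100.0) ** 2 / 4.0  # area at the other end (m^2)
    return (g1 + g2) / 2.0 * length_m

def paired_t_statistic(x, y):
    """t statistic for paired samples: mean of the pairwise
    differences divided by its standard error."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((di - mean) ** 2 for di in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# A 2.35 m log with end diameters of 28 cm and 24 cm (hypothetical values):
v = smalian_volume(28, 24, 2.35)
print(round(v, 4))  # → 0.1255
```

    In the study's setting, the paired t statistic would be computed over the per-log volume pairs (3D model vs. xylometer) rather than the toy numbers above.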

    Determining plane-sweep sampling points in image space using the cross-ratio for image-based depth estimation

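    The title refers to the cross-ratio, the classical projective invariant of four collinear points, which is preserved under perspective projection and can therefore relate sampling positions in image space to depth planes. A minimal sketch of the invariant itself (the point positions are arbitrary examples, and this does not reproduce the paper's sampling scheme):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given as
    scalar positions along their common line: (ac/bc) / (ad/bd)."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

# Equally spaced points have cross-ratio 4/3:
print(cross_ratio(0, 1, 2, 3))  # → 1.3333333333333333 (i.e. 4/3)
```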

    CURVEFUSION: Reconstructing Thin Structures from RGBD Sequences

    We introduce CurveFusion, the first approach for high-quality scanning of thin structures at interactive rates using a handheld RGBD camera. Thin filament-like structures are mathematically just 1D curves embedded in R^3, and integration-based reconstruction works best when depth sequences (from the thin structure parts) are fused using the object's (unknown) curve skeleton. Thus, using the complementary but noisy color and depth channels, CurveFusion first automatically identifies point samples on potential thin structures and groups them into bundles, each being a group of a fixed number of aligned consecutive frames. The algorithm then extracts per-bundle skeleton curves using L1 axes, and aligns and iteratively merges the L1 segments from all the bundles to form the final complete curve skeleton. Thus, unlike previous methods, reconstruction happens via integration along a data-dependent fusion primitive, i.e., the extracted curve skeleton. We extensively evaluate CurveFusion on a range of challenging examples, with different scanner and calibration settings, and present high-fidelity thin structure reconstructions previously not possible from raw RGBD sequences.
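    The per-bundle L1 axes rest on the idea of a robust L1 (geometric) median of noisy point samples. A minimal sketch of that underlying computation via the standard Weiszfeld iteration, for 2D points; this illustrates the robust-centre idea only, not CurveFusion's actual skeleton extraction:

```python
import math

def geometric_median(points, iters=100):
    """L1 (geometric) median of 2D points by Weiszfeld iteration:
    repeatedly take the inverse-distance-weighted mean, starting
    from the centroid."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wsum = wx = wy = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y)
            if d < 1e-12:          # iterate landed on a data point
                return (px, py)
            w = 1.0 / d            # inverse-distance weight
            wsum += w
            wx += w * px
            wy += w * py
        x, y = wx / wsum, wy / wsum
    return (x, y)

# Unlike the mean, the L1 median resists outliers: three samples near
# the origin plus one far away still give a centre near the cluster.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (50.0, 50.0)]
print(geometric_median(pts))  # stays close to the origin cluster
```

    This robustness is what makes L1-type axes suitable for skeletons of noisy depth samples on thin structures.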

    Localisation and tracking of stationary users for extended reality

    In this thesis, we investigate the topics of localisation and tracking in the context of Extended Reality. In many on-site or outdoor Augmented Reality (AR) applications, users are standing or sitting in one place and performing mostly rotational movements, i.e., they are stationary. This type of stationary motion also occurs in Virtual Reality (VR) applications such as panorama capture by moving a camera in a circle. Both applications require us to track the motion of a camera in potentially very large and open environments. State-of-the-art methods such as Structure-from-Motion (SfM) and Simultaneous Localisation and Mapping (SLAM) tend to rely on scene reconstruction from significant translational motion in order to compute camera positions. This can often lead to failure in application scenarios such as tracking for seated sport spectators, or stereo panorama capture, where the translational movement is small compared to the scale of the environment. To begin with, we investigate the topic of localisation, as it is key to providing global context for many stationary applications. To achieve this, we capture our own datasets in a variety of large open spaces, including two sports stadia. We then develop and investigate localisation techniques in these stadia using a variety of state-of-the-art approaches. We cover geometry-based methods to handle dynamic aspects of a stadium environment, as well as appearance-based methods, and compare them to a state-of-the-art SfM system to identify the most applicable methods for server-based and on-device localisation. Recent work in SfM has shown that the type of stationary motion that we target can be reliably estimated by applying spherical constraints to the pose estimation. In this thesis, we extend these concepts into a real-time keyframe-based SLAM system for the purposes of AR, and develop a unique data structure for simplifying keyframe selection. We show that our constrained approach can track more robustly in these challenging stationary scenarios than state-of-the-art SLAM, through both synthetic and real-data tests. In the application of capturing stereo panoramas for VR, this thesis demonstrates the unsuitability of standard SfM techniques for reconstructing these circular videos. We apply and extend recent research in spherically constrained SfM to creating stereo panoramas and compare this with state-of-the-art general SfM in a technical evaluation. With a user study, we show that the motion requirements of our SfM approach are similar to the natural motion of users, and that a constrained SfM approach is sufficient for providing stereoscopic effects when viewing the panoramas in VR.
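    The effect of such a spherical constraint can be sketched in miniature: the camera centre stops being a free parameter and is determined by the rotation alone, since it must lie on a sphere of fixed radius with a radially aligned optical axis. The radial-axis convention and the yaw/pitch parameterisation below are illustrative assumptions, not the thesis's actual formulation:

```python
import math

def rot_yaw_pitch(yaw, pitch):
    """3x3 rotation built from yaw (about y) followed by pitch (about x)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    ry = [[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]
    rx = [[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]]
    # matrix product rx @ ry
    return [[sum(rx[i][k] * ry[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def spherical_camera_centre(R, radius):
    """Under the spherical constraint the centre is not estimated
    separately: it is R^T @ [0, 0, 1] (the rotated optical axis)
    scaled to the sphere radius, i.e. the third row of R."""
    return [radius * R[2][0], radius * R[2][1], radius * R[2][2]]

# Identity rotation puts the camera at (0, 0, radius); any other rotation
# moves it along the sphere, never off it.
print(spherical_camera_centre(rot_yaw_pitch(0.0, 0.0), 2.0))
```

    This reduction from six degrees of freedom to three is what makes purely rotational (stationary) motion tractable where general SfM/SLAM degenerates.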