6 research outputs found

    Sensor Fusion of Vision-RTK and Azure Kinect for Outdoor AR Applications

    No full text
    Augmented Reality (AR) has recently gained traction in the research world, thanks to the tremendous progress made in computer vision and artificial intelligence. At the moment, AR is mostly restricted to indoor applications, mainly due to the lack of structures and features in outdoor environments, which are essential for computer vision algorithms to keep track of the user's orientation and position. Therefore, in this master thesis, we designed a new AR platform built from cutting-edge sensors, which could pave the way for the conquest of the outdoor AR world. The novelty of this platform is the fusion of two Global Navigation Satellite System Real-Time Kinematic (GNSS RTK) receivers and one Inertial Measurement Unit (IMU), which have never before been assembled in such a compact and lightweight format. This sensor fusion allows the pose of the platform to be retrieved with an accuracy of 1 centimetre in position and 1 degree in orientation. Based on these values and a Matlab script, the attachment of a laser pointer to the platform is simulated and the projection error of AR content on a planar surface is obtained. At a maximal range of 5 metres, the laser pointer is guaranteed to hit a planar surface with a width of at least 10 cm, which indicates a good suitability of the platform for outdoor AR applications. It could be particularly useful in civil engineering on construction sites, where it could be used to visualise buried utility services, e.g. telephone cables or water pipeline networks.
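    The reported 10 cm target width follows directly from the stated pose accuracy: at range r, an orientation error of θ displaces the pointed ray laterally by roughly r·tan(θ), on top of the position error. The abstract's own Matlab simulation is not available, but the geometry can be sketched as follows (the function name and the worst-case additive error model are illustrative assumptions, not the thesis's actual method):

```python
import math

def projection_error(range_m, pos_err_m, ang_err_deg):
    """Worst-case lateral error of a pointed ray on a planar target:
    position uncertainty plus the lateral offset that the orientation
    error induces at the given range."""
    return pos_err_m + range_m * math.tan(math.radians(ang_err_deg))

# 1 cm position accuracy, 1 degree orientation accuracy, 5 m range
err = projection_error(5.0, 0.01, 1.0)
print(f"{err * 100:.1f} cm")  # about 9.7 cm, within the stated 10 cm target width
```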

    Feasibility study of using virtual reality for interactive and immersive semantic segmentation of single tree stems

    No full text
    Forest digitisation is one of the next major challenges to be tackled in the forestry domain. As a consequence of tremendous advances in 3D scanning technologies, broad areas of forest can be mapped in 3D dramatically faster than 20 years ago. Consequently, capturing 3D forest point clouds with 3D sensing technologies – such as lidar – is becoming predominant in the field of forestry. However, the processing of 3D point clouds to bring semantics to 3D forestry data – e.g. by linking them with ecological values – has not seen similar advancements. Therefore, in this paper we consider a novel approach based on the use of VR (Virtual Reality) as a potential solution for deriving biodiversity information from 3D point clouds acquired in the field. That is, we developed a VR labelling application to visualise forest point clouds and to perform the segmentation of several biodiversity components on tree stems, e.g. mosses, lichens and bark pockets. Furthermore, the VR-segmented point cloud was analysed with standard accuracy and precision metrics. Namely, the proposed VR application achieved an IoU (Intersection over Union) of 98.74% for the segmentation of bark pockets and 93.71% for the moss and lichen classes. These encouraging results reinforce the potential of the proposed VR labelling method for other purposes in the future, for example the creation of AI (Artificial Intelligence) training datasets.
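    The IoU metric used to score the VR segmentation is a standard one: the number of points labelled with a class by both the VR user and the ground truth, divided by the number labelled by either. A minimal sketch over sets of point indices (the function and toy data are illustrative, not the paper's evaluation code):

```python
def iou(pred, truth):
    """Intersection over Union between two sets of point indices
    assigned to the same class (e.g. 'moss')."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    return len(pred & truth) / len(pred | truth)

# toy example: points 0-9 truly carry moss; the VR labeller marked 1-10
print(round(iou(range(1, 11), range(0, 10)), 3))  # 9 shared / 11 total = 0.818
```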

    Evaluation of Azure Kinect Derived Point Clouds to Determine the Presence of Microhabitats on Single Trees Based on the Swiss Standard Parameters

    No full text
    In the last few years, a number of low-cost 3D scanning sensors have been developed to reconstruct the real-world environment. These sensors were primarily designed for indoor use, making them highly unpredictable in terms of performance and accuracy when used outdoors. The Azure Kinect belongs to this category of low-cost 3D scanners and has been successfully employed in outdoor applications. In addition, this sensor possesses features such as portability and live visualisation during data acquisition that make it extremely interesting in the field of forestry. In the context of forest inventory, these advantages would make the acquisition of tree parameters considerably more efficient. In this paper, a protocol was established for the acquisition of 3D data in forests using the Azure Kinect. The resulting point cloud was compared against photogrammetry. Results demonstrated that the Azure Kinect point cloud was of suitable quality for extracting tree parameters such as diameter at breast height (DBH, with a standard deviation of 2.2 cm). Furthermore, the quality of the visual and geometric information of the point cloud was evaluated in terms of its feasibility for identifying microhabitats. Microhabitats represent valuable information on forest biodiversity and are included in Swiss forest inventory measurements. In total, five different microhabitats were identified in the Azure Kinect point cloud. The measurements were therefore comparable to sensors such as terrestrial laser scanning and photogrammetry. We argue that the Azure Kinect point cloud can efficiently identify certain types of microhabitats, and this study presents a first approach to its application in forest inventories.
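    DBH is conventionally measured at 1.3 m above ground and can be estimated from a point cloud by taking a thin horizontal slice of the stem at that height and fitting a circle to it. A deliberately crude sketch of the idea, using twice the mean radial distance from the slice centroid (the paper's actual extraction method is not specified here; this assumes a roughly circular, fully scanned stem):

```python
import math

def estimate_dbh(slice_xy):
    """Crude DBH estimate from a horizontal stem slice at breast height:
    twice the mean radial distance of the slice points from their centroid."""
    n = len(slice_xy)
    cx = sum(x for x, _ in slice_xy) / n
    cy = sum(y for _, y in slice_xy) / n
    mean_r = sum(math.hypot(x - cx, y - cy) for x, y in slice_xy) / n
    return 2.0 * mean_r

# synthetic stem of diameter 0.30 m, sampled every 10 degrees
pts = [(0.15 * math.cos(math.radians(a)), 0.15 * math.sin(math.radians(a)))
       for a in range(0, 360, 10)]
print(f"{estimate_dbh(pts):.2f} m")  # 0.30 m
```

A least-squares circle fit would be more robust to partial scans and noise, but the slice-and-fit principle is the same.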

    SemSpray: Virtual reality as-is semantic information labeling tool for 3D spatial data

    No full text
    Capturing the as-is status of buildings in the form of 3D spatial data has become more accurate and efficient, but extracting as-is information from that data has not seen similar advancements. State-of-the-art practice requires experts to manually interact with the spatial data in a laborious and time-consuming process. We propose Semantic Spray (SemSpray), a Virtual Reality (VR) application that provides users with intuitive tools to produce semantic information on as-is 3D spatial data of buildings. The goal is to perform this task accurately and more efficiently by allowing users to interact with the data at different scales.

    Evaluating state-of-the-art 3D scanning methods for stem-level biodiversity inventories in forests

    No full text
    Monitoring biodiversity in forests is crucial for their management and preservation, especially in light of increasing climatic disturbances. However, traditional methods of surveying forest biodiversity, such as the inventory of tree-related microhabitats (TreMs), are costly and time-consuming. For many years, terrestrial laser scanning (TLS) was the main method for producing highly accurate 3D models of forests; with recent advancements in 3D scanning technologies, however, there are now numerous alternatives on the market. The aim of this study was to evaluate the performance of four different 3D data acquisition methods, i.e. close-range photogrammetry (CRP), fish-eye photogrammetry (FEP), mobile laser scanning (MLS), and mixed reality depth camera (MRDC), in terms of accuracy and ability to measure biodiversity (TreMs) at tree-stem level, in comparison to TLS. Analysis was performed based on geometric accuracy and point neighbourhood relevance. CRP was the most accurate alternative to TLS for TreM measurement with a median error of 1.5 cm, while FEP provided a good balance between accuracy (median error 1.4 cm) and speed of data collection. Although MLS showed promising results (median error 1.6 cm), noise in the point cloud limited its ability to identify TreMs. MRDC, on the other hand, had lower quality (median error 3.6 cm) and lower point density, making it unsuitable for TreM segmentation. Nevertheless, the study demonstrated the feasibility of augmenting the real world with virtual content at single-tree-stem level using mixed reality technology. Overall, the 3D scanning technologies presented hold great promise for recording the evolution of biodiversity at stem level.
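    The median errors reported per sensor are typically obtained with a cloud-to-cloud comparison: for every point of the evaluated cloud, take the distance to its nearest neighbour in the reference (TLS) cloud, then report the median. A brute-force sketch of that metric (the exact evaluation pipeline of the study is not given here; real clouds would use a spatial index such as a k-d tree instead of this O(n·m) loop):

```python
import math
import statistics

def median_c2c_error(eval_cloud, reference_cloud):
    """Median cloud-to-cloud error: for each evaluated point, the distance
    to its nearest neighbour in the reference cloud (brute force,
    adequate for small illustrative clouds)."""
    def nn_dist(p):
        return min(math.dist(p, q) for q in reference_cloud)
    return statistics.median(nn_dist(p) for p in eval_cloud)

# toy clouds: the evaluated cloud is the reference shifted by 1.5 cm
ref = [(x * 0.1, 0.0, 0.0) for x in range(10)]
ev = [(x * 0.1 + 0.015, 0.0, 0.0) for x in range(10)]
print(f"{median_c2c_error(ev, ref) * 100:.1f} cm")  # 1.5 cm
```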

    A review of software solutions to process ground-based point clouds in forest applications

    No full text
    Purpose of Review: In recent years, the use of 3D point clouds in silviculture and forest ecology has seen a large increase in interest. With the development of novel 3D capture technologies, such as laser scanning, an increasing number of algorithms have been developed in parallel to process 3D point cloud data into more tangible results for forestry applications. From this variety of available algorithms, it can be challenging for users to decide which to apply to fulfil their goals best. Here, we present an extensive overview of point cloud acquisition and processing tools as well as their outputs for precision forestry. We then provide a comprehensive database of 24 algorithms for processing forest point clouds obtained using close-range techniques, specifically ground-based platforms.

    Recent Findings: Of the 24 solutions identified, 20 are open-source, two are free software, and the remaining two are commercial products. The compiled database of solutions, along with the corresponding technical guides on installation and general use, is accessible on a web-based platform as part of the COST Action 3DForEcoTech. The database may serve the community as a single source of information to select a specific software/algorithm that works for their requirements.

    Summary: We conclude that the development of various algorithms for processing point clouds offers powerful tools that can considerably impact forest inventories in the future, although we note the necessity of creating a standardisation paradigm.