
    3D Modeling of a Complex Building: From Multi-View Image Fusion to Google Earth Publication

    This paper presents a pipeline for producing a 3D model of a complex building by integrating UAV and terrestrial images, and for adapting the model for publication to Google Earth in an interactive modality, so as to make better models available for visualization and use. The main steps of the procedure are the optimization of the UAV flight, the integration of the UAV and ground-level images, and the optimization of the model to be published to GE. The case study is the Eremo di Santa Rosalia convent in Sicily, a building with several staggered elevations located in the hills of the hinterland; for this site, the online platforms only indicate the position on Google Maps (GM) and Google Earth (GE) with a photo from above and a non-urban road whose GM path does not correspond to the GE photo. The process highlights the integration of the models and showcases a workflow for the publication of the combined 3D model to the GE platform.
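
    A minimal sketch of one way to carry out the model-integration step described above: aligning a terrestrial point cloud to a UAV-derived point cloud with point-to-plane ICP in Open3D. The file names, voxel size, and distance threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
import open3d as o3d

# Hypothetical exports of the two photogrammetric blocks (file names are placeholders).
uav = o3d.io.read_point_cloud("uav_block.ply")
terrestrial = o3d.io.read_point_cloud("terrestrial_block.ply")

# Downsample and estimate normals so point-to-plane ICP has surfaces to work with.
voxel = 0.05  # metres, assumed
uav_d = uav.voxel_down_sample(voxel)
ter_d = terrestrial.voxel_down_sample(voxel)
for pc in (uav_d, ter_d):
    pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))

# Refine a coarse initial alignment (identity here) with point-to-plane ICP.
icp = o3d.pipelines.registration.registration_icp(
    ter_d, uav_d, 0.2, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

terrestrial.transform(icp.transformation)   # bring the terrestrial model into the UAV frame
combined = uav + terrestrial                # merged cloud for meshing and later export
print("ICP fitness:", icp.fitness)
```

    The merged cloud would then be meshed, simplified, and exported (e.g. as a textured COLLADA model wrapped in a KMZ file) for publication to Google Earth.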

    High-Throughput System for the Early Quantification of Major Architectural Traits in Olive Breeding Trials Using UAV Images and OBIA Techniques

    The need for olive farm modernization has encouraged research into more efficient crop management strategies through cross-breeding programs that release new olive cultivars more suitable for mechanization and use in intensive orchards, with high-quality production and resistance to biotic and abiotic stresses. The advancement of breeding programs is hampered by the lack of efficient phenotyping methods to quickly and accurately acquire crop traits such as morphological attributes (tree vigor and vegetative growth habits), which are key to identifying desirable genotypes as early as possible. In this context, a UAV-based high-throughput system for olive breeding program applications was developed to extract tree traits in large-scale phenotyping studies under field conditions. The system consisted of UAV flight configurations, in terms of flight altitude and image overlaps, and a novel, automatic, and accurate object-based image analysis (OBIA) algorithm based on point clouds, which was evaluated in two experimental trials in the framework of a table olive breeding program, with the aim of determining the earliest date at which tree architectural traits can be suitably quantified. Two training systems (intensive and hedgerow) were evaluated at two very early stages of tree growth: 15 and 27 months after planting. Digital Terrain Models (DTMs) were automatically and accurately generated by the algorithm, and every olive tree was identified, independently of the training system and tree age. The architectural traits, especially tree height and crown area, were estimated with high accuracy in the second flight campaign, i.e., 27 months after planting. Differences in the quality of 3D crown reconstruction were found for the growth patterns derived from each training system. These key phenotyping traits could be used in several olive breeding programs, as well as to address some agronomical goals. In addition, the system is cost- and time-optimized, so that the requested architectural traits can be provided on the same day as the UAV flights. This high-throughput system may help solve the current bottleneck in plant phenotyping of "linking genotype and phenotype," considered a major challenge for crop research in the 21st century, and bring forward the crucial time of decision making for breeders.
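
    As a rough illustration of the kind of trait extraction described above, the sketch below derives per-tree height and crown area from a canopy height model (CHM = DSM − DTM). The array inputs, crown threshold, and pixel size are assumptions for illustration, not the OBIA algorithm from the paper.

```python
import numpy as np
from scipy import ndimage

def tree_traits(dsm, dtm, pixel_size=0.03, min_height=0.3):
    """Per-tree (height, crown area) from co-registered DSM/DTM grids.

    pixel_size (m) and the 0.3 m crown threshold are assumed values.
    """
    chm = dsm - dtm                                     # canopy height model
    labels, n_trees = ndimage.label(chm > min_height)   # connected components = candidate crowns
    traits = []
    for tree_id in range(1, n_trees + 1):
        mask = labels == tree_id
        height = float(chm[mask].max())                     # tree height: max CHM within the crown
        crown_area = float(mask.sum()) * pixel_size ** 2    # crown area in m^2
        traits.append((height, crown_area))
    return traits
```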

    Mapping and classification of ecologically sensitive marine habitats using unmanned aerial vehicle (UAV) imagery and object-based image analysis (OBIA)

    Nowadays, emerging technologies, such as long-range transmitters, increasingly miniaturized positioning components, and enhanced imaging sensors, have led to an upsurge in the availability of new ecological applications for remote sensing based on unmanned aerial vehicles (UAVs), sometimes referred to as “drones”. In fact, structure-from-motion (SfM) photogrammetry coupled with imagery acquired by UAVs offers a rapid and inexpensive tool to produce high-resolution orthomosaics, giving ecologists a new way to perform responsive, timely, and cost-effective monitoring of ecological processes. Here, we adopted a lightweight quadcopter as an aerial survey tool and an object-based image analysis (OBIA) workflow to demonstrate the strength of such methods in producing very high spatial resolution maps of sensitive marine habitats. Three different coastal environments were mapped using the autonomous flight capability of a lightweight UAV equipped with a fully stabilized consumer-grade RGB digital camera. In particular, we investigated a Posidonia oceanica seagrass meadow, a rocky coast with nurseries for juvenile fish, and two sandy areas hosting biogenic reefs of Sabellaria alveolata. We adopted, for the first time, UAV-based raster thematic maps of these key coastal habitats, produced after OBIA classification, as a new method for fine-scale, low-cost, and time-saving characterization of sensitive marine environments, which may lead to more effective and efficient monitoring and management of natural resources.
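
    A minimal sketch of an object-based classification step in the spirit of the workflow above: segment the orthomosaic into image objects, compute per-object spectral statistics, and classify the objects into habitat classes. SLIC superpixels and a random forest stand in here for the actual OBIA segmentation and rule sets; the training features and labels are placeholders.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def classify_orthomosaic(rgb, train_feats, train_labels):
    """rgb: HxWx3 orthomosaic; returns an HxW array of predicted habitat classes."""
    segments = slic(rgb, n_segments=2000, compactness=10, start_label=1)  # image objects
    uniq = np.unique(segments)
    # Per-object features: mean and standard deviation of each band.
    feats = np.array([
        np.r_[rgb[segments == s].mean(axis=0), rgb[segments == s].std(axis=0)]
        for s in uniq
    ])
    # train_feats/train_labels would come from manually digitised reference objects.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(train_feats, train_labels)
    object_classes = clf.predict(feats)
    return object_classes[np.searchsorted(uniq, segments)]  # map object labels back onto the raster
```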

    Context-based urban terrain reconstruction from UAV videos for geoinformation applications

    Urban terrain reconstruction has many applications in areas such as civil engineering, urban planning, surveillance, and defense research. The need to cover ad-hoc demand and perform close-range urban terrain reconstruction with miniaturized and relatively inexpensive sensor platforms is therefore constantly growing. Using (miniaturized) unmanned aerial vehicles, (M)UAVs, represents one of the most attractive alternatives to conventional large-scale aerial imagery. In this paper we cover a four-step procedure for obtaining georeferenced 3D urban models from video sequences. The four steps of the procedure - orientation, dense reconstruction, urban terrain modeling, and geo-referencing - are robust, straightforward, and nearly fully automatic. The last two steps - namely, urban terrain modeling from almost-nadir videos and co-registration of models - represent the main contribution of this work and are therefore covered in more detail. The essential substeps of the third step include digital terrain model (DTM) extraction, segregation of buildings from vegetation, and instantiation of building and tree models. The last step is subdivided into quasi-intrasensorial registration of the Euclidean reconstructions and intersensorial registration with a geo-referenced orthophoto. Finally, we present reconstruction results from a real dataset and outline ideas for future work.
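
    A minimal sketch of the intersensorial geo-referencing idea: estimating a 2D similarity transform (scale, rotation, translation) that maps ground coordinates of the Euclidean reconstruction onto matched points in a geo-referenced orthophoto, here via a least-squares (Umeyama-style) fit. The point correspondences below are hypothetical; in practice they would come from matching between the model's orthographic view and the orthophoto.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 2D similarity transform mapping src (Nx2) onto dst (Nx2)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Hypothetical model/orthophoto correspondences (local metres vs. map coordinates).
model_pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0]])
ortho_pts = np.array([[398100.0, 5432000.0], [398110.0, 5432000.0], [398100.0, 5432015.0]])
s, R, t = similarity_transform(model_pts, ortho_pts)
georeferenced = (s * (R @ model_pts.T)).T + t   # model points in map coordinates
```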

    Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery

    In this study, we present a large-scale earth surface reconstruction pipeline for linear-array charge-coupled device (CCD) satellite imagery. While mainstream satellite image-based reconstruction approaches perform exceptionally well, the rational function model (RFM) is subject to several limitations. For example, the RFM has no rigorous physical interpretation and differs significantly from the pinhole imaging model; hence, it cannot be directly applied to learning-based 3D reconstruction networks or to newer reconstruction pipelines in computer vision. We therefore introduce a method that makes the RFM equivalent to the pinhole camera model (PCM), meaning that the internal and external parameters of a pinhole camera are used instead of the rational polynomial coefficient parameters. We then derive, for the first time, an error formula for this equivalent pinhole model, demonstrating the influence of the image size on the accuracy of the reconstruction. In addition, we propose a polynomial image refinement model that minimizes the equivalent errors via the least squares method. Experiments were conducted on four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7. The results demonstrated that the reconstruction accuracy was proportional to the image size. Our polynomial image refinement model significantly enhanced the accuracy and completeness of the reconstruction, and achieved larger improvements for larger-scale images.
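
    A minimal sketch of a polynomial image refinement in the spirit described above: fit a 2D quadratic polynomial to the residuals between RFM-projected and equivalent-pinhole-projected image points via least squares, then use it to correct pixel coordinates. The residual inputs and the polynomial degree are illustrative assumptions, not the model from the paper.

```python
import numpy as np

def _design_matrix(xy):
    """Quadratic 2D polynomial basis: 1, x, y, x^2, x*y, y^2."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])

def fit_refinement(xy, residual):
    """xy: Nx2 pixel coords; residual: Nx2 (RFM minus pinhole) reprojection errors."""
    A = _design_matrix(xy)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)   # 6x2 coefficient matrix
    return coeffs

def refine(xy, coeffs):
    """Apply the fitted polynomial correction to pixel coordinates."""
    return xy + _design_matrix(xy) @ coeffs
```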

    Real-Time Dense 3D Reconstruction from Monocular Video Data Captured by Low-Cost UAVs

    Real-time 3D reconstruction enables fast dense mapping of the environment, which benefits numerous applications such as navigation or the live assessment of an emergency. In contrast to most real-time capable approaches, our method does not need an explicit depth sensor. Instead, we rely only on a video stream from a camera and its intrinsic calibration. By exploiting the self-motion of an unmanned aerial vehicle (UAV) flying with an oblique view around buildings, we estimate both the camera trajectory and depth maps for selected images with enough novel content. To create a 3D model of the scene, we rely on a three-stage processing chain. First, we estimate the rough camera trajectory using a simultaneous localization and mapping (SLAM) algorithm. Once a suitable constellation is found, we estimate depth for local bundles of images using a Multi-View Stereo (MVS) approach and then fuse this depth into a global surfel-based model. For our evaluation, we use 55 video sequences with diverse settings, consisting of both synthetic and real scenes. We evaluate not only the generated reconstruction but also the intermediate products, and achieve competitive results both qualitatively and quantitatively. At the same time, our method can keep up with a 30 fps video stream at a resolution of 768 × 448 pixels.
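
    A minimal sketch of the input to the fusion stage: back-projecting an MVS depth map into world coordinates using the camera intrinsics and the SLAM pose, ready to be merged into a global model. The surfel bookkeeping itself is omitted, and the intrinsics and pose are placeholders rather than values from the paper.

```python
import numpy as np

def backproject(depth, K, T_wc):
    """depth: HxW metric depth map; K: 3x3 intrinsics; T_wc: 4x4 camera-to-world pose.

    K and T_wc are placeholders here; they would come from calibration and SLAM.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    pixels = np.stack([u[valid], v[valid], np.ones(valid.sum())])    # 3xN homogeneous pixels
    rays = np.linalg.inv(K) @ pixels                                 # viewing rays in camera frame
    pts_cam = rays * depth[valid]                                    # scale rays to metric 3D points
    pts_hom = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])   # 4xN homogeneous points
    return (T_wc @ pts_hom)[:3].T                                    # Nx3 points in world frame
```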

    Automated Volumetric Measurements of Truckloads through Multi-View Photogrammetry and 3D Reconstruction Software

    Since wood represents an important proportion of the delivered cost, it is important to embrace and implement correct measurement procedures and technologies that provide better estimates of the volume of logs on trucks. Poor measurements not only impact the revenue obtained by haulage contractors and forest companies but may also affect their contractual business relationship. Although laser scanning has become a mature and more affordable technology in the forestry domain, it remains expensive to adopt and implement in real-life operating conditions. In this study, multi-view Structure from Motion (SfM) photogrammetry and commercial 3D image processing software were tested as an alternative method for the automated volumetric measurement of truckloads. The images were collected with a small UAV flown around logging trucks transporting Eucalyptus nitens pulplogs. Commercial photogrammetric software was used to process the images and generate 3D models of each truckload. The levels of accuracy obtained with multi-view SfM photogrammetry and 3D reconstruction in this study were comparable to those reported in previous studies with laser scanning systems for truckloads with similar logs and species. The deviations between the actual and predicted solid volume of logs on trucks ranged between –3.2% and 3.5%, with an average deviation of –0.05%. In absolute terms, the average deviation was only 0.5 m³, or 1.7%. Although several aspects must be addressed before the operational implementation of SfM photogrammetry, the results of this study demonstrate the great potential of this method as a cost-effective tool to aid in determining the solid volume of logs on trucks.
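
    A minimal sketch of one possible way to turn a truckload point cloud from an SfM reconstruction into a solid-volume estimate: rasterize the load surface onto a grid, integrate the height above an assumed bed level, and apply a solid-to-gross conversion factor. The grid size, bed height, and conversion factor are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def truckload_volume(points, bed_z, cell=0.05, solid_fraction=0.65):
    """points: Nx3 load-surface points (m); returns an estimated solid volume in m^3.

    cell size, bed_z, and solid_fraction are assumptions, not values from the study.
    """
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    top = np.full((ix.max() + 1, iy.max() + 1), -np.inf)
    np.maximum.at(top, (ix, iy), z)                  # highest point in each grid cell
    heights = np.clip(top - bed_z, 0.0, None)        # empty cells (-inf) clip to zero
    gross = heights.sum() * cell ** 2                # gross stacked volume in m^3
    return gross * solid_fraction                    # rough solid-wood volume
```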