
    Stereo image matching using robust estimation and image analysis techniques for DEM generation

    Digital Elevation Models (DEMs) produced by digital photogrammetry workstations are often used as a component in complex Geographic Information Systems (GIS) modeling. Since the accuracy of GIS databases must be within a specified range for appropriate analysis of the information and subsequent decision making, an accurate DEM is needed. Conventional image matching techniques may be classified as either area-based or feature-based methods. These techniques cannot overcome the disparity-discontinuity problem and supply only a Digital Surface Model (DSM): matching may occur not on the terrain surface but on top of man-made objects such as houses, or on top of vegetation. To obtain a more accurate DEM from overlapping digital aerial and satellite images, a 3D terrain reconstruction method using compound techniques is proposed. Area-based image matching is used to supply dense disparities, while image edge detection and texture analysis techniques are used to find house and tree areas. Both parts are robustified in order to avoid outliers. The final DEM combines the results of image matching and image analysis and hence overcomes DEM errors caused by matching on the tops of trees or man-made objects.
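As a minimal sketch (not the paper's implementation), area-based matching with zero-mean normalized cross-correlation along an epipolar line might look like the following; the window size and disparity range are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    # zero-mean normalized cross-correlation of two equal-size patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_at(left, right, row, col, half=2, max_disp=8):
    # area-based matching: slide a (2*half+1)^2 window along the epipolar
    # line in the right image and keep the disparity with the best NCC score
    patch = left[row - half:row + half + 1, col - half:col + half + 1]
    best, best_d = -np.inf, 0
    for d in range(0, max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1]
        score = ncc(patch, cand)
        if score > best:
            best, best_d = score, d
    return best_d
```

Applying this at every pixel yields the dense disparity map the abstract refers to; the robustified version would additionally reject low-score or ambiguous matches as outliers.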

    Dictionary-based Tensor Canonical Polyadic Decomposition

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored in terms of both parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
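As an illustrative sketch only (the function name and nearest-atom selection rule are assumptions, not the paper's algorithm), the dictionary constraint can be pictured as a projection step inside an alternating update: each column of the constrained factor is replaced by its best-matching dictionary atom:

```python
import numpy as np

def project_to_dictionary(A, D):
    # Replace each column of factor A by its best-matching dictionary atom
    # (highest absolute normalized correlation), so that A's columns lie
    # exactly in the known dictionary D.
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    out = np.empty_like(A)
    for r in range(A.shape[1]):
        a = A[:, r]
        corr = np.abs(Dn.T @ (a / np.linalg.norm(a)))
        out[:, r] = D[:, corr.argmax()]
    return out
```

In a full dictionary-based CPD, a step like this would alternate with ordinary least-squares updates of the unconstrained factors.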

    High-Throughput System for the Early Quantification of Major Architectural Traits in Olive Breeding Trials Using UAV Images and OBIA Techniques

    The need for olive farm modernization has encouraged research into more efficient crop management strategies through cross-breeding programs that release new olive cultivars more suitable for mechanization and use in intensive orchards, with high-quality production and resistance to biotic and abiotic stresses. The advancement of breeding programs is hampered by the lack of efficient phenotyping methods to quickly and accurately acquire crop traits such as morphological attributes (tree vigor and vegetative growth habits), which are key to identifying desirable genotypes as early as possible. In this context, a UAV-based high-throughput system for olive breeding program applications was developed to extract tree traits in large-scale phenotyping studies under field conditions. The system consisted of UAV flight configurations, in terms of flight altitude and image overlaps, and a novel, automatic, and accurate object-based image analysis (OBIA) algorithm based on point clouds, which was evaluated in two experimental trials in the framework of a table olive breeding program, with the aim of determining the earliest date suitable for quantifying tree architectural traits. Two training systems (intensive and hedgerow) were evaluated at two very early stages of tree growth: 15 and 27 months after planting. Digital Terrain Models (DTMs) were automatically and accurately generated by the algorithm, and every olive tree was identified, independently of the training system and tree age. The architectural traits, especially tree height and crown area, were estimated with high accuracy in the second flight campaign, i.e., 27 months after planting. Differences in the quality of 3D crown reconstruction were found for the growth patterns derived from each training system. These key phenotyping traits could be used in several olive breeding programs, as well as to address some agronomical goals. In addition, the system is cost- and time-optimized, so that the requested architectural traits can be provided on the same day as the UAV flights. This high-throughput system may solve the current bottleneck of plant phenotyping, "linking genotype and phenotype," considered a major challenge for crop research in the 21st century, and bring forward the crucial time of decision making for breeders.
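A minimal sketch of how tree height and crown area can be derived once a DTM and a surface model are available (the function, threshold, and cell size are illustrative assumptions, not the OBIA algorithm itself):

```python
import numpy as np

def tree_traits(dsm, dtm, cell_area=0.01, height_thresh=0.3):
    # Canopy Height Model: subtract terrain from surface, then threshold
    # to separate crown cells from bare ground.
    chm = dsm - dtm
    canopy = chm > height_thresh           # cells belonging to the crown
    tree_height = chm.max()                # tallest point of the crown (m)
    crown_area = canopy.sum() * cell_area  # m^2, given cell_area per pixel
    return tree_height, crown_area
```

The published OBIA algorithm works on point clouds and segments individual trees first; this raster version only conveys the height-above-terrain idea.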

    Comparative Analysis of the Semantic Conditions of LoD3 3D Building Model Based on Aerial Photography and Terrestrial Photogrammetry

    3D modeling of buildings is an important method for mapping and modeling the built environment. In this study, we analyzed the differences between the semantic state of actual buildings and LoD3 3D building models generated using aerial and terrestrial photogrammetric methods. We also evaluated the accuracy of the visual representation as well as the suitability of the building geometry and texture. Our method involves collecting aerial and terrestrial photographic data and processing it using structure-from-motion (SfM) technology. The photogrammetric data were then processed using image matching algorithms and 3D reconstruction techniques to generate LoD3 3D building models. The actual semantic state of the building was identified through field surveys and reference data collection. The 3D building model was successfully built from 1201 photos and 19 ground control points. The evaluation of geometric accuracy, dimensions, and semantic completeness shows that the automatic SfM-based modeling process produces Level of Detail (LoD) 3 building models with Root Mean Square Error values <0.5 meters and with semantic completeness consistent with the original object according to the City Geography Markup Language (CityGML) standard. The facade formed by the modeling closely follows the original building, including elements such as doors, windows, and hallways.
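The RMSE figure quoted above is a standard accuracy measure against ground control points; a minimal sketch of how it is computed (not tied to the authors' processing chain):

```python
import numpy as np

def rmse_3d(model_pts, gcp_pts):
    # Root Mean Square Error of model coordinates against ground control
    # points: per-point Euclidean error, squared, averaged, square-rooted.
    d = np.asarray(model_pts, dtype=float) - np.asarray(gcp_pts, dtype=float)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```

An RMSE below 0.5 m over the 19 control points is the acceptance criterion the abstract reports.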

    Image fusion techniques for remote sensing applications

    Image fusion refers to the acquisition, processing, and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first case study considers the problem of Synthetic Aperture Radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization, and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter. Each case study also presents results achieved by applying the proposed techniques to real data.
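As a self-contained illustration of wavelet-domain fusion in general (a common choose-max rule, not the paper's multiscale Kalman processor), two images can be fused by averaging their Haar low-pass bands and keeping the stronger detail coefficients:

```python
import numpy as np

def haar2(x):
    # one-level 2D Haar transform: returns (LL, LH, HL, HH) subbands
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    # exact inverse of haar2
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    x = np.empty((2 * h, 2 * w))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

def fuse(img1, img2):
    # average the low-pass bands, keep the larger-magnitude detail coefficients
    s1, s2 = haar2(img1), haar2(img2)
    fused = [(s1[0] + s2[0]) / 2]
    for c1, c2 in zip(s1[1:], s2[1:]):
        fused.append(np.where(np.abs(c1) >= np.abs(c2), c1, c2))
    return ihaar2(*fused)
```

The choose-max rule preserves edges from whichever input image is sharper at each location, which is the basic motivation for fusing in the wavelet domain.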

    UAV Oblique Imagery with an Adaptive Micro-Terrain Model for Estimation of Leaf Area Index and Height of Maize Canopy from 3D Point Clouds

    Leaf area index (LAI) and height are two critical measures of maize crops used in ecophysiological and morphological studies for growth evaluation, health assessment, and yield prediction. However, mapping the spatial and temporal variability of LAI in fields using handheld tools and traditional techniques is a tedious and costly pointwise operation that provides information only within limited areas. The objective of this study was to evaluate the reliability of mapping LAI and height of a maize canopy from 3D point clouds generated from UAV oblique imagery with the adaptive micro-terrain model. The experiment was carried out in a field planted with three cultivars having different canopy shapes and four replicates, covering a total area of 48 × 36 m. RGB images in nadir and oblique view were acquired from the maize field at six different times during the growing season. Images were processed by Agisoft Metashape to generate 3D point clouds using the structure-from-motion method and were later processed in MATLAB to obtain a clean canopy structure, including height and density. The LAI was estimated by a multivariate linear regression model using crop canopy descriptors derived from the 3D point cloud, which account for height and leaf-density distribution along the canopy height. A simulation analysis based on a sine function effectively demonstrated the micro-terrain model from point clouds. For the ground-truth data, a randomized block design with 24 sample areas was used to manually measure LAI, height, N-pen data, and yield during the growing season. It was found that canopy height data from the 3D point clouds have a relatively strong correlation (R² = 0.89, 0.86, 0.78) with the manual measurements for the three cultivars using CH90. The proposed methodology allows cost-effective, high-resolution in-field LAI mapping through UAV 3D data as an alternative to conventional LAI assessments, even in inaccessible regions.
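The multivariate linear regression step mentioned above can be sketched as an ordinary least-squares fit of LAI against point-cloud descriptors (the function names and synthetic data are illustrative; the study's actual descriptors and coefficients are not reproduced here):

```python
import numpy as np

def fit_lai_model(descriptors, lai):
    # Ordinary least squares: LAI ~ intercept + canopy descriptors
    # (e.g. height percentiles and density features from the point cloud)
    X = np.column_stack([np.ones(len(lai)), descriptors])
    beta, *_ = np.linalg.lstsq(X, lai, rcond=None)
    return beta

def predict_lai(beta, descriptors):
    X = np.column_stack([np.ones(len(descriptors)), descriptors])
    return X @ beta
```

With the 24 manually measured sample areas as training data, a model like this maps per-plot descriptors to an LAI estimate for every location in the field.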

    Vision-Aided Autonomous Precision Weapon Terminal Guidance Using a Tightly-Coupled INS and Predictive Rendering Techniques

    This thesis documents the development of the Vision-Aided Navigation using Statistical Predictive Rendering (VANSPR) algorithm, which seeks to enhance the endgame navigation solution possible by inertial measurements alone. The eventual goal is a precision weapon that does not rely on GPS, functions autonomously, thrives in complex 3-D environments, and is impervious to jamming. The predictive rendering is performed by viewpoint manipulation of computer-generated images of target objects. A navigation solution is determined by an Unscented Kalman Filter (UKF), which corrects positional errors by comparing camera images with a collection of statistically significant virtual images. Results indicate that the test algorithm is a viable method of aiding an inertial-only navigation system to achieve the precision necessary for most tactical strikes. On 14 flight test runs, the average positional error was 166 feet at endgame, compared with an inertial-only error of 411 feet.
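A heavily simplified surrogate for the image-comparison step (not the thesis's UKF measurement model): each statistically sampled pose hypothesis is rendered, scored against the camera frame, and the best-scoring hypothesis indicates which direction the positional correction should go:

```python
import numpy as np

def best_pose(camera_img, rendered, poses):
    # Score each virtual image against the camera frame by
    # sum-of-squared-differences; return the pose of the best match.
    scores = [((camera_img - r) ** 2).sum() for r in rendered]
    return poses[int(np.argmin(scores))]
```

In the actual algorithm the comparison feeds a UKF update rather than a hard argmin, so all hypotheses contribute in proportion to their likelihood.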

    Common Data Fusion Framework : An open-source Common Data Fusion Framework for space robotics

    Multisensor data fusion plays a vital role in providing autonomous systems with environmental information crucial for reliable functioning. In this article, we summarize the modular structure of the newly developed and released Common Data Fusion Framework and explain how it is used. Sensor data are registered and fused within the Common Data Fusion Framework to produce comprehensive 3D environment representations and pose estimations. The proposed software components that model this process in a reusable manner are presented through a complete overview of the framework; the provided data fusion algorithms are then listed; and the Common Data Fusion Framework approach is exemplified through the case of 3D reconstruction from 2D images. The Common Data Fusion Framework has been deployed and tested in various scenarios, including robots performing planetary rover exploration and tracking of orbiting satellites.

    Digital photogrammetry for visualisation in architecture and archaeology

    Bibliography: leaves 117-125.

    The task of recording our physical heritage is of significant importance: our past cannot be divorced from the present, and it plays an integral part in shaping our future. This applies not only to structures that are hundreds of years old; relatively recent architectural structures also require adequate documentation if they are to be preserved for future generations. In recording such structures, the traditional 2D methods are proving inadequate. It would benefit conservationists, archaeologists, researchers, historians, and students alike if accurate and extensive digital 3D models of archaeological structures could be generated. This thesis investigates a method of creating such models using digital photogrammetry. Three different types of model were generated: 1. a simple CAD (Computer Aided Design) model; 2. an amalgamation of 3D line drawings; and 3. an accurate surface model of the building using DSMs (Digital Surface Models) and orthophotos.