Automatic coregistration of three-dimensional building models with image features
The aim of this article is to investigate methods for the automatic extraction of infrared (IR) textures for the roofs and facades of existing building models. We focus on the correction of the measured exterior orientation parameters of the IR camera mounted on a mobile platform. The developed method is based on point-to-point matching of features extracted from IR images with a wire-frame building model. Firstly, the extraction of different feature types is studied on a sample IR image; Förstner and intersection points are chosen to represent the image features. Secondly, the three-dimensional (3D) building model is projected into each frame of the IR video sequence using the orientation parameters; only coarse exterior orientation parameters are known. Then the automatic co-registration of the projected 3D building model with the image features is carried out over the image sequence. The matching of the model and the extracted features is applied iteratively, and the exterior orientation parameters are adjusted by least-squares adjustment. The method is tested on a dataset of a dense urban area. Finally, an evaluation of the developed method is presented with five quality parameters, i.e. the efficiency of the method and the completeness and correctness of matching and extraction.
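As an illustration of the adjustment step, the following is a minimal Python sketch, assuming the point-to-point correspondences between projected wire-frame corners and extracted Förstner/intersection points have already been established; the six exterior orientation parameters are then refined by nonlinear least squares (here with SciPy, not necessarily the exact adjustment used in the article). The projection model, focal length, and toy data are illustrative only.

```python
# Sketch: refine exterior orientation (X0, Y0, Z0, omega, phi, kappa) by
# minimising reprojection residuals between projected wire-frame corners and
# their matched image points (correspondences assumed given).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(points_3d, params, focal_px, principal_point):
    """Pinhole projection of 3D model corners with a given exterior orientation."""
    x0, y0, z0, omega, phi, kappa = params
    R = Rotation.from_euler("xyz", [omega, phi, kappa]).as_matrix()
    cam = (points_3d - np.array([x0, y0, z0])) @ R       # world -> camera frame
    uv = focal_px * cam[:, :2] / cam[:, 2:3]             # perspective division
    return uv + principal_point


def refine_exterior_orientation(model_corners, image_points, coarse_params,
                                focal_px=1000.0, principal_point=(320.0, 240.0)):
    """Least-squares adjustment of the six exterior orientation parameters."""
    pp = np.asarray(principal_point)

    def residuals(params):
        return (project(model_corners, params, focal_px, pp) - image_points).ravel()

    result = least_squares(residuals, coarse_params, method="lm")
    return result.x


if __name__ == "__main__":
    # Toy data: four building corners and their (noisy) matched image points.
    corners = np.array([[10.0, 5.0, 20.0], [12.0, 5.0, 20.0],
                        [10.0, 8.0, 20.0], [12.0, 8.0, 20.0]])
    true_params = np.array([0.0, 0.0, 0.0, 0.02, -0.01, 0.005])
    observed = project(corners, true_params, 1000.0, np.array([320.0, 240.0]))
    observed += np.random.default_rng(0).normal(0, 0.5, observed.shape)
    coarse = np.zeros(6)                                  # coarse GPS/INS values
    print(refine_exterior_orientation(corners, observed, coarse))
```

In the full method this adjustment alternates with re-matching the projected model against the extracted features, so the loop is repeated until the parameters stabilise.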
Backpack System for Capturing 3D Point Clouds of Forests
A 3D model can be useful for inventory management and monitoring of forests. For this task, we created a mobile mapping backpack system to scan forests. However, capturing this vegetation is difficult due to unreliable GNSS positioning, objects moving in the wind, unclear edges, and uneven ground to walk on. We combine LiDAR and a multi-spectral camera to capture the environment. The system is based on the Robot Operating System (ROS), and the scan frames are aligned using SLAM, supported by GNSS information if available. We describe in detail the configuration and development of our backpack system for the forest environment and outdoor vegetation, and then evaluate the system. Furthermore, we compare two open-source SLAM algorithms for ROS, as well as data collection and laser scan quality between TLS and MLS for the forest environment and outdoor vegetation. Finally, we tested the MicaSense multi-spectral camera for MLS applications and 3D data generation. We conclude that the backpack is convenient to use in forest environments: it can be carried easily along forest paths and over rough ground, and the system collects less data than TLS, which proved to be an advantage.
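As a rough illustration of the scan alignment, here is a minimal Python sketch using Open3D's point-to-plane ICP, with an optional GNSS-derived pose as the initial transform. This is only a stand-in for the scan-matching idea; the actual system runs a SLAM pipeline under ROS, and the file names in the usage comment are hypothetical.

```python
# Sketch: align consecutive LiDAR frames with point-to-plane ICP (Open3D),
# seeding the registration with a GNSS-derived initial pose when one exists.
import numpy as np
import open3d as o3d


def align_frame(source, target, init_pose=None, voxel=0.2, max_dist=0.5):
    """Register `source` onto `target`; `init_pose` is a 4x4 GNSS prior (or None)."""
    init = np.eye(4) if init_pose is None else init_pose
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    for pc in (src, tgt):                       # point-to-plane ICP needs normals
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=3 * voxel, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation                # 4x4 pose of source in the target frame


# Usage (hypothetical frame files):
# frames = [o3d.io.read_point_cloud(p) for p in ("frame_000.pcd", "frame_001.pcd")]
# T = align_frame(frames[1], frames[0])
```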
Segmentation of Forest Vegetation Layers Based on Geometric Features Extracted from 3D Point Clouds
The analysis of forest vegetation at lower heights, up to 2 m, is the focus of this work, whereas previous approaches primarily focused on trees and their stems. We calculated geometric metrics of point clouds, based on airborne, unmanned, and mobile laser scanning, to segment different vegetation growths and densities. Our results show that eigenvalue-based metrics such as planarity, linearity, and sphericity, as well as the normal change rate, are useful for differentiating forest layers in our scenario. Volume density is ineffective here, as it is highly dependent on the data collection method. Roughness and the principal component analysis in the main direction (PCA1) do not show significant differences across vegetation growth and height. This research lays the foundation for using geometric metrics to highlight growth and density changes in 3D vegetation and to differentiate forests into layers without extensive processing.
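The eigenvalue-based metrics mentioned above can be sketched as follows, assuming the point cloud is given as an N x 3 NumPy array; the formulas are the standard covariance-eigenvalue features (linearity, planarity, sphericity, normal change rate), and the neighbourhood size k is illustrative rather than the value used in the study.

```python
# Sketch: per-point eigenvalue features (linearity, planarity, sphericity,
# normal change rate) from k-nearest neighbourhoods of an N x 3 point cloud.
import numpy as np
from scipy.spatial import cKDTree


def eigen_features(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                    # k nearest neighbours per point
    feats = np.zeros((len(points), 4))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                    # 3x3 covariance of the neighbourhood
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]    # lambda1 >= lambda2 >= lambda3
        lam = np.maximum(lam, 1e-12)
        l1, l2, l3 = lam
        feats[i] = [(l1 - l2) / l1,                     # linearity
                    (l2 - l3) / l1,                     # planarity
                    l3 / l1,                            # sphericity
                    l3 / (l1 + l2 + l3)]                # normal change rate
    return feats
```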
Integration approach for manual generated single tree crown annotations
For accurate mapping of forest stands, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models dominate this task. To train a reliable model, it is crucial to have both a robust model and an accurate tree crown annotation dataset. The current method for generating these datasets still relies on manual annotation. However, tree crowns exhibit intricate contours, trees sometimes intersect with each other, and their spatial arrangement is irregular. This leads to inaccurate and incomplete annotations, including multiple tree crowns enclosed in a single annotation. Therefore, this study explores a novel approach that integrates the annotations of multiple annotators for the same region of interest and can reduce annotation inaccuracies due to personal preference and bias.
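A minimal sketch of one possible integration step, assuming each annotator's crowns for the region of interest are available as Shapely polygons: the area covered by at least a chosen number of annotators is kept. This is a simple majority-style fusion under those assumptions, not necessarily the exact scheme developed in the study.

```python
# Sketch: fuse crown polygons from several annotators by keeping the area that
# at least `min_votes` of them agree on.
from itertools import combinations
from shapely.ops import unary_union


def majority_area(annotations, min_votes=2):
    """`annotations` is a list of shapely Polygons/MultiPolygons (one per annotator)."""
    agreed = []
    for subset in combinations(annotations, min_votes):
        overlap = subset[0]
        for poly in subset[1:]:                 # intersect all polygons in the subset
            overlap = overlap.intersection(poly)
        if not overlap.is_empty:
            agreed.append(overlap)
    return unary_union(agreed)                  # area agreed on by >= min_votes annotators
```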
Digitalisation of the Forest Using Laser Scanning and Mobile Mapping
Due to climate change and natural disasters, it is becoming increasingly important to document and monitor the forest and to understand natural interrelationships. For this reason, a "digital twin" of the forest is to be created. In this work, the vegetation of a forest and a park was captured on test plots using person-carried mobile laser scanning. By computing various metrics, it was investigated to what extent the vegetation can be subdivided horizontally and vertically. In addition, opportunities and limitations of the measurement method are discussed.
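A minimal sketch of the vertical partitioning idea, assuming heights above ground have already been computed from the mobile laser scans; the layer breakpoints are illustrative and not the thresholds used in the thesis.

```python
# Sketch: stratify a height-normalised vegetation point cloud into vertical layers
# and report the share of points per layer.  Breakpoints are in metres above ground.
import numpy as np


def layer_point_share(heights, breaks=(0.1, 0.5, 2.0)):
    """`heights` are per-point heights above ground in metres."""
    layer = np.digitize(heights, bins=breaks)           # 0 ground, 1 low veg, 2 shrub, 3 canopy
    counts = np.bincount(layer, minlength=len(breaks) + 1)
    return counts / counts.sum()                        # fraction of points in each layer
```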
Combining airborne images and open data to retrieve knowledge of construction sites
Construction site planning is based on both explicit knowledge, as retrieved from regulations, and implicit knowledge arising from experience. To retrieve and formalize rules from implicit knowledge, past construction projects can be analyzed. In this paper, we present an image analysis pipeline to retrieve information on past construction sites from airborne images. We fuse machine-learning-based image analysis with georeferencing and openly available geospatial data to retrieve a detailed description, with true dimensions, of the construction site at hand.
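As a small illustration of the georeferencing step, the sketch below converts a binary construction-site mask (assumed to come from the machine-learning image analysis) into an area in square metres via the image's geotransform, using rasterio; it presumes a metric coordinate reference system and a mask aligned with the raster grid.

```python
# Sketch: turn a binary construction-site mask into a real-world area using the
# airborne image's georeferencing (pixel footprint from the GeoTIFF).
import rasterio


def site_area_m2(geotiff_path, mask):
    """`mask` is a boolean array on the GeoTIFF grid (True = construction-site pixel)."""
    with rasterio.open(geotiff_path) as src:
        px_w, px_h = src.res                    # pixel size in CRS units (assumed metres)
    return float(mask.sum()) * px_w * px_h      # area = pixel count x pixel footprint
```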
Integrating Crowd-sourced Annotations of Tree Crowns using Markov Random Field and Multispectral Information
Benefiting from advancements in algorithms and computing capabilities, supervised deep learning models offer significant advantages in accurately mapping individual tree canopy cover, which is a fundamental component of forestry management. In contrast to traditional field measurement methods, deep learning models leveraging remote sensing data circumvent access limitations and are more cost-effective. However, the performance of these models depends on the accuracy of the tree crown annotations, which are often obtained through manual labeling. The intricate features of the tree crown, characterized by irregular contours, overlapping foliage, and frequent shadowing, pose a challenge for annotators. Therefore, this study explores a novel approach that integrates the annotations of multiple annotators for the same region of interest. It further refines the labels by leveraging information extracted from multi-spectral aerial images. This approach aims to reduce annotation inaccuracies caused by personal preference and bias and to obtain a more balanced integrated annotation.
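A compact stand-in for the integration idea, assuming rasterised per-annotator crown masks and an NDVI layer derived from the multi-spectral image: annotator vote shares plus an NDVI cue form the unary term, spatial smoothness is a Potts term, and a few synchronous ICM-style sweeps approximate the MRF optimisation. The weights, the NDVI threshold, and the solver are illustrative, not those of the paper.

```python
# Sketch: integrate several annotators' rasterised crown masks with an MRF-style
# smoothing step.  Unary term = annotator vote share + NDVI cue; pairwise term =
# Potts smoothness; optimisation = a few synchronous ICM-style sweeps.
import numpy as np


def integrate_annotations(vote_masks, ndvi, beta=1.5, ndvi_weight=1.0,
                          ndvi_thresh=0.4, sweeps=5):
    votes = np.mean(vote_masks, axis=0)                  # share of annotators voting "crown"
    unary1 = votes + ndvi_weight * (ndvi > ndvi_thresh)  # evidence for label 1 (crown)
    unary0 = (1.0 - votes) + ndvi_weight * (ndvi <= ndvi_thresh)
    labels = (votes >= 0.5).astype(np.uint8)             # initial labelling: majority vote

    for _ in range(sweeps):
        nb = np.zeros_like(votes)                        # count of 4-neighbours labelled 1
        nb[1:, :] += labels[:-1, :]
        nb[:-1, :] += labels[1:, :]
        nb[:, 1:] += labels[:, :-1]
        nb[:, :-1] += labels[:, 1:]
        n_neigh = np.full_like(votes, 4.0)               # number of valid neighbours
        n_neigh[[0, -1], :] -= 1
        n_neigh[:, [0, -1]] -= 1
        # Energy of each label: -data term + beta * number of disagreeing neighbours
        e1 = -unary1 + beta * (n_neigh - nb)
        e0 = -unary0 + beta * nb
        labels = (e1 < e0).astype(np.uint8)              # pick the lower-energy label
    return labels
```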
Fusion of SAR and Multi-spectral Time Series for Determination of Water Table Depth and Lake Area in Peatlands
Peatlands, as natural carbon sinks, have a major impact on the climate balance and should therefore be monitored and protected. The hydrology of a peatland serves as an indicator of its carbon storage capacity. Hence, we investigate how suitable different remote sensing data are for monitoring the size of the open water surface and the water table depth (WTD) of a peatland ecosystem. Furthermore, we examine the potential of combining remote sensing data for this purpose. We use C-band synthetic aperture radar (SAR) data from Sentinel-1 and multi-spectral data from Sentinel-2. The radar backscatter σ0, the normalized difference water index (NDWI), and the modified normalized difference water index (MNDWI) are calculated and used to assess the WTD and the lake size. For the measurement of the lake size, we implement and investigate three methods: random forest, adaptive thresholding, and an analysis based on the Dempster–Shafer theory. Correlations between the WTD and the remote sensing observations σ0 and NDWI are investigated. When looking at the individual data sets, the results of our case study show that the VH-polarized σ0 data produces the clearest delineation of the peatland lake. However, the adaptive thresholding of the weighted fusion image of σ0-VH, σ0-VV, and MNDWI, and the random forest algorithm with all three data sets as input, prove to be the most suitable for determining the lake area. The correlation coefficients between σ0/NDWI and the WTD vary greatly and lie in the range of low to moderate correlation.
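The index computation and the adaptive thresholding of the fusion image can be sketched as follows, assuming the common Sentinel-2 band roles (B3 green, B8 NIR, B11 SWIR); the fusion weights are illustrative, and Otsu's method stands in for the adaptive threshold used in the study.

```python
# Sketch: water indices from Sentinel-2 bands and an adaptive (Otsu) threshold on
# a weighted fusion of Sentinel-1 backscatter and MNDWI.
import numpy as np
from skimage.filters import threshold_otsu


def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)           # NDWI (McFeeters)


def mndwi(green, swir):
    return (green - swir) / (green + swir + 1e-9)          # modified NDWI


def lake_mask(sigma0_vh, sigma0_vv, mndwi_img, weights=(0.4, 0.2, 0.4)):
    """Adaptive thresholding of a weighted fusion image; returns a boolean water mask."""
    def norm(x):                                            # rescale each layer to [0, 1]
        return (x - x.min()) / (x.max() - x.min() + 1e-9)

    # Low backscatter indicates open water, so the SAR layers are inverted.
    fused = (weights[0] * (1.0 - norm(sigma0_vh))
             + weights[1] * (1.0 - norm(sigma0_vv))
             + weights[2] * norm(mndwi_img))
    return fused > threshold_otsu(fused)
```

The weighting of the three layers is the main design choice here; the study tunes this fusion, whereas the values above are only placeholders.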