201 research outputs found
POINT CLOUD SEGMENTATION AND SEMANTIC ANNOTATION AIDED BY GIS DATA FOR HERITAGE COMPLEXES
Point cloud segmentation is an important first step in categorising raw point cloud data. This step is necessary in order to better manage the data and to generate derivative products, e.g. 3D GIS or HBIM. The idea presented in this paper is to use 2D GIS to support the segmentation, classification, and (early) semantic annotation of the point cloud. It derives from the fact that heritage complex sites have often already been documented in a 2D GIS, complete with attributes and entities. We used this 2D data to help segment the 3D point cloud, with the added benefit of automatically extracting the related semantic information and attaching it directly to the segmented clusters. Results show that the developed algorithm performs well with TLS data of spread-out heritage sites, with a median success rate of 93% and an average rate of 86%. While manual intervention is still required in some parts of the workflow (e.g. creation of the base shapefiles and choice of object segmentation order), the developed algorithm has been shown to significantly reduce the overall processing time and resources required for segmentation and semantic annotation of a point cloud in the case of heritage complexes.
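A minimal sketch of the core idea described above, assuming the 2D GIS footprints have already been read into Shapely polygons with their attribute dictionaries; the function name and data layout are illustrative placeholders, not the paper's implementation.

```python
# Sketch: segment a TLS point cloud by 2D GIS footprints and carry the attributes along.
import numpy as np
from shapely.geometry import Point, Polygon

def segment_by_footprints(points_xyz, footprints):
    """points_xyz : (N, 3) array of TLS points.
    footprints   : list of (shapely Polygon, attribute-dict) pairs from the 2D GIS layer.
    Returns a dict mapping footprint index -> (point indices, attributes)."""
    clusters = {}
    for i, (poly, attrs) in enumerate(footprints):
        minx, miny, maxx, maxy = poly.bounds
        # Coarse bounding-box filter before the exact point-in-polygon test
        box_mask = ((points_xyz[:, 0] >= minx) & (points_xyz[:, 0] <= maxx) &
                    (points_xyz[:, 1] >= miny) & (points_xyz[:, 1] <= maxy))
        candidates = np.flatnonzero(box_mask)
        inside = [j for j in candidates
                  if poly.contains(Point(points_xyz[j, 0], points_xyz[j, 1]))]
        # The GIS attributes become the (early) semantic annotation of the cluster
        clusters[i] = (np.array(inside, dtype=int), attrs)
    return clusters
```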
Extension of an automatic building extraction technique to airborne laser scanner data containing damaged buildings
Airborne laser scanning systems generate 3-dimensional point clouds of high density and irregular spacing. These data consist of multiple returns coming from terrain, buildings, and vegetation. The major difficulty is the extraction of object categories, usually buildings. In the field of disaster management, the detection of building damage plays an important role. The question therefore arises whether damaged buildings can also be detected by a method developed for the automatic extraction of buildings. Another purpose of this study is to extend and test an automatic building detection method, developed initially for first-echo laser scanner data, on data captured in first and last echo. In order to answer these two questions, two institutes shared their data and knowledge: the Institute of Photogrammetry and Remote Sensing (IPF, Universität Karlsruhe (TH), Germany) and the MAP-PAGE team (INSA de Strasbourg, France). The 3D LiDAR data used were captured over an area containing undamaged and damaged buildings. The results achieved for every single processing step by applying the original and the extended algorithm to the data are presented, analysed and compared. It is pointed out which buildings can be extracted by which algorithm and why some buildings remain undetected.
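A minimal sketch of one generic building-candidate step of the kind such methods rely on, not the IPF/MAP-PAGE algorithm itself: threshold a normalised DSM and use the first/last-echo height difference to reject vegetation. The grid inputs and threshold values are assumptions.

```python
# Sketch: building candidates from first/last-echo ALS grids (co-registered 2D height rasters).
import numpy as np

def building_candidates(dsm_first, dsm_last, dtm, min_height=2.5, max_echo_diff=0.3):
    """dsm_first, dsm_last, dtm : 2D arrays interpolated from first echoes, last echoes and ground.
    Returns a boolean grid of building (and possibly rubble) candidates."""
    ndsm = dsm_first - dtm            # height above ground
    echo_diff = dsm_first - dsm_last  # large over vegetation, small over solid roofs
    return (ndsm > min_height) & (echo_diff < max_echo_diff)
```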
INITIAL ASSESSMENT ON THE USE OF STATE-OF-THE-ART NERF NEURAL NETWORK 3D RECONSTRUCTION FOR HERITAGE DOCUMENTATION
In recent decades, photogrammetry has re-emerged as a viable solution for heritage documentation. Developments in various computer vision methods have helped photogrammetry to compete with laser scanning technology, the two eventually becoming complementary solutions for the purpose of heritage recording. In the last few years, artificial intelligence (AI) has progressively entered various domains, including 3D reconstruction. The Neural Radiance Fields (NeRF) method renders a 3D scene from a series of overlapping images, similar to photogrammetry. However, instead of relying on geometrical relations between the image and world spaces, it uses neural networks to recreate the so-called radiance fields. The result is a significantly faster method of recreating 3D scenes. While not designed to generate 3D models, simple computer graphics methods can be used to convert these recreated radiance fields into the familiar point cloud. In this paper, we implemented the Nerfacto architecture to recreate two instances of heritage objects and then compared them to traditional photogrammetric multi-view stereo (MVS). While the results support the initial hypothesis that NeRF is not yet capable of reaching the level of accuracy and density achieved by MVS, NeRF nevertheless shows great potential, since its processing time is a fraction of that required by MVS.
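One way such a NeRF-versus-MVS comparison can be quantified is a cloud-to-cloud distance check, sketched below with Open3D; the file names are hypothetical, and the abstract does not state that this exact metric was used.

```python
# Sketch: compare a NeRF-derived point cloud against an MVS dense cloud (cloud-to-cloud distances).
import numpy as np
import open3d as o3d

nerf_pcd = o3d.io.read_point_cloud("nerfacto_export.ply")  # cloud converted from the radiance field
mvs_pcd = o3d.io.read_point_cloud("mvs_dense.ply")         # reference photogrammetric dense cloud

# Nearest-neighbour distance from every NeRF point to the MVS cloud
dists = np.asarray(nerf_pcd.compute_point_cloud_distance(mvs_pcd))
print(f"mean C2C distance: {dists.mean():.4f} m, RMS: {np.sqrt((dists ** 2).mean()):.4f} m")
print(f"density ratio (NeRF/MVS): {len(nerf_pcd.points) / len(mvs_pcd.points):.2f}")
```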
Oblique Aerial Photography Tool for Building Inspection and Damage Assessment
Aerial photography has a long history of being employed for mapping purposes thanks to some of its main advantages, including large-area imaging from above and the minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology across a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from the four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate height of buildings, ground distances and basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of the available parameters (DEM, calibration and orientation values), user expertise and measuring capability.
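A minimal sketch of the monoplotting principle the tool builds on: intersect the viewing ray of an image measurement with a DEM to obtain a 3D ground point. The camera parameters, the `dem_height` callable and the stepping scheme are placeholders standing in for the actual orientation data and DEM access.

```python
# Sketch: monoplotting an oblique image measurement onto a DEM.
import numpy as np

def monoplot(xy_image, focal, cam_center, R, dem_height, step=0.5, max_range=2000.0):
    """xy_image   : image coordinates reduced to the principal point (same unit as focal).
    R          : 3x3 rotation from camera to world frame.
    dem_height : callable (x, y) -> terrain elevation."""
    ray_cam = np.array([xy_image[0], xy_image[1], -focal])  # viewing ray in the camera frame
    ray_world = R @ ray_cam
    ray_world /= np.linalg.norm(ray_world)
    # March along the ray until it drops below the terrain surface
    for t in np.arange(0.0, max_range, step):
        p = cam_center + t * ray_world
        if p[2] <= dem_height(p[0], p[1]):
            return p  # approximate 3D ground intersection
    return None

# Ground distance between two digitised points, for example:
# d = np.linalg.norm(monoplot(px1, f, C, R, dem)[:2] - monoplot(px2, f, C, R, dem)[:2])
```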
EXTRACTION OF ROAD MARKINGS FROM MLS DATA: A REVIEW
Nowadays, Mobile Laser Scanning (MLS) systems are increasingly used to carry out extended topographic surveys of roads. Most of them provide, for each measured point, an attribute corresponding to the return signal strength, the so-called intensity value. This value makes uncoloured MLS point clouds easier to interpret, as it helps to differentiate materials based on their albedo. In a road context, this intensity information allows one to distinguish, among other features, road markings, the main subject of this paper. However, this task is challenging. Road marking detection from dense MLS point clouds is widely studied by the research community. It concerns road management and diagnosis, intelligent traffic systems, high-definition maps, and location and navigation services. Dense MLS point clouds provided by surveyors are not processed online and are thus not directly applicable to autonomous driving, but these dense and precise data can, for instance, be used for the generation of HD reference maps. This paper presents a review of the different processing chains published in the literature. It underlines their contributions and highlights their potential limitations. Finally, a discussion and some suggestions for improvement are given.
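A minimal sketch of the simplest family of approaches covered by such reviews, namely intensity thresholding on ground points; real pipelines add range and incidence-angle correction plus geometric filtering. The file name, classification code and percentile are assumptions.

```python
# Sketch: extract road-marking candidates from an MLS tile by thresholding intensity on ground points.
import numpy as np
import laspy

las = laspy.read("mls_road.las")                  # hypothetical MLS tile
pts = np.vstack([las.x, las.y, las.z]).T
intensity = np.asarray(las.intensity, dtype=float)

ground = np.asarray(las.classification) == 2      # assumes ground already classified (ASPRS class 2)
# Retro-reflective paint returns a much stronger signal than asphalt:
threshold = np.percentile(intensity[ground], 98)  # crude, scene-dependent threshold
marking_mask = ground & (intensity > threshold)
marking_pts = pts[marking_mask]
```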
Investigation of a Combined Surveying and Scanning Device: The Trimble SX10 Scanning Total Station
Surveying fields, from geosciences to infrastructure monitoring, make use of a wide range of instruments for accurate 3D geometry acquisition. In many cases, the Terrestrial Laser Scanner (TLS) has become an attractive alternative to total station measurements thanks to its high point acquisition rate, but also to ever more capable data processing software. Nevertheless, traditional surveying techniques remain valuable for some kinds of projects. Nowadays, a few modern total stations combine their conventional capabilities with those of a laser scanner in a single device. The recent Trimble SX10 scanning total station is a survey instrument merging high-speed 3D scanning with the capabilities of an image-assisted total station. In this paper this new instrument is introduced and first compared to state-of-the-art image-assisted total stations. The paper also addresses various laser scanning projects, and the delivered point clouds are compared with those of other TLS. Directly and indirectly georeferenced projects have been carried out and are investigated in this paper, and a polygonal traverse is performed through a building. Comparisons with the results delivered by well-established survey instruments show the reliability of the Trimble SX10 for geodetic work as well as for scanning projects.
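A minimal sketch of the kind of traverse check mentioned above: propagate coordinates from observed angles and distances and report the linear misclosure at the end point. The observation values are placeholders, not the paper's measurements.

```python
# Sketch: coordinate propagation along a traverse and its linear misclosure (gon convention).
import numpy as np

def traverse(start_xy, start_azimuth_gon, angles_gon, dists_m):
    """angles_gon: horizontal angles turned at each station; dists_m: leg lengths."""
    x, y = start_xy
    az = start_azimuth_gon
    for ang, d in zip(angles_gon, dists_m):
        az = (az + ang - 200.0) % 400.0           # carry the azimuth forward
        x += d * np.sin(az * np.pi / 200.0)
        y += d * np.cos(az * np.pi / 200.0)
    return np.array([x, y])

end = traverse((1000.0, 2000.0), 150.0,
               [210.5, 189.2, 195.8, 204.6], [25.3, 31.1, 28.7, 26.4])   # placeholder observations
misclosure = np.linalg.norm(end - np.array([1000.0, 2000.0]))            # distance back to the start
```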
COMPARISON OF POINT CLOUD REGISTRATION ALGORITHMS FOR BETTER RESULT ASSESSMENT – TOWARDS AN OPEN-SOURCE SOLUTION
Terrestrial and airborne laser scanning, photogrammetry and, more generally, 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets in local coordinate systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open-source software as well as programs from the academic community are available. Because these solutions lack transparency in their computations and in their quality assessment, it was decided to develop the open-source algorithm presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, including the possibility of combining 3D data from different sources. In parallel with the presentation of the global registration methodology employed, the aim of this paper is to confront the results achieved in this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm for the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by the two selected software packages used in this study, and a reflection on these criteria, completes the comparison of the obtained results. The final aim of this paper is to validate the efficiency of the proposed method through these comparisons.
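For context, a minimal sketch of a generic pairwise baseline against which such a simultaneous multi-cloud approach can be compared: point-to-plane ICP with Open3D. This is not the open-source algorithm proposed in the paper; voxel size and distance threshold are assumed values.

```python
# Sketch: register several scans to a common reference scan with point-to-plane ICP (Open3D).
import numpy as np
import open3d as o3d

def register_to_reference(reference, scans, voxel=0.05, max_dist=0.2):
    ref = reference.voxel_down_sample(voxel)
    ref.estimate_normals()
    transforms = []
    for scan in scans:
        src = scan.voxel_down_sample(voxel)
        src.estimate_normals()
        result = o3d.pipelines.registration.registration_icp(
            src, ref, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        transforms.append(result.transformation)  # 4x4 rigid transform into the reference frame
    return transforms
```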
EVALUATION OF LOW-COST DEPTH SENSORS FOR OUTDOOR APPLICATIONS
Depth information is a key component that allows a computer to reproduce human vision in many applications, from manufacturing to robotics and autonomous driving. The Microsoft Kinect has brought depth sensing to another level, resulting in a large number of low-cost, small-form-factor depth sensors. Although these sensors can efficiently produce data over a wide dynamic range of sensing applications and within different environments, most of them are better suited to indoor applications. Operating in outdoor areas is a challenge because of undesired illumination, usually strong sunlight, or surface scattering, which degrades measurement accuracy. Therefore, after presenting the different working principles of existing depth cameras, our study aims to evaluate where two very recent sensors, the AD-FXTOF1-EBZ and the flexx2, stand with respect to outdoor environments. In particular, measurement tests are performed on different types of materials under various illumination conditions in order to evaluate the potential accuracy of such sensors.
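A minimal sketch of one accuracy metric commonly used in such evaluations: fit a plane to the depth points measured on a flat test target and report the residual noise. The metric choice and input loading are assumptions, not details stated in the abstract.

```python
# Sketch: RMS flatness error of a depth camera on a planar target (per material / illumination case).
import numpy as np

def plane_fit_rms(points_xyz):
    """points_xyz: (N, 3) points measured on a planar target by the depth camera."""
    centered = points_xyz - points_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                           # direction of smallest variance = plane normal
    residuals = centered @ normal             # signed point-to-plane distances
    return np.sqrt(np.mean(residuals ** 2))   # RMS deviation from the best-fit plane
```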
SYNTHETIC DATA GENERATION AND TESTING FOR THE SEMANTIC SEGMENTATION OF HERITAGE BUILDINGS
Over the past decade, the use of machine learning and deep learning algorithms to support 3D semantic segmentation of point clouds has significantly increased, and their impressive results have led to the application of such algorithms to the semantic modeling of heritage buildings. Nevertheless, such applications still face several significant challenges, caused in particular by the large amount of data required during training, by the lack of specific data for heritage building scenarios, and by the time-consuming data collection and annotation operations. This paper aims to address these challenges by proposing a workflow for synthetic image data generation in heritage building scenarios. Specifically, the procedure allows for the generation of multiple rendered images from various viewpoints based on a 3D model of a building. Additionally, it enables the generation of per-pixel segmentation maps associated with these images. In the first part, the procedure is tested by generating a synthetic simulation of a real-world scenario using the case study of the Spedale del Ceppo. In the second part, several experiments are conducted to assess the impact of synthetic data during training. Specifically, three neural network architectures are trained using the generated synthetic images, and their performance in predicting the corresponding real scenarios is evaluated.
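A minimal sketch of how such rendered image / per-pixel segmentation map pairs can be fed to a segmentation network during training; the directory layout, file naming and label encoding are assumptions rather than the paper's actual pipeline.

```python
# Sketch: PyTorch dataset pairing synthetic renders with their per-pixel class-index maps.
import os
import numpy as np
from PIL import Image
import torch
from torch.utils.data import Dataset

class SyntheticHeritageDataset(Dataset):
    def __init__(self, root):
        self.image_dir = os.path.join(root, "renders")
        self.mask_dir = os.path.join(root, "masks")   # class index encoded per pixel
        self.names = sorted(os.listdir(self.image_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        image = np.array(Image.open(os.path.join(self.image_dir, name)).convert("RGB"))
        mask = np.array(Image.open(os.path.join(self.mask_dir, name)))  # HxW label map
        image = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
        return image, torch.from_numpy(mask).long()
```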
- …