
    Automatic Surface Flatness Control using Terrestrial Laser Scanning Data and the 2D Continuous Wavelet Transform


    Quantifying road roughness: multiresolution and near real-time analysis

    Road roughness is a key parameter for road construction and for assessing ride quality over the life of paved and unpaved road systems. The quarter-car (QC) model is a standard mathematical tool for estimating suspension responses and can be used for summative or pointwise analysis of vehicle response to road geometry. Transportation agencies specify roughness requirements as summative values for pavement projects, which affect construction practices and contractor pay factors. The International Roughness Index (IRI), a summative statistic of quarter-car suspension response, is widely used to characterize the overall roughness of pavement stretches but does not provide sufficient detail about the frequency or spatial distribution of roughness features. This research focuses on two pointwise approaches, continuous roughness maps and wavelet analysis, that both characterize overall roughness and identify localized features, and compares these findings with IRI results. Automated algorithms were developed to perform finite difference analysis of point cloud data collected by three-dimensional (3D) stationary terrestrial laser scans of paved and unpaved roads. This produced continuous roughness maps that characterized both spatial roughness and localized features. However, to address the computational limitations of finite difference analysis, Fourier and wavelet (discrete and continuous wavelet transform) analyses were conducted on sample profiles from the Federal Highway Administration (FHWA) Long Term Pavement Performance database. The Fourier analysis was performed by transforming profiles into the frequency domain and applying the QC filter to the transformed profile. The filtered profiles were then transformed back to the spatial domain to inspect the location of high amplitudes in the suspension rate profiles.
Finite difference analysis provides suspension responses directly in the spatial domain, whereas Fourier analysis yields information in either the frequency or the spatial domain, but not both at once. To describe both the location and the frequency content of localized features in a profile, wavelet filters were customized to separate the suspension response profiles into sub-profiles with known frequency bands. Other advantages of wavelet analysis include data compression, making inferences from compressed data, and analyzing short profiles (< 7.6 m). The proposed approaches form the basis for developing real-time autonomous algorithms for smoothness-based quality control and maintenance.
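The frequency-domain step described above (transform the profile, apply the QC filter, transform back) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual quarter-car transfer function is not reproduced here, and the `gain` function below is a hypothetical band-pass stand-in.

```python
import numpy as np

def filter_profile_fft(profile, dx, response):
    """Apply a spatial-frequency filter to a road elevation profile.

    profile  : 1-D array of elevation samples (m)
    dx       : sample spacing (m)
    response : callable mapping spatial frequency (cycles/m) to a gain;
               stands in for the quarter-car transfer function, which is
               not reproduced here.
    """
    n = len(profile)
    spectrum = np.fft.rfft(profile)        # to the frequency domain
    freqs = np.fft.rfftfreq(n, d=dx)       # cycles per metre
    filtered = spectrum * response(freqs)  # apply the QC-style gain
    return np.fft.irfft(filtered, n=n)     # back to the spatial domain

# Hypothetical smooth band-pass gain emphasising metre-scale wavelengths
gain = lambda f: np.exp(-((np.log10(np.maximum(f, 1e-6)) + 1.0) ** 2))

x = np.arange(0, 100, 0.05)  # 100 m profile sampled every 5 cm
profile = 0.01 * np.sin(2 * np.pi * x / 10) + 0.002 * np.random.randn(len(x))
suspension_like = filter_profile_fft(profile, 0.05, gain)
```

Inspecting where `suspension_like` has high amplitude then localizes roughness features in the spatial domain, as the abstract describes.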

    Critical Evaluation of Time and Cost of Terrestrial Laser Scanning for Automated Construction QA Analysis

    Many government agencies and private owners demand up-to-date information on the current condition of their facilities. Conventional QA is time-consuming and inefficient; thus, 3D terrestrial laser scanning (TLS) technology has been adopted for its fast speed and high accuracy in acquiring data. Automated QA analysis can reduce the labour and time needed, reduce costs, and improve overall project efficiency. However, very few research efforts have studied the benefits of automated QA analysis quantitatively. The TLS-based geometric QA process, built on an automated data analysis workflow, is described, and data were collected from a case study. The performance of the Verity software using an RCP file is compared with that of the Verity software using an LGS file. The results show that, in terms of QA time, automated TLS-based QA analysis with Verity using an LGS file is slightly more efficient (1.464 m²/h) than with an RCP file (1.398 m²/h). For QA cost, automated TLS-based QA analysis with Verity using an RCP file and an LGS file tended to have the same cost efficiency. With increasing gross floor area (GFA) values, the cost efficiency is higher for purchasing than for renting TLS equipment. Variations in the cost of automated QA analysis are driven by fluctuations in labour, equipment, and software costs. New costs can easily be input into the model developed in this study for further analysis.
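The throughput comparison above can be made concrete with a small sketch. The formula (scanned floor area per hour of analysis) is an assumption about how the m²/h figures were derived, and the input values below are hypothetical, chosen only to reproduce the reported efficiencies.

```python
def qa_efficiency(gfa_m2, analysis_hours):
    """QA throughput: gross floor area processed per analysis hour (m²/h)."""
    return gfa_m2 / analysis_hours

# Hypothetical inputs reproducing the abstract's reported figures
lgs_throughput = qa_efficiency(1464.0, 1000.0)  # LGS workflow: 1.464 m²/h
rcp_throughput = qa_efficiency(1398.0, 1000.0)  # RCP workflow: 1.398 m²/h
```

On these numbers the LGS workflow is about 4.7 % faster, matching the abstract's "slightly more efficient" claim.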

    Symmetry in Structural Health Monitoring

    In this Special Issue on symmetry, we mainly discuss the application of symmetry in various forms of structural health monitoring. For example, in monitoring the health of a known structure, the static or dynamic response of the structure is obtained, and different signal processing methods, including some advanced filtering methods, are used to remove the influence of environmental noise and to extract the structural feature parameters that determine the safety of the structure. These damage diagnosis methods can also be applied effectively to various types of infrastructure and mechanical equipment. For this reason, the vibration control of various structures and knowledge of random structural dynamics should be considered, which will promote the rapid development of structural health monitoring. Signal extraction and evaluation methods are also worthy of study: improvements in signal acquisition instruments and acquisition methods improve the accuracy of the data, and a good evaluation method helps to correctly understand the performance of different types of infrastructure and mechanical equipment.

    Monitoring of deformation processes during scientific and technical support of construction

    During the scientific and technical support of construction and of the reconstruction of buildings, structures, and unique objects, assessing the development of deformation processes is a pressing issue for newly erected elements, for surrounding buildings, and for underground utilities and transport infrastructure facilities that fall within the zone of influence of construction. This task is now being solved successfully by combined methods that pair traditional geodetic techniques with laser scanning performed by high-precision total stations with scanning capability.

    Development of Bridge Information Model (BrIM) for digital twinning and management using TLS technology

    In the current era of information and technology, the concept of the Building Information Model (BIM) has brought revolutionary changes to the design, construction, and management of infrastructure assets, especially bridges. In bridge engineering, the Bridge Information Model (BrIM), a specific form of BIM, provides a digital twin of the physical asset that combines geometric inspections with non-geometric data, eliminating traditional paper-based documentation and hand-written reports and enabling professionals and managers to operate more efficiently and effectively. However, concerns remain about the quality of the acquired inspection data and about using BrIM information for remedial decisions in a reliable Bridge Management System (BMS); both still rely on the knowledge and experience of the inspectors or asset managers involved and are susceptible to a certain degree of subjectivity. Therefore, this research study aims not only to demonstrate the benefits of Terrestrial Laser Scanning (TLS) as a precise, rapid, and qualitative inspection method, but also to present a novel slice-based approach for extracting a bridge's geometric Computer-Aided Design (CAD) model from TLS point clouds, contributing to BrIM development. Moreover, this study presents a comprehensive methodology for incorporating the generated BrIM into a redeveloped element-based condition assessment model, integrating a Decision Support System (DSS) to propose an innovative BMS. The methodology was implemented in a purpose-built software plugin and validated on a real case study, the Werrington Bridge, a cable-stayed bridge in New South Wales, Australia.
The findings of this research confirm the reliability of the TLS-derived 3D model in terms of data quality and the accuracy of the proposed novel slice-based method, as well as the BrIM implementation and the integration of the proposed BMS into the developed BrIM. Furthermore, the results show that the proposed integrated model addresses the subjective nature of decision-making by conducting a risk assessment and using structured decision-making tools for priority ranking of remedial actions. The findings demonstrated acceptable agreement in using the proposed BMS to rank the structural elements that require the most attention and to optimise remedial actions efficiently, preserving bridge health and safety.
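The slice-based idea of extracting cross-sectional geometry from a TLS point cloud can be sketched in a few lines. This is a minimal illustration under simplifying assumptions (axis-aligned slicing, centroid per slice), not the paper's method, which is considerably more elaborate.

```python
import numpy as np

def slice_point_cloud(points, axis=0, slice_width=0.5):
    """Partition a point cloud into slices along one axis and return
    per-slice centroids -- a toy stand-in for slice-based CAD extraction.

    points : (N, 3) array of XYZ coordinates
    """
    coords = points[:, axis]
    edges = np.arange(coords.min(), coords.max() + slice_width, slice_width)
    centroids = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (coords >= lo) & (coords < hi)  # points in this slice
        if mask.any():
            centroids.append(points[mask].mean(axis=0))
    return np.array(centroids)
```

Fitting a cross-section template to each slice, rather than taking a bare centroid, is where a real pipeline would extract the geometry needed for a CAD model.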

    Reconstruction of tubular shapes from point clouds: application to the estimation of forest geometry

    The capabilities of remote sensing technologies have increased exponentially in recent years: new sensors now provide a geometric representation of their environment in the form of point clouds with previously unrivalled accuracy. Point cloud processing has therefore become a discipline in its own right, with its own problems and many challenges to address. The core of this thesis concerns geometric modelling and introduces a fast and robust method for extracting tubular shapes from point clouds. We chose to test our methods in the demanding applicative context of forestry in order to highlight the robustness of our algorithms and their applicability to large data sets. Our methods use point normals as supplementary geometric information to reach the performance required for processing large point clouds. However, sensors do not generally provide these normals, so they must be precomputed. To preserve execution speed, our first development was therefore a fast normal estimation method: we locally approximate the point cloud geometry using smooth "patches" whose size adapts to the local complexity of the point cloud. Our work then focused on the robust extraction of tubular shapes from dense, occluded, noisy point clouds with inhomogeneous sampling density. To this end, we developed a variant of the Hough transform whose complexity is reduced thanks to the computed normals. We then coupled this work with a proposal for parametrisation-invariant active contours. This combination ensures the internal coherence of the reconstructed shapes and thereby overcomes problems related to occlusion, noise, and density variations.
Our method was validated in a complex forest environment by reconstructing tree stems and assessing its qualities against existing methods. Tree stem reconstruction opens further questions halfway between forestry and geometry; the segmentation of the trees in a forest plot is one of them. We therefore also propose a segmentation method designed to work around the defects of forest point clouds and to isolate the individual objects in a data set. Throughout this work we used modelling approaches to answer geometric questions and applied them to forestry problems. The result is a coherent processing pipeline which, although illustrated on forest data, is applicable in a variety of contexts.
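Normal estimation by local surface approximation, the first step described above, is commonly done with PCA over each point's neighbourhood: the eigenvector of the local covariance with the smallest eigenvalue approximates the surface normal. The sketch below uses plain fixed-k PCA and a brute-force neighbour search; the thesis instead uses adaptive smooth patches, which this does not reproduce.

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate per-point normals as the smallest-eigenvalue eigenvector
    of the covariance of each point's k nearest neighbours (plain PCA).

    points : (N, 3) float array of XYZ coordinates
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        dists = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dists)[:k]]  # brute-force k nearest
        cov = np.cov(nbrs.T)                  # 3x3 local covariance
        w, v = np.linalg.eigh(cov)            # eigenvalues ascending
        normals[i] = v[:, 0]                  # direction of least variance
    return normals
```

A production version would use a spatial index (k-d tree) for the neighbour search and orient the normals consistently; the sign of each estimated normal is arbitrary here.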