
    A survey of sag monitoring methods for power grid transmission lines

    The transmission line is a fundamental asset of the power grid. The sag of a transmission line between two support towers requires accurate real-time monitoring in order to avoid health and safety hazards or power failures. In this paper, state-of-the-art methods for transmission line sag monitoring are thoroughly reviewed and compared. Both direct methods, which use video or images of the transmission line, and indirect methods, which exploit the relationships between sag and line parameters, are investigated. Sag prediction methods and relevant industry standards are also examined. Based on this investigation, future research challenges are outlined and recommendations are made on the choice of sag monitoring method for different applications.
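The indirect methods mentioned above estimate sag from line parameters such as span length, horizontal tension, and conductor weight. A minimal sketch of the standard parabolic and catenary sag formulas for a level span (all numeric values hypothetical, not taken from the survey):

```python
import math

def sag_parabolic(span_m: float, tension_n: float, weight_n_per_m: float) -> float:
    """Parabolic approximation of mid-span sag: sag = w * L^2 / (8 * T)."""
    return weight_n_per_m * span_m ** 2 / (8.0 * tension_n)

def sag_catenary(span_m: float, tension_n: float, weight_n_per_m: float) -> float:
    """Exact catenary mid-span sag: sag = a * (cosh(L / (2a)) - 1),
    with catenary constant a = T / w (horizontal tension over unit weight)."""
    a = tension_n / weight_n_per_m
    return a * (math.cosh(span_m / (2.0 * a)) - 1.0)

# Hypothetical line: 300 m span, 20 kN horizontal tension, 15 N/m conductor weight
span, tension, weight = 300.0, 20_000.0, 15.0
print(sag_parabolic(span, tension, weight))  # 8.4375 m
print(sag_catenary(span, tension, weight))   # slightly larger, ~8.45 m
```

For typical spans the parabolic approximation is within centimetres of the exact catenary value, which is why indirect monitoring methods can trade between the two forms.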

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged by the management and tracking of large-scale highway asset inventories. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and its affordability is increasing steadily. Consequently, the big data generated by a growing AV fleet equipped with LiDAR will present substantial challenges and opportunities. A proper understanding of the data volumes generated by this technology will help agencies make decisions regarding storage, management, and transmission of the data. The raw sensor data shrink substantially after filtering and processing according to the Cache County Road Manual and storage in the ASPRS-recommended (.las) file format. This pilot study found that, when the road centerline is used as the vehicle trajectory, a larger portion of the data falls within the right-of-way section than with the actual vehicle trajectory in Cache County, UT. It also found a positive relation between data size and vehicle speed for the travel-lane sections, given the nature of the selected highway environment.
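The storage-planning point above can be illustrated with a back-of-envelope estimator. This is a sketch under stated assumptions: the scan rate, the per-point byte count, and the geometry are all hypothetical, not figures from the study.

```python
def points_per_meter(points_per_second: float, speed_m_s: float) -> float:
    # Naively, a faster vehicle spreads the same scan rate over more road,
    # so raw points per metre of travel fall as speed rises.
    return points_per_second / speed_m_s

def raw_gb_per_km(points_per_second: float, speed_m_s: float,
                  bytes_per_point: int = 28) -> float:
    # 28 bytes approximates one uncompressed LAS point record (assumption);
    # filtering and .las storage shrink this figure substantially.
    return points_per_meter(points_per_second, speed_m_s) * 1000 * bytes_per_point / 1e9

# Hypothetical sensor: 600k points/s at 20 m/s (~45 mph)
print(points_per_meter(600_000, 20.0))  # 30000.0 points per metre of travel
print(raw_gb_per_km(600_000, 20.0))     # 0.84 GB of raw points per km
```

Note that this naive per-kilometre estimate falls with speed; the positive speed-size relation the study reports for travel lanes is specific to its highway environment and analysis sections.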

    State of research in automatic as-built modelling

    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001. Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets attracts the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on geometric modelling. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling. We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting parts of this research.

    A Pipeline of 3D Scene Reconstruction from Point Clouds

    3D technologies are becoming increasingly popular as their applications in industrial, consumer, entertainment, healthcare, education, and governmental domains grow in number. According to market predictions, the total 3D modeling and mapping market is expected to grow from $1.1 billion in 2013 to $7.7 billion by 2018. Thus, 3D modeling techniques for different data sources are urgently needed. This thesis addresses techniques for automated point cloud classification and the reconstruction of 3D scenes (including terrain models, 3D buildings, and 3D road networks). First, georeferenced binary image processing techniques were developed for various point cloud classifications. Second, robust methods for the pipeline from the original point cloud to 3D model construction were proposed. Third, reconstruction of 3D models at levels of detail (LoDs) 1-3, as defined by CityGML, was demonstrated. Fourth, different data sources for 3D model reconstruction were studied, and the strengths and weaknesses of each were addressed. Mobile laser scanning (MLS), unmanned aerial vehicle (UAV) images, airborne laser scanning (ALS), and the Finnish National Land Survey's open geospatial data sources, e.g. a topographic database, were employed as test data. Among these data sources, MLS data from three different systems were explored, and three different densities of ALS point clouds (0.8, 8, and 50 points/m²) were studied. The results were compared with reference data, such as an orthophoto with a ground sample distance of 20 cm or reference points measured with existing software, to evaluate their quality. The results showed that 74.6% of building roofs were reconstructed with the automated process. The resulting building models had an average height deviation of 15 cm. A total of 6% of model points deviated by more than one pixel from the laser points, and 2.5% by more than two pixels; the pixel size was determined by the average spacing of the input laser points. The 3D roads were reconstructed with an average width deviation of 22 cm and an average height deviation of 14 cm. The results demonstrated that 93.4% of building roofs were correctly classified from sparse ALS data and that 93.3% of power line points were detected in the six sets of dense ALS data located in forested areas. This study demonstrates the operability of 3D model construction at LoDs 1-3 via the proposed methodologies and datasets. The study is beneficial to future applications, such as 3D-model-based navigation, the updating of 2D topographic databases into 3D maps, and rapid, large-area 3D scene reconstruction.

    Toward knowledge-based automatic 3D spatial topological modeling from LiDAR point clouds for urban areas

    The processing of very large LiDAR datasets is costly and necessitates automatic 3D modeling approaches. In addition, incomplete point clouds caused by occlusion and uneven density, together with the uncertainties in LiDAR data processing, make it difficult to automatically create semantically enriched 3D models. This research work aims at developing new solutions for the automatic creation of complete 3D geometric models with semantic labels from incomplete point clouds. A framework integrating knowledge about objects in urban scenes into 3D modeling is proposed to improve the completeness of 3D geometric models, using qualitative reasoning based on semantic information about objects and their components and on their geometric and spatial relations. Moreover, we aim at taking advantage of this qualitative knowledge of objects in automatic feature recognition and, further, in the creation of complete 3D geometric models from incomplete point clouds.
    To achieve this goal, several algorithms are proposed for automatic segmentation, the identification of topological relations between object components, feature recognition, and the creation of complete 3D geometric models. (1) Machine learning solutions were proposed for automatic semantic segmentation and CAD-like segmentation of objects with complex structures. (2) An algorithm was proposed to efficiently identify topological relationships between object components extracted from point clouds in order to assemble a Boundary Representation model. (3) The integration of object knowledge with feature recognition was developed to automatically infer semantic labels for objects and their components. To deal with uncertain information, a rule-based automatic uncertain-reasoning solution was developed to recognize building components from uncertain information extracted from point clouds. (4) A heuristic method for creating complete 3D geometric models was designed using building knowledge, the geometric and topological relations of building components, and the semantic information obtained from feature recognition. Finally, the proposed framework for improving automatic 3D modeling from point clouds of urban areas was validated by a case study aimed at creating a complete 3D building model. Experiments demonstrate that integrating knowledge into the steps of 3D modeling is effective for creating a complete building model from incomplete point clouds.
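Identifying topological relations between extracted components, as in step (2) above, can be sketched with a simple adjacency test. This is an illustrative minimal version, not the thesis's algorithm; a real pipeline would use spatial indexing and richer relation types than a single "touching" predicate.

```python
import numpy as np

def components_touch(pts_a: np.ndarray, pts_b: np.ndarray, tol: float = 0.05) -> bool:
    """Qualitative 'touching' relation between two point-cloud components:
    true when the minimum inter-point distance is below tol (metres).
    Brute force O(n*m) for clarity; a k-d tree would scale far better."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return bool(d.min() <= tol)

# Toy components: a 'wall' edge and a 'roof' edge meeting at z = 3
wall = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 3.0, 30)])
roof = np.array([[x, 0.0, 3.0] for x in np.linspace(0.0, 2.0, 20)])
print(components_touch(wall, roof))  # True: they share the corner (0, 0, 3)
```

Pairwise predicates like this one provide the adjacency graph from which a Boundary Representation model can be assembled.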

    Recommendations for the quantitative analysis of landslide risk

    This paper presents recommended methodologies for the quantitative analysis of landslide hazard, vulnerability, and risk at different spatial scales (site-specific, local, regional, and national), as well as for the verification and validation of the results. The methodologies described focus on evaluating the probabilities of occurrence of different landslide types with certain characteristics. Methods used to determine the spatial distribution of landslide intensity, to characterise the elements at risk, to assess the potential degree of damage, to quantify the vulnerability of the elements at risk, and to perform the quantitative risk analysis are also described. The paper is intended for use by scientists and practising engineers, geologists, and other landslide experts. (JRC.H.5 - Land Resources Management)
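Quantitative landslide risk for an element at risk is commonly expressed as a chained product of an event probability, conditional probabilities, vulnerability, and value. A minimal sketch with hypothetical numbers (the exact decomposition varies by application and is not taken from this paper):

```python
def annual_risk(p_event: float, p_reach: float, p_impact: float,
                vulnerability: float, element_value: float) -> float:
    """Common QRA chain for property: R = P(L) * P(T|L) * P(S|T) * V * E,
    i.e. annual landslide probability, probability the landslide reaches
    the element, probability of impact given reach, degree of loss (0-1),
    and the value of the element at risk."""
    return p_event * p_reach * p_impact * vulnerability * element_value

# Hypothetical: 1-in-100-year event, 50% chance of reaching the element,
# 80% chance of impact given reach, vulnerability 0.3, value 200,000
print(annual_risk(0.01, 0.5, 0.8, 0.3, 200_000))  # ~240 per year
```

Multiplying the conditional terms in sequence is what makes the analysis "quantitative": each factor can be estimated, verified, and validated independently.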

    Atmospheric propagation issues relevant to optical communications

    Atmospheric propagation issues relevant to space-to-ground optical communications for near-Earth applications are studied. Propagation effects, current optical communication activities, potential applications, and communication techniques are surveyed. It is concluded that a direct-detection space-to-ground link using redundant receiver sites and temporal encoding is likely to be employed to transmit Earth-sensing satellite data to the ground at some time in the future. Low-level, long-term studies of link availability, fading statistics, and turbulence climatology are recommended to support this type of application.

    Installation Quality Inspection for High Formwork Using Terrestrial Laser Scanning Technology

    Current inspection of the installation quality of high formwork is conducted by site managers based on personal experience and intuition. This non-systematic inspection is laborious, and it is difficult to obtain accurate dimension measurements for high formwork. This study proposes a method that uses terrestrial laser scanning (TLS) technology to collect full-range measurements of a high formwork and develops a genetic algorithm (GA)-optimized artificial neural network (ANN) model to improve measurement accuracy. First, a small-scale high formwork model set was built in the lab for scanning. Then, the collected multi-scan data were registered in a common reference system, and the RGB values and the symmetry of the structure were used to extract the poles and tubes of the model set, removing all irrelevant data. Third, all the cross points of poles and tubes were generated. Next, the model set, positioned on moving equipment, was scanned at different specified locations to collect sufficient data to develop a GA-ANN model that generates accurate estimates of the point coordinates, so that millimetre-level accuracy of the dimension measurements can be achieved. Validation experiments were conducted both on another model set and on a real high formwork. The successful applications suggest that the proposed method is superior to other common techniques for obtaining the data needed to accurately measure the overall structure dimensions, in terms of data accuracy, cost, and time. The study thus proposes an effective method for the installation quality inspection of high formwork, especially when inspection cannot be properly carried out because of the cost of common inspection methods.

    Adaptive Methods for Point Cloud and Mesh Processing

    Point clouds and 3D meshes are widely used in numerous applications ranging from games to virtual reality to autonomous vehicles. This dissertation proposes several approaches for noise removal and calibration of noisy point cloud data, as well as 3D mesh sharpening methods. Order statistic filters have proven very successful in image processing and other domains. Different variations of order statistic filters originally proposed for image processing are extended to point cloud filtering in this dissertation, and a new adaptive vector median filter is proposed for removing noise and outliers from noisy point cloud data. The major contributions of this research lie in four aspects: 1) four order statistic algorithms are extended, and one adaptive filtering method is proposed, for noisy point clouds, with improved results such as the preservation of significant features; these methods are applied to standard models, synthetic models, and real scenes; 2) a hardware acceleration of the proposed method for filtering point clouds is implemented on multicore processors using the Microsoft Parallel Patterns Library; 3) a new method for aerial LiDAR data filtering is proposed, whose objective is to enable automatic extraction of ground points from aerial LiDAR data with minimal human intervention; and 4) a novel method for mesh color sharpening using the discrete Laplace-Beltrami operator is proposed. Median and order-statistics-based filters are widely used in signal and image processing because they can easily remove outlier noise while preserving important features. This dissertation demonstrates a wide range of results with the median filter, vector median filter, fuzzy vector median filter, adaptive mean, adaptive median, and adaptive vector median filter on point cloud data. The experiments show that large-scale noise is removed while important features of the point cloud are preserved within reasonable computation time. Quantitative criteria (e.g., complexity, Hausdorff distance, and root mean squared error (RMSE)), as well as qualitative criteria (e.g., the perceived visual quality of the processed point cloud), are employed to assess the performance of the filters in various cases corrupted by different noise models. The adaptive vector median filter is further optimized for denoising and ground filtering of aerial LiDAR point clouds, and is accelerated on multi-core CPUs using the Microsoft Parallel Patterns Library. In addition, this dissertation presents a new method for mesh color sharpening using the discrete Laplace-Beltrami operator, an approximation of second-order derivatives on irregular 3D meshes. The one-ring neighborhood is used to compute the operator, and the color of each vertex is updated by adding the Laplace-Beltrami operator of the vertex color, weighted by a factor, to its original value. Different discretizations of the Laplace-Beltrami operator have been proposed for the geometric processing of 3D meshes; this work applies several of them to sharpening 3D mesh colors and compares their performance. Experimental results demonstrate the effectiveness of the proposed algorithms.
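The vector median idea underlying the filters above can be sketched in a few lines: in a neighborhood of 3D points, the vector median is the member whose summed distance to all other members is smallest. This is an illustrative brute-force version, not the dissertation's optimized or adaptive implementation.

```python
import numpy as np

def vector_median(neighborhood: np.ndarray) -> np.ndarray:
    """Vector median of a set of 3D points: the member point whose summed
    Euclidean distance to all other members is smallest."""
    d = np.linalg.norm(neighborhood[:, None, :] - neighborhood[None, :, :], axis=2)
    return neighborhood[d.sum(axis=1).argmin()]

def vm_filter(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Replace each point by the vector median of its k nearest neighbours
    (including itself). Brute-force neighbour search for clarity."""
    out = np.empty_like(points)
    for i, p in enumerate(points):
        dist = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(dist)[:k]]
        out[i] = vector_median(nbrs)
    return out

# Three collinear inliers and one far outlier
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 2.0],
                [50.0, 50.0, 50.0]])
print(vm_filter(pts, k=4)[-1])  # [0. 0. 1.] - the outlier snaps to an inlier
```

Because the output is always an existing member of the neighborhood, the filter removes outliers without inventing new positions, which is why these filters preserve sharp features better than averaging.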

    Automatic Retrieval of Skeletal Structures of Trees from Terrestrial Laser Scanner Data

    Research on forest ecosystems receives high attention, especially nowadays with regard to the sustainable management of renewable resources and to climate change. In particular, accurate information on the 3D structure of a tree is important for forest science and bioclimatology, but also for commercial applications. Conventional methods to measure geometric plant features are labor- and time-intensive; for detailed analysis, trees have to be cut down, which is often undesirable. Here, Terrestrial Laser Scanning (TLS) provides a particularly attractive tool because of its contactless measurement technique: the object geometry is reproduced as a 3D point cloud. The objective of this thesis is the automatic retrieval of the spatial structure of trees from TLS data. We focus on forest scenes with comparably high stand density and the many occlusions resulting from it. The varying level of detail of TLS data poses a big challenge. We present two fully automatic methods, with complementary properties, to obtain skeletal structures from scanned trees. The first method retrieves the entire tree skeleton from the 3D data of co-registered scans. The branching structure is obtained from a voxel-space representation by searching paths from branch tips to the trunk, with the trunk determined in advance from the 3D points. The skeleton of a tree is generated as a 3D line graph. Besides 3D coordinates and range, a scan provides 2D indices from the intensity image for each measurement. This is exploited in the second method, which processes individual scans. Furthermore, we introduce a novel concept for managing TLS data that facilitated the research work. Initially, the range image is segmented into connected components. We describe a procedure to retrieve the boundary of a component that is capable of tracing inner depth discontinuities. A 2D skeleton is generated from the boundary information and used to decompose the component into sub-components. A Principal Curve is computed from the 3D point set associated with each sub-component. The skeletal structure of a connected component is summarized as a set of polylines. Objective evaluation of the results remains an open problem because the task itself is ill-defined: there exists no clear definition of what the true skeleton should be with respect to a given point set. Consequently, we are not able to assess the correctness of the methods quantitatively, but have to rely on visual assessment of the results and provide a thorough discussion of the particularities of both methods. We present experimental results for both methods. The first method efficiently retrieves full tree skeletons, which approximate the branching structure; the level of detail is mainly governed by the voxel space, and therefore smaller branches are reproduced inadequately. The second method retrieves partial skeletons of a tree with high reproduction accuracy; it is sensitive to noise in the boundary, but the results are very promising, and there are plenty of possibilities to enhance its robustness. The combination of the strengths of both presented methods needs to be investigated further and may lead to a robust way to obtain complete tree skeletons from TLS data automatically.
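The first method's voxel-space path search can be sketched as a breadth-first search over occupied voxels, recovering each branch path by walking parent links back from a tip. This is an illustrative minimal version with 26-connectivity and a hypothetical voxel size; the thesis's actual algorithm is considerably more elaborate.

```python
import numpy as np
from collections import deque

def voxelize(points: np.ndarray, size: float = 0.1) -> set:
    """Map 3D points to the set of occupied integer voxel coordinates."""
    return {tuple(v) for v in np.floor(points / size).astype(int)}

def skeleton_paths(voxels: set, root: tuple, tips: list) -> list:
    """BFS over 26-connected occupied voxels from the trunk-base voxel;
    a branch path is recovered by walking parent links back from each tip."""
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    parent = {root: None}
    q = deque([root])
    while q:
        v = q.popleft()
        for o in offsets:
            n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if n in voxels and n not in parent:
                parent[n] = v
                q.append(n)
    paths = []
    for tip in tips:
        path, v = [], tip
        while v is not None:        # walk back to the root
            path.append(v)
            v = parent.get(v)
        paths.append(path[::-1])    # reorder as root -> tip
    return paths

# Toy 'trunk': a straight column of four voxels
line = {(0, 0, z) for z in range(4)}
print(skeleton_paths(line, (0, 0, 0), [(0, 0, 3)])[0])
# [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3)]
```

The voxel size directly sets the level of detail, which matches the observation above that smaller branches are reproduced inadequately when the voxel space is coarse.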