1,457 research outputs found

    Enabling Neural Radiance Fields (NeRF) for Large-scale Aerial Images -- A Multi-tiling Approach and the Geometry Assessment of NeRF

    Full text link
    Neural Radiance Fields (NeRF) offer the potential to benefit 3D reconstruction tasks, including aerial photogrammetry. However, the scalability and accuracy of the inferred geometry are not well documented for large-scale aerial assets, since such datasets usually result in very high memory consumption and slow convergence. In this paper, we aim to scale NeRF on large-scale aerial datasets and provide a thorough geometry assessment of NeRF. Specifically, we introduce a location-specific sampling technique as well as a multi-camera tiling (MCT) strategy to reduce memory consumption during image loading for RAM and representation training for GPU memory, and to increase the convergence rate within tiles. MCT decomposes a large-frame image into multiple tiled images with different camera models, allowing these small-frame images to be fed into the training process as needed for specific locations without a loss of accuracy. We implement our method on a representative approach, Mip-NeRF, and compare its geometry performance with three photogrammetric MVS pipelines on two typical aerial datasets against LiDAR reference data. Both qualitative and quantitative results suggest that the proposed NeRF approach produces better completeness and object details than traditional approaches, although as of now, it still falls short in terms of accuracy.
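    The tiling idea described in the abstract can be sketched in a few lines: cropping a region out of a large frame leaves the focal length unchanged but shifts the principal point by the tile's pixel offset, so each tile behaves as an independent small-frame camera. The following is an illustrative sketch of that geometric relationship, not the paper's implementation; the function name and tile layout are assumptions.

```python
import numpy as np

def tile_camera_models(K, width, height, tile_size):
    """Split a large-frame pinhole camera into per-tile camera models.

    Each tile keeps the original focal lengths; only the principal
    point (cx, cy) is shifted by the tile's pixel origin, so the tile
    can be fed to training as a standalone small-frame image.
    """
    tiles = []
    for y0 in range(0, height, tile_size):
        for x0 in range(0, width, tile_size):
            Kt = K.copy()
            Kt[0, 2] -= x0   # shift principal point cx into tile frame
            Kt[1, 2] -= y0   # shift principal point cy into tile frame
            w = min(tile_size, width - x0)
            h = min(tile_size, height - y0)
            tiles.append({"origin": (x0, y0), "size": (w, h), "K": Kt})
    return tiles

# Example: a 4096x3072 aerial frame split into 1024-pixel tiles
K = np.array([[3000.0, 0.0, 2048.0],
              [0.0, 3000.0, 1536.0],
              [0.0, 0.0, 1.0]])
tiles = tile_camera_models(K, 4096, 3072, 1024)
print(len(tiles))  # 12
```

    A pixel at image coordinates (u, v) inside a tile then projects through Kt exactly as (u + x0, v + y0) would through the original K, which is why the decomposition loses no accuracy.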

    Classification and information structure of the Terrestrial Laser Scanner: methodology for analyzing the registered data of Vila Vella, historic center of Tossa de Mar

    Get PDF
    This paper presents a methodology for an architectural survey based on Terrestrial Laser Scanning (TLS) technology, conceived not as a simple measurement and representation exercise but as a means of understanding the projects being studied, starting from analysis as a process of distinguishing and separating the parts of a whole in order to know their principles or elements. As a case study we start from the Vila Vella recording, conducted by the City's Virtual Modeling Laboratory in 2008, taken up from the start with respect to registration, georeferencing, filtering and handling. A later stage decomposes and recomposes the data, in terms of floor plan and facades, using semiautomatic classification techniques for the detection of vegetation and for relating the planes of the surfaces, reorganizing the information from 3D data into 2D and 2.5D. Information management, together with the characteristics of the case study presented, guides the development of methods for building and exploiting new databases to be used by Geographic Information Systems and Remote Sensing.


    Evaluation of surface defect detection in reinforced concrete bridge decks using terrestrial LiDAR

    Get PDF
    Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology’s capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument and as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The LiDAR point clouds contain information that can provide quantitative surface condition information, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, reporting the location, volume, and area of spalls. The results were visually and numerically displayed in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure data collected are of high quality and useful for bridge condition evaluation. When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
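    The spall-quantification step the abstract describes (locating defects and estimating their area and volume from a point cloud) can be sketched with plain NumPy: fit a reference plane to the deck surface, flag points that fall below it by more than a depth threshold, and accumulate area and volume over an occupancy grid. This is a hypothetical minimal sketch under those assumptions, not the ArcPy tool from the study; the function name and thresholds are invented for illustration.

```python
import numpy as np

def detect_spalls(points, depth_thresh=0.01, cell=0.05):
    """Estimate spall area (m^2) and volume (m^3) from deck points (N, 3).

    A least-squares plane z = a*x + b*y + c models the intact deck;
    points more than depth_thresh metres below it are spall candidates.
    Area is counted over occupied grid cells of size `cell`; volume
    sums the maximum depth observed per cell.
    """
    A = np.c_[points[:, :2], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs       # signed height above plane
    mask = residuals < -depth_thresh
    if not mask.any():
        return 0.0, 0.0
    spall_xy = points[mask, :2]
    depths = -residuals[mask]                   # positive depth below plane
    cells = {}
    for idx, d in zip(map(tuple, np.floor(spall_xy / cell).astype(int)), depths):
        cells[idx] = max(cells.get(idx, 0.0), d)
    area = len(cells) * cell * cell
    volume = sum(cells.values()) * cell * cell
    return area, volume

# Synthetic deck: flat 2 m x 2 m surface with a 0.2 m x 0.2 m, 5 cm deep spall
rng = np.random.default_rng(1)
flat = np.c_[rng.uniform(0, 2, (5000, 2)), np.zeros(5000)]
dip = np.c_[rng.uniform(0.9, 1.1, (200, 2)), np.full(200, -0.05)]
area, volume = detect_spalls(np.vstack([flat, dip]))
```

    In practice the real pipeline must also handle incidence-angle effects and uneven point density, which this toy version ignores.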

    Supporting multi-resolution out-of-core rendering of massive LiDAR point clouds through non-redundant data structures

    Get PDF
    This is an Accepted Manuscript of an article published by Taylor & Francis in INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE on 28 Nov 2018, available at: https://doi.org/10.1080/13658816.2018.1549734. [Abstract]: In recent years, the evolution and improvement of LiDAR (Light Detection and Ranging) hardware have increased the quality and quantity of the gathered data, making their storage, processing and management particularly challenging. In this work we present a novel, multi-resolution, out-of-core technique for web-based visualization, implemented through a non-redundant data point organization method, which we call Hierarchically Layered Tiles (HLT), and a tree-like structure called Tile Grid Partitioning Tree (TGPT). The design of these elements is mainly focused on attaining very low levels of memory consumption, disk storage usage and network traffic on both client and server side, while delivering high-performance interactive visualization of massive LiDAR point clouds (up to 28 billion points) in multiplatform environments (mobile devices or desktop computers). HLT and TGPT were incorporated and tested in ViLMA (Visualization for LiDAR data using a Multi-resolution Approach), our own web-based visualization software specially designed to work with massive LiDAR point clouds. This research was supported by Xunta de Galicia under the Consolidation Programme of Competitive Reference Groups, co-funded by ERDF funds from the EU [Ref. ED431C 2017/04]; the Consolidation Programme of Competitive Research Units, co-funded by ERDF funds from the EU [Ref. R2016/037]; Xunta de Galicia (Centro Singular de Investigación de Galicia accreditation 2016/2019) and the European Union (European Regional Development Fund, ERDF) under Grant [Ref. ED431G/01]; and the Ministry of Economy and Competitiveness of Spain and ERDF funds from the EU [TIN2016-75845-P].
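    The non-redundancy property the abstract emphasizes (no point stored at more than one level of detail) can be illustrated with a toy layering scheme: level 0 holds a coarse subset, and each deeper level stores only the additional points needed for the next refinement, so rendering at a given level means taking the union of all shallower layers. This sketch only illustrates that general idea; it is not the HLT/TGPT structure from the paper and ignores spatial tiling entirely.

```python
import numpy as np

def layer_points(points, base=1000, factor=4):
    """Assign each point index to exactly one resolution layer.

    A random permutation approximates a uniform subsample, so layer 0
    is a coarse overview and each subsequent layer adds `factor` times
    more points. No index appears in two layers (non-redundancy).
    """
    rng = np.random.default_rng(0)
    order = rng.permutation(len(points))
    layers, start, size = [], 0, base
    while start < len(points):
        layers.append(order[start:start + size])
        start += size
        size *= factor
    return layers

def points_for_lod(layers, lod):
    """The render set for a level of detail is the union of layers 0..lod."""
    return np.concatenate(layers[:lod + 1])

pts = np.zeros((21000, 3))
layers = layer_points(pts)
print([len(l) for l in layers])  # [1000, 4000, 16000]
```

    Because each point is stored once, total disk usage equals the raw cloud size regardless of how many detail levels exist, which is the key saving over redundant LOD pyramids that duplicate coarse points at every level.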

    VISUAL SEMANTIC SEGMENTATION AND ITS APPLICATIONS

    Get PDF
    This dissertation addresses the difficulties of semantic segmentation when dealing with an extensive collection of images and 3D point clouds. Due to the ubiquity of digital cameras that help capture the world around us, as well as the advanced scanning techniques that are able to record 3D replicas of real cities, the sheer amount of visual data available presents many opportunities for both academic research and industrial applications. But the mere quantity of data also poses a tremendous challenge. In particular, the problem of distilling useful information from such a large repository of visual data has attracted ongoing interest in the fields of computer vision and data mining. Structural semantics are fundamental to understanding both natural and man-made objects. Buildings, for example, are like languages in that they are made up of repeated structures or patterns that can be captured in images. In order to find these recurring patterns in images, I present an unsupervised frequent visual pattern mining approach that goes beyond co-location to identify spatially coherent visual patterns, regardless of their shape, size, location and orientation. First, my approach categorizes visual items from scale-invariant image primitives with similar appearance using a suite of polynomial-time algorithms that have been designed to identify consistent structural associations among visual items, representing frequent visual patterns. After detecting repetitive image patterns, I use unsupervised and automatic segmentation of the identified patterns to generate more semantically meaningful representations. The underlying assumption is that pixels capturing the same portion of image patterns are visually consistent, while pixels that come from different backdrops are usually inconsistent. I further extend this approach to perform automatic segmentation of foreground objects from an Internet photo collection of landmark locations.
    New scanning technologies have successfully advanced the digital acquisition of large-scale urban landscapes. In addressing semantic segmentation and reconstruction of this data using LiDAR point clouds and geo-registered images of large-scale residential areas, I develop a complete system that simultaneously uses classification and segmentation methods to first identify different object categories and then apply category-specific reconstruction techniques to create visually pleasing and complete scene models.

    City-Scaled Digital Documentation: A Comparative Analysis of Digital Documentation Technologies for Recording Architectural Heritage

    Get PDF
    The historic preservation field, enabled by advances in technology, has demonstrated an increased interest in digitizing cultural heritage sites and historic structures. Increases in software capabilities as well as greater affordability have fostered augmented use of digital documentation technologies for architectural heritage applications. Literature establishes four prominent categories of digital documentation tools for preservation: laser scanning, photogrammetry, multimedia geographic information systems (GIS) and three-dimensional modeling. Thoroughly explored through published case studies, the documentation techniques for recording heritage are most often integrated. Scholarly literature does not provide a parallel comparison of the four technologies. A comparative analysis of the four techniques, as presented in this thesis, makes it possible for cities to understand the most applicable technique for their preservation objectives. The thesis analyzes four case studies that employ applications of the technologies: New Orleans Laser Scanning, University of Maryland Photogrammetry, Historic Columbia Maps Project and the Virtual Historic Savannah Project. Following this, the thesis undertakes a trial of each documentation technology – laser scanning, photogrammetry, multimedia GIS and three-dimensional modeling – utilizing a block on Church Street between Queen and Chalmers streets within the Charleston Historic District. The apparent outcomes of each of the four techniques are analyzed according to a series of parameters including: audience, application, efficacy in recordation, refinement, expertise required, manageability of the product, labor intensity and necessary institutional capacity. A concluding matrix quantifies the capability of each of the technologies in terms of the parameters. This method furnishes a parallel comparison of the techniques and their efficacy in architectural heritage documentation within mid-sized cities.

    Developing an interoperable cloud-based visualization workflow for 3D archaeological heritage data. The Palenque 3D Archaeological Atlas

    Get PDF
    In archaeology, 3D data has become ubiquitous, as researchers routinely capture high-resolution photogrammetry and LiDAR models and engage in laborious 3D analysis and reconstruction projects at every scale: artifacts, buildings, and entire sites. The raw data and processed 3D models are rarely shared, as their computational dependencies leave them unusable by other scholars. In this paper we outline a novel approach for cloud-based collaboration, visualization, analysis, contextualization, and archiving of multi-modal giga-resolution archaeological heritage 3D data. The Palenque 3D Archaeological Atlas builds on an open-source WebGL system that efficiently interlinks, merges, presents, and contextualizes the Big Data collected at the ancient Maya city of Palenque, Mexico, allowing researchers and stakeholders to visualize, access, share, measure, compare, annotate, and repurpose massive complex archaeological datasets from their web browsers.