
    View generated database

    This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of the surfaces of the object or scene and any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of work models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.

    A review on deep learning techniques for 3D sensed data classification

    Over the past decade, deep learning has driven progress in 2D image understanding. Despite these advancements, techniques for automatic understanding of 3D sensed data, such as point clouds, are comparatively immature. However, with a range of important applications from indoor robotics navigation to national-scale remote sensing, there is high demand for algorithms that can learn to automatically understand and classify 3D sensed data. In this paper we review the current state-of-the-art deep learning architectures for processing unstructured Euclidean data. We begin by addressing the background concepts and traditional methodologies. We then review the current main approaches, including RGB-D, multi-view, volumetric and fully end-to-end architecture designs. Datasets for each category are documented and explained. Finally, we give a detailed discussion about the future of deep learning for 3D sensed data, using the literature to justify the areas where future research would be most valuable. Comment: 25 pages, 9 figures. Review paper.
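    As a rough, hypothetical sketch of the fully end-to-end family of architectures discussed in the review (the class name and layer sizes below are invented for illustration, not taken from the paper), a PointNet-style classifier reduces to a per-point MLP shared across all points followed by an order-invariant max pooling, which makes the prediction insensitive to the ordering of the unstructured input points:

        import torch
        import torch.nn as nn

        class TinyPointClassifier(nn.Module):
            """Illustrative PointNet-style classifier for unordered point sets."""
            def __init__(self, num_classes=10):
                super().__init__()
                # 1x1 convolutions act as an MLP shared across all points
                self.point_mlp = nn.Sequential(
                    nn.Conv1d(3, 64, 1), nn.ReLU(),
                    nn.Conv1d(64, 128, 1), nn.ReLU(),
                )
                self.head = nn.Linear(128, num_classes)

            def forward(self, xyz):               # xyz: (batch, 3, num_points)
                features = self.point_mlp(xyz)    # (batch, 128, num_points)
                pooled, _ = features.max(dim=2)   # symmetric pooling over points
                return self.head(pooled)          # (batch, num_classes)

        # logits = TinyPointClassifier()(torch.randn(4, 3, 1024))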

    Filtering Surfaces in Surveys with Multiple Overlapping: Sagrada Familia

    Heritage surveying with a Terrestrial Laser Scanner (TLS) allows the geometry of a building to be documented and a 3D point cloud to be constituted as a record of its conservation state. When complex buildings with architectural and sculptural elements are scanned, much of the captured data is not valid, because of instrumental error and elements foreign to the building. For that reason, the point cloud must be cleaned in order to obtain a final model from which different products can be created, such as plans, technical documents and 3D models for printing. For this cleaning process, taking Antoni Gaudi's Sagrada Familia (Fachada del Nacimiento) as the case study, we propose a methodology based on applying a set of filters, considering that more than 3000 scan positions were acquired, 750 of which belong to the same facade, with positions that overlap heavily. In the same zone of the building there is therefore data scanned from multiple positions in different ways, so various kinds of error can be found there, such as noise from boundary effects, glass reflections and mobile objects, as well as scans acquired from a scissor lift, which had previously been validated. Different point cloud filtering processes have been studied, both on the point cloud itself (position by position and as a unified cloud) and by meshing it. Every process requires knowledge of how the scan was acquired, and the type of error that dominates in each zone is analysed. Each filtering option therefore fulfils the requirements established after this analysis.
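    The article develops its own overlap-aware filtering methodology; purely as a hypothetical illustration of the simplest filter type involved (noise removal on a unified cloud), the snippet below applies the statistical outlier filter from the Open3D library. The file names are placeholders and the parameters are not taken from the paper:

        import open3d as o3d

        # Load the merged cloud and thin redundant overlap data (placeholder file names)
        pcd = o3d.io.read_point_cloud("facade_merged.ply")
        pcd = pcd.voxel_down_sample(voxel_size=0.01)

        # Drop points whose mean distance to their 20 nearest neighbours deviates
        # by more than 2 standard deviations from the global average
        filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20,
                                                            std_ratio=2.0)
        o3d.io.write_point_cloud("facade_filtered.ply", filtered)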

    Insights into Rockfall from Constant 4D Monitoring

    Current understanding of the nature of rockfall and their controls stems from the capabilities of slope monitoring. These capabilities are fundamentally limited by the frequency and resolution of data that can be captured. Various assumptions have therefore arisen, including that the mechanisms that underlie rockfall are instantaneous. Clustering of rockfall across rock faces and sequencing through time have been observed, sometimes with an increase in pre-failure deformation and pre-failure rockfall activity prior to catastrophic failure. An inherent uncertainty, however, lies in whether the behaviour of rockfall monitored over much shorter time intervals (Tint) is consistent with that previously monitored at monthly intervals, including observed failure mechanisms, their response to external drivers, and pre-failure deformation. To address the limitations of previous studies on this topic, 8,987 terrestrial laser scans have been acquired over 10 months from continuous near-real-time monitoring of an actively failing coastal rock slope (Tint = 0.5 h). A workflow has been devised that automatically resolves depth changes at the surface to 0.03 m. This workflow filters points with high positional uncertainty and detects change in 3D, with both approaches tailored to natural rock faces, which commonly feature sharp edges and partially occluded areas. Analysis of the resulting rockfall inventory, which includes > 180,000 detachments, shows that the proportion of rockfall < 0.1 m³ increases with more frequent surveys for Tint < ca. 100 h, but this trend does not continue for surface comparison over longer time intervals. Therefore, and advantageously, less frequent surveys will derive the same rockfall magnitude-frequency distribution if captured at ca. 100 h intervals as compared to one month or even longer intervals. The shape and size of detachments show that they are shallower and smaller than the observable rock mass structure, but appear to be limited in size and extent by jointing. Previously explored relationships between rockfall timing and environmental and marine conditions do not appear to apply to this inventory; however, significant relationships between rockfall and rainfall, temperature gradient and tides are demonstrated over short timescales. Pre-failure deformation and rockfall activity are observed in the footprint of incipient rockfall. Rockfall activity occurs predominantly within the same ca. 100 h timescale observed in the size-distribution analysis, and accelerated deformation is common for the largest rockfall during the final 2 h before block detachment. This study provides insights into the nature and development of rockfall during the period prior to detachment, and the controls upon it. This holds considerable implications for our understanding of rockfall and the improvement of future rockfall monitoring.
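    As a minimal sketch of how such an inventory is typically summarised (using synthetic volumes, not the study's data), the snippet below builds a cumulative magnitude-frequency curve for rockfall volumes and fits the power-law exponent in log-log space, which is the quantity compared across survey intervals Tint:

        import numpy as np

        rng = np.random.default_rng(0)
        volumes = rng.pareto(1.2, size=5000) * 1e-3      # synthetic rockfall volumes in m^3

        v_sorted = np.sort(volumes)[::-1]
        cum_freq = np.arange(1, v_sorted.size + 1)       # number of events with volume >= V

        # Fit log10(N >= V) = a + b * log10(V); the slope b approximates the exponent
        b, a = np.polyfit(np.log10(v_sorted), np.log10(cum_freq), 1)
        print(f"fitted magnitude-frequency exponent: {b:.2f}")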

    Visualization and inspection of the geometry of particle packings

    The aim of this dissertation is to find efficient techniques for visualizing and inspecting the geometry of particle packings. Simulations of such packings are used e.g. in material sciences to predict properties of granular materials. To better understand and supervise the behavior of these simulations, not only the particles themselves but also special areas formed by the particles, which can show the progress of the simulation and the spatial distribution of hot spots, should be visualized. This should be possible at a frame rate that allows interaction, even for large-scale packings with millions of particles. Moreover, since the simulation is conducted on the GPU, the visualization techniques should make full use of the data in GPU memory. To improve the performance of granular materials like concrete, considerable attention has been paid to the particle size distribution, which is the main determinant of the space filling rate and therefore affects two of the most important properties of concrete: structural robustness and durability. Given the particle size distribution, the space filling rate can be determined by computer simulations, which are often superior to analytical approaches due to irregularities of the particles and the wide range of size distributions in practice. One of the widely adopted simulation methods is collective rearrangement, in which particles are first placed at random positions inside a container; overlaps between particles are later resolved by pushing overlapping particles away from each other to fill empty space in the container. By cleverly adjusting the size of the container according to the progress of the simulation, the collective rearrangement method can produce a fairly dense particle packing in the end. However, it is very hard to fine-tune or debug the whole simulation process without an interactive visualization tool. Starting from the well-established rasterization-based method for rendering spheres, this dissertation first provides new fast and pixel-accurate methods to visualize the overlaps and free spaces between spherical particles inside a container. The rasterization-based techniques perform well for small-scale particle packings of up to about one million spheres, but deteriorate for larger packings due to large memory requirements that are hard to estimate correctly in advance.
    To address this problem, new methods based on ray tracing are provided, along with two new kinds of bounding volume hierarchies (BVHs) to accelerate the ray tracing process: the first can reuse the existing data structure for the simulation, and the second is more memory-efficient. Both BVHs build on the idea of the loose octree and are the first of their kind to consider the size of primitives for interactive ray tracing with frequently updated acceleration structures. Moreover, the visualization techniques provided in this dissertation can also be adapted to calculate properties such as the volumes of specific areas. All these visualization techniques are then extended to non-spherical particles, where a non-spherical particle is approximated by a rigid system of spheres so that the existing sphere-based simulation can be reused. To this end, a new GPU-based method is presented to efficiently fill a non-spherical particle with polydisperse, possibly overlapping spheres, so that a particle can be filled with fewer spheres without sacrificing the space filling rate. This eases both simulation and visualization. Based on the approaches presented in this dissertation, more sophisticated algorithms can be developed to visualize large-scale non-spherical particle mixtures more efficiently. Furthermore, the hardware ray tracing of more recent graphics cards could be exploited instead of the software ray tracing used in this dissertation. The new techniques can also become the basis for interactively visualizing other particle-based simulations where special areas such as free spaces or overlaps between particles are of interest.
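    The collective rearrangement step described above can be sketched in a few lines (a naive, hypothetical CPU version for illustration only; the dissertation's GPU simulation and loose-octree BVHs exist precisely to avoid this brute-force neighbour search):

        import numpy as np

        def resolve_overlaps_once(centers, radii):
            """One rearrangement pass: push each overlapping sphere pair apart.

            centers: (n, 3) array of sphere centres, radii: (n,) array of radii.
            """
            centers = centers.copy()
            n = len(radii)
            for i in range(n):
                for j in range(i + 1, n):
                    d = centers[j] - centers[i]
                    dist = np.linalg.norm(d)
                    overlap = radii[i] + radii[j] - dist
                    if overlap > 0 and dist > 1e-12:
                        shift = 0.5 * overlap * d / dist
                        centers[i] -= shift          # move both spheres apart equally
                        centers[j] += shift
            return centers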

    Compression of dynamic polygonal meshes with constant and variable connectivity

    This work was supported by projects 20-02154S and 17-07690S of the Czech Science Foundation and SGS-2019-016 of the Czech Ministry of Education. Polygonal mesh sequences with variable connectivity are incredibly versatile dynamic surface representations, as they allow a surface to change topology or details to suddenly appear or disappear. This, however, comes at the cost of large storage size. Current compression methods exploit the temporal coherence of general data inefficiently, because the correspondences between two subsequent frames might not be bijective. We study the current state of the art, including the special class of mesh sequences whose connectivity is static. We also examine the state of the art in the related field of dynamic point cloud sequences. Further, we point out parts of the compression pipeline with potential for improvement. We present the progress we have already made in designing a temporal model that captures the temporal coherence of a sequence, and point out directions for future research.
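    As an illustrative sketch only (not the authors' temporal model), the simplest way to exploit temporal coherence in a sequence with constant connectivity is to store quantized per-vertex deltas between consecutive frames instead of raw positions; quantizing against the decoder's reconstruction avoids drift:

        import numpy as np

        def encode_deltas(frames, step=1e-4):
            """frames: list of (V, 3) vertex arrays sharing the same vertex order."""
            base = frames[0].copy()
            recon = base.copy()
            deltas = []
            for f in frames[1:]:
                d = np.round((f - recon) / step).astype(np.int32)
                deltas.append(d)                     # entropy-code these in practice
                recon = recon + d * step             # track decoder state to avoid drift
            return base, deltas, step

        def decode_deltas(base, deltas, step):
            frames = [base.copy()]
            for d in deltas:
                frames.append(frames[-1] + d * step)
            return frames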

    The development of GIS to aid conservation of architectural and archaeological sites using digital terrestrial photogrammetry

    This thesis is concerned with the creation and implementation of an Architectural/Archaeological Information System (A/AIS) by integrating digital terrestrial photogrammetry and CAD facilities as applicable to the requirements of architects, archaeologists and civil engineers. Architects and archaeologists are involved with the measurement, analysis and recording of historical buildings and monuments. Hard-copy photogrammetric methods supporting such analyses and documentation are well established, but the requirement to interpret, classify and quantitatively process photographs can be time consuming. These methods also have limited application, cannot be re-examined if the information desired is not directly presented, and make extraction of 3-D coordinates much more challenging than in a digital photogrammetric environment. The A/AIS has been developed to the point that it can provide a precise and reliable technique for non-contact 3-D measurement. The speed of on-line data acquisition, high degree of automation and adaptability have made this technique a powerful measurement tool with a great number of applications for architectural or archaeological sites. The designed tool (A/AIS) has been successful in producing the expected results in the tasks examined for St. Avit Senieur Abbey in France, Strome Castle in Scotland, the Gilbert Scott Building of Glasgow University, the Hunter Memorial at Glasgow University and the Anobanini Rock in Iran. The goals of this research were: to extract 3-D coordinates of architectural/archaeological features using digital photogrammetric digitising; to identify an appropriate 3-D model; to import 3-D points/lines into an appropriate 3-D modeller; to generate 3-D objects; to design and implement a prototype architectural information system using the above 3-D model; and to compare this approach to traditional approaches of measuring and archiving the required information. The focus of this research is an assessment of the contribution of digital photogrammetry, GIS and CAD to the surveying, conservation, recording and documentation of historical buildings and cultural monuments, including digital rectification and restitution, feature extraction for the creation of 3-D digital models, and computer visualisation.

    Point cloud data compression

    The rapid growth in the popularity of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) experiences has resulted in an exponential surge of three-dimensional data. Point clouds have emerged as a commonly employed representation for capturing and visualizing three-dimensional data in these environments. Consequently, there has been a substantial research effort dedicated to developing efficient compression algorithms for point cloud data. This Master's thesis aims to investigate the current state-of-the-art lossless point cloud geometry compression techniques, explore some of these techniques in more detail, and then propose improvements and/or extensions to enhance them, as well as provide directions for future work on this topic.
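    As a rough, codec-agnostic sketch of the octree occupancy coding that underlies many lossless point cloud geometry compressors (the function and parameters below are illustrative, not taken from the thesis), the geometry is encoded by recursively splitting the bounding cube and emitting one 8-bit occupancy mask per non-empty node:

        import numpy as np

        def octree_occupancy(points, origin, size, depth, stream):
            """points: (n, 3) integer coordinates inside the cube [origin, origin + size)."""
            if depth == 0 or len(points) == 0:
                return
            half = size // 2
            mask = 0
            children = []
            for child in range(8):
                offset = origin + half * np.array([(child >> 2) & 1,
                                                   (child >> 1) & 1,
                                                   child & 1])
                inside = np.all((points >= offset) & (points < offset + half), axis=1)
                if inside.any():
                    mask |= 1 << child
                    children.append((points[inside], offset))
            stream.append(mask)              # in a real codec this byte is entropy-coded
            for pts, off in children:
                octree_occupancy(pts, off, half, depth - 1, stream)

        # occupancy = []; octree_occupancy(pts, np.zeros(3, int), 1024, 10, occupancy)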