90 research outputs found

    Hardware Acceleration of Progressive Refinement Radiosity using Nvidia RTX

    Full text link
    A vital component of photo-realistic image synthesis is the simulation of indirect diffuse reflections, which remain a quintessential hurdle that modern rendering engines struggle to overcome. Real-time applications typically pre-generate diffuse lighting information offline using radiosity to avoid performing costly computations at run-time. In this thesis we present a variant of progressive refinement radiosity that utilizes Nvidia's novel RTX technology to accelerate the process of form-factor computation without compromising visual fidelity. Through a modern implementation built on DirectX 12, we demonstrate that offloading radiosity's visibility component to RT cores significantly accelerates the lightmap generation process and potentially propels it into the domain of real-time.
    Comment: 114 pages
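
    For readers unfamiliar with the method being accelerated, the following is a minimal sketch of the classic progressive refinement (shooting) radiosity loop. The form_factor callback stands in for the visibility-dependent form-factor computation that the thesis offloads to RT cores; all names and the convergence threshold are illustrative assumptions, not details taken from the thesis.

        import numpy as np

        def progressive_refinement(areas, reflectance, emission, form_factor, max_steps=1000):
            # areas, reflectance, emission: per-patch arrays of length n.
            # form_factor(i, j) returns F_ij, whose visibility term is the
            # component offloaded to RT cores in the thesis.
            n = len(areas)
            radiosity = emission.copy()   # B_i: total energy radiated by patch i
            unshot = emission.copy()      # delta B_i: energy not yet distributed
            for _ in range(max_steps):
                i = int(np.argmax(unshot * areas))  # patch with most unshot energy
                if unshot[i] * areas[i] < 1e-6:
                    break                           # converged
                for j in range(n):
                    if j == i:
                        continue
                    # Reciprocity: F_ji = F_ij * A_i / A_j.
                    delta = reflectance[j] * unshot[i] * form_factor(i, j) * areas[i] / areas[j]
                    radiosity[j] += delta
                    unshot[j] += delta
                unshot[i] = 0.0                     # patch i has shot everything
            return radiosity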

    Object and Pattern Association for Robot Localization

    Get PDF

    Neural Scene Representations for 3D Reconstruction and Generative Modeling

    Get PDF
    With the increasing technologization of society, we use machines for more and more complex tasks, ranging from driving assistance to video conferencing to exploring planets. The scene representation, i.e., how sensory data is converted into a compact description of the environment, is a fundamental property enabling both the success and the safety of such systems. A promising approach for developing robust, adaptive, and powerful scene representations is learning-based systems that can adapt themselves from observations. Indeed, deep learning has revolutionized computer vision in recent years. In particular, better model architectures, large amounts of training data, and more powerful computing devices have enabled deep learning systems with unprecedented performance, and they now set the state of the art in many benchmarks, ranging from image classification to object detection to semantic segmentation. Despite these successes, the way these systems operate is still fundamentally different from human cognition. In particular, most approaches operate in the 2D domain, while humans understand that images are projections of the three-dimensional world. In addition, they often do not follow a compositional understanding of scenes, which is fundamental to human reasoning. In this thesis, our goal is to develop scene representations that enable autonomous agents to navigate and act robustly and safely in complex environments while reasoning compositionally in 3D. To this end, we first propose a novel output representation for deep learning-based 3D reconstruction and generative modeling. We find that, compared to previous representations, our neural field-based approach does not require 3D space to be discretized, achieving reconstructions at arbitrary resolution with a constant memory footprint. Next, we develop a differentiable rendering technique to infer these neural field-based 3D shape and texture representations from 2D observations, and find that this allows us to scale to more complex, real-world scenarios. Subsequently, we combine our novel 3D shape representation with a spatially and temporally continuous vector field to model non-rigid shapes in motion. We observe that our novel 4D representation can be used for various discriminative and generative tasks, ranging from 4D reconstruction to 4D interpolation to motion transfer. Finally, we develop an object-centric generative model that can generate 3D scenes in a compositional manner and allows for photorealistic renderings of the generated scenes. We find that our model not only improves image fidelity but also enables more controllable scene generation and image synthesis than prior work, while training only on raw, unposed image collections.
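
    The constant-memory, resolution-independent property claimed for the neural field representation can be illustrated with a minimal occupancy-field sketch in the style of PyTorch. The architecture, dimensions, and names below are illustrative assumptions, not the thesis's actual model.

        import torch
        import torch.nn as nn

        class OccupancyField(nn.Module):
            # A continuous map from 3D points (plus a per-shape latent code) to
            # occupancy probability. Because the function can be queried at any
            # coordinate, no voxel grid is stored: memory stays constant no
            # matter how finely the shape is later sampled.
            def __init__(self, latent_dim=128, hidden=256):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 1),
                )

            def forward(self, points, code):
                # points: (B, N, 3); code: (B, latent_dim), one code per shape.
                code = code.unsqueeze(1).expand(-1, points.shape[1], -1)
                logits = self.net(torch.cat([points, code], dim=-1))
                return torch.sigmoid(logits).squeeze(-1)  # occupancy in (0, 1)

        # A mesh at any desired resolution can then be extracted by evaluating
        # the field on a grid of that density and running marching cubes.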

    Proceedings - 32. Workshop Computational Intelligence: Berlin, 1. - 2. Dezember 2022

    Get PDF
    This conference volume contains the contributions to the 32nd workshop "Computational Intelligence" of the Technical Committee 5.14 of the VDI/VDE Society for Measurement and Automation Technology (GMA), held from 1 to 2 December 2022 in Berlin. The focus is on methods, applications, and tools for fuzzy systems, deep learning, and machine learning, as well as on the comparison of methods using industrial and benchmark problems.

    Advanced analytical diagnostics applied to human osteological remains

    Get PDF
    Ancient bone tissues, recovered from archaeological contexts and preserved within museums, represent a valuable source of information on the health, diet, and mobility of ancient populations, as well as on past demographics and environmental conditions, useful for researchers and academics. Following the development of modern omics technologies, osteological finds are increasingly in demand, and this has led to an increase in the analysis of ancient DNA (aDNA). Sampling methods for ancient DNA extraction are predominantly destructive and may compromise osteological finds for future analyses or for studies in other research fields. In addition to requiring invasive and destructive sampling, sequencing ancient DNA can be extremely expensive when the archaeological bone is poorly preserved as a result of taphonomic and diagenetic alterations. Given the high cost of the aDNA sequencing procedure, this research work conducted an analytical study using infrared spectroscopy (FTIR) to develop a reliable, fast, and inexpensive pre-screening method for determining the presence or absence of genetic molecules in an archaeological bone sample. Infrared spectroscopy is a useful tool because it is fast, minimally destructive, inexpensive, and sensitive to changes in the structural properties of the organic (collagen) and inorganic (bioapatite nanocrystals) components that make up bone. At the ultrastructural level, the organic and inorganic components of bone may form strong bonds with DNA, stabilizing it and determining its survival over time. The sensitivity and efficacy of new IR parameters were tested on samples ranging from fresh modern bones to extremely altered archaeological samples of different ages and origins. The diagenesis undergone by the bones was characterized taking into account changes in climatic, environmental, and burial conditions. The work was extended to examine the changes induced by diagenesis on the secondary structure of the preserved collagen, evaluating their effects on the bioapatite crystals. The results demonstrate that the IR parameter used in this research, which describes atomic order/disorder, is advantageous for monitoring minimal changes in the structure and chemical properties of bioapatite, as well as, indirectly, of collagen. This method may improve the selection of bone samples and the assessment of their suitability for specific analyses, e.g. genetic, palaeoproteomic, and stable isotope analyses, on the basis of the infrared spectra. A functional predictive model based on the infrared parameters is also proposed, in order to determine the most predictive parameter for the presence or absence of DNA and thereby reduce the cost of genetic analyses. The data obtained show that the quality and quantity of aDNA cannot be determined in this way, owing to the influence of local environmental factors.
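
    As an illustration of the kind of predictive model the abstract proposes, the sketch below fits a logistic regression from infrared parameters to aDNA presence/absence with scikit-learn. The feature names and toy values are invented for illustration only and are not data from the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # One row per bone sample; columns are IR parameters such as an atomic
        # order/disorder index, the infrared splitting factor (IRSF), and a
        # carbonate-to-phosphate ratio (hypothetical names and values).
        X = np.array([
            [0.82, 3.1, 0.31],
            [0.45, 4.2, 0.18],
            [0.77, 3.3, 0.29],
            [0.40, 4.5, 0.15],
        ])
        y = np.array([1, 0, 1, 0])  # 1 = aDNA detected, 0 = not detected

        model = LogisticRegression().fit(X, y)
        # Coefficient magnitudes hint at which parameter is most predictive.
        print(dict(zip(["order_disorder", "IRSF", "C_to_P"], model.coef_[0])))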

    Solid modelling for manufacturing: from Voelcker's boundary evaluation to discrete paradigms

    Get PDF
    Herb Voelcker and his research team laid the foundations of Solid Modelling, on which Computer-Aided Design is based. He founded the ambitious Production Automation Project, which included Constructive Solid Geometry (CSG) as its basic 3D geometric representation. CSG trees were compact and robust, saving memory space that was scarce in those times. But the main computational problem was Boundary Evaluation: the process of converting CSG trees to Boundary Representations (BReps) with explicit faces, edges, and vertices for manufacturing and visualization purposes. This paper presents some glimpses of the history and evolution of ideas that started with Herb Voelcker. We briefly describe the path from "localization and boundary evaluation" to "localization and printing", with many intermediate steps driven by hardware, software, and new mathematical tools: voxel and volume representations, triangle meshes, and many others, observing also that in some applications voxel models no longer require Boundary Evaluation. For this last case, we consider the current research challenges and discuss several avenues for further research. Project TIN2017-88515-C2-1-R funded by MCIN/AEI/10.13039/501100011033/FEDER "A way to make Europe".
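
    To make the contrast concrete, the sketch below represents a CSG tree over signed distance functions, a common modern stand-in for the original primitives. It shows why the tree is compact and point membership is a single tree walk, while explicit faces, edges, and vertices still require a separate Boundary Evaluation step; all names are illustrative.

        import math

        # Leaves are primitives; interior nodes are Boolean set operations.
        # With signed distances (negative = inside), the classic min/max rules
        # evaluate union, intersection, and difference.

        def sphere(cx, cy, cz, r):
            return lambda x, y, z: math.sqrt((x-cx)**2 + (y-cy)**2 + (z-cz)**2) - r

        def box(hx, hy, hz):  # axis-aligned box at the origin, given half-extents
            return lambda x, y, z: max(abs(x) - hx, abs(y) - hy, abs(z) - hz)

        def union(a, b):        return lambda x, y, z: min(a(x, y, z), b(x, y, z))
        def intersection(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
        def difference(a, b):   return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

        # A box with a spherical bite taken out of one corner.
        solid = difference(box(1, 1, 1), sphere(1, 1, 0, 0.8))
        print(solid(0, 0, 0) < 0)  # True: the origin lies inside the solid
        # No explicit faces/edges/vertices exist here; producing a BRep or a
        # triangle mesh still needs a Boundary Evaluation step.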

    Visualization and inspection of the geometry of particle packings

    Get PDF
    The aim of this dissertation is to develop efficient techniques for visualizing and inspecting the geometry of particle packings. Simulations of such packings are used, e.g., in the material sciences to predict properties of granular materials. To better understand and supervise the behavior of these simulations, not only the particles themselves but also special areas formed by the particles, which can indicate the progress of the simulation and the spatial distribution of hot spots, should be visualized. This should be possible at interactive frame rates even for large-scale packings with millions of particles. Moreover, since the simulation is conducted on the GPU, the visualization techniques should make full use of the data in GPU memory. To improve the performance of granular materials like concrete, considerable attention has been paid to the particle size distribution, which is the main determinant of the space filling rate and therefore affects two of the most important properties of concrete: its structural robustness and its durability. Given the particle size distribution, the space filling rate can be determined by computer simulations, which in practice are often superior to analytical approaches because of the irregularity of the particles and the wide range of their size distribution. One widely adopted simulation method is collective rearrangement, in which particles are first placed at random positions inside a container; overlaps between particles are then resolved by pushing overlapping particles away from each other into empty space in the container. By cleverly adjusting the size of the container as the simulation progresses, the collective rearrangement method can produce a rather dense particle packing. However, it is very hard to fine-tune or debug the whole simulation process without an interactive visualization tool. Starting from the well-established rasterization-based method for rendering spheres, this dissertation first provides new fast and pixel-accurate methods for visualizing the overlaps and free spaces between spherical particles inside a container. The rasterization-based techniques perform well for small-scale particle packings of up to about one million spheres, but for larger packings problems arise from their linear runtime and memory consumption. To address this problem, new methods based on ray tracing are provided, along with two new kinds of bounding volume hierarchies (BVHs) to accelerate the ray tracing process: the first can reuse the existing simulation data structure, and the second is more memory efficient. Both BVHs build on the idea of the loose octree and are the first of their kind to take the size of primitives into account for interactive ray tracing with frequently updated acceleration structures. Moreover, the visualization techniques presented in this dissertation can also be adapted to compute properties such as the volumes of specific areas. All these visualization techniques are then extended to non-spherical particles, where a non-spherical particle is approximated by a rigid system of spheres so that the existing sphere-based simulation can be reused. To this end, a new GPU-based method is presented for efficiently filling a non-spherical particle with polydisperse, possibly overlapping spheres, so that a particle can be filled with fewer spheres without sacrificing the space filling rate. This eases both simulation and visualization. Based on the approaches presented in this dissertation, more sophisticated algorithms can be developed to visualize large-scale non-spherical particle mixtures more efficiently. Furthermore, the hardware ray tracing of more recent graphics cards could be used in the future instead of the software ray tracing employed in this dissertation. The new techniques can also serve as a basis for the interactive visualization of other particle-based simulations in which special areas such as free spaces or overlaps between particles are of interest.
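
    The collective rearrangement step described above can be sketched in a few lines: spheres are placed at random, then overlapping pairs are repeatedly pushed apart. The brute-force O(n^2) pair test below is precisely what acceleration structures such as the dissertation's loose-octree BVHs replace in practice; container resizing and the GPU implementation are omitted, and all names are illustrative.

        import numpy as np

        def collective_rearrangement(radii, box=1.0, steps=500, rate=0.5):
            # radii: array of shape (n,) with one radius per sphere.
            rng = np.random.default_rng(0)
            n = len(radii)
            centers = rng.uniform(0.0, box, size=(n, 3))
            for _ in range(steps):
                moved = False
                for i in range(n):
                    for j in range(i + 1, n):
                        d = centers[j] - centers[i]
                        dist = np.linalg.norm(d)
                        overlap = radii[i] + radii[j] - dist
                        if overlap > 1e-9:
                            # Push the pair apart along the line of centers.
                            step = rate * overlap * d / max(dist, 1e-12)
                            centers[i] -= 0.5 * step
                            centers[j] += 0.5 * step
                            moved = True
                # Keep every sphere fully inside the container.
                np.clip(centers, radii[:, None], box - radii[:, None], out=centers)
                if not moved:
                    break  # the packing is overlap-free
            return centers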