90 research outputs found

    Realtime ray tracing and interactive global illumination

    One of the most sought-after goals in computer graphics is to generate "realism in real time", i.e. the generation of realistic-looking images at realtime frame rates. Today, virtually all approaches to realtime rendering use graphics hardware, which is based almost exclusively on triangle rasterization. Unfortunately, though this technology has seen tremendous progress over the last few years, for many applications it is currently reaching its limits in model complexity, supported features, and achievable realism. An alternative to triangle rasterization is the ray tracing algorithm, which is well known for its higher flexibility, its generally higher achievable realism, and its superior scalability in both model size and compute power. However, ray tracing is also computationally demanding and thus has so far been used almost exclusively for high-quality offline rendering tasks. This dissertation focuses on the question of why ray tracing is likely to soon play a larger role in interactive applications, and how this scenario can be reached. To this end, we discuss the RTRT/OpenRT realtime ray tracing system, a software-based ray tracing system that achieves interactive to realtime frame rates on today's commodity CPUs. In particular, we discuss the overall system design, the efficient implementation of the core ray tracing algorithms, techniques for handling dynamic scenes, an efficient parallelization framework, and an OpenGL-like low-level API. Taken together, these techniques form a complete realtime rendering engine that supports massively complex scenes, highly realistic and physically correct shading, and even physically based lighting simulation at interactive rates. In the last part of this thesis we then discuss the implications and potential of realtime ray tracing for global illumination, and how the availability of this new technology can be leveraged to finally achieve interactive global illumination: the physically correct simulation of light transport at interactive rates.
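    As an illustration of the kind of kernel such a realtime ray tracing core must execute millions of times per frame, below is a minimal sketch of the well-known Möller-Trumbore ray/triangle intersection test. This is a generic textbook formulation, not OpenRT's actual implementation.

```cpp
// Minimal Moeller-Trumbore ray/triangle intersection (illustrative only).
#include <array>
#include <cmath>
#include <cstdio>
#include <optional>

using Vec3 = std::array<float, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static float dot(const Vec3& a, const Vec3& b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Returns the ray parameter t of the hit point, or nothing on a miss.
std::optional<float> intersect(const Vec3& org, const Vec3& dir,
                               const Vec3& v0, const Vec3& v1, const Vec3& v2) {
    const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    const Vec3 p = cross(dir, e2);
    const float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt;   // ray parallel to triangle
    const float inv = 1.0f / det;
    const Vec3 s = sub(org, v0);
    const float u = dot(s, p) * inv;                   // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    const Vec3 q = cross(s, e1);
    const float v = dot(dir, q) * inv;                 // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    const float t = dot(e2, q) * inv;                  // distance along the ray
    if (t <= 0.0f) return std::nullopt;                // intersection behind origin
    return t;
}

int main() {
    // Unit triangle in the z = 1 plane, ray shot from the origin along +z.
    auto t = intersect({0.2f, 0.2f, 0.0f}, {0, 0, 1}, {0, 0, 1}, {1, 0, 1}, {0, 1, 1});
    if (t) std::printf("hit at t = %f\n", *t);         // prints: hit at t = 1.000000
}
```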

    Spatial Database Support for Virtual Engineering

    The development, design, manufacturing and maintenance of modern engineering products is a very expensive and complex task. Shorter product cycles and a greater diversity of models are becoming decisive competitive factors in the hard-fought automobile and aircraft market. In order to support engineers in creating complex products under time pressure, systems are required which answer collision and similarity queries effectively and efficiently. In order to achieve industrial strength, the required specialized functionality has to be integrated into fully-fledged database systems, so that fundamental services of these systems can be fully reused, including transactions, concurrency control and recovery. This thesis aims at the development of theoretically sound and practically realizable algorithms which effectively and efficiently detect colliding and similar complex spatial objects. After a short introductory Part I, we look in Part II at different spatial index structures and discuss their integrability into object-relational database systems. Based on this discussion, we present two generic approaches for accelerating collision queries. The first approach exploits available statistical information in order to accelerate the query process. The second approach is based on a cost-based decomposition of complex spatial objects. In a broad experimental evaluation based on real-world test data sets, we demonstrate the usefulness of the presented techniques, which allow interactive query response times even for large data sets of complex objects. In Part III of the thesis, we discuss several similarity models for spatial objects. We show by means of a new evaluation method that data-partitioning similarity models yield more meaningful results than space-partitioning similarity models. We introduce a very effective similarity model which is based on a new paradigm in similarity search, namely the use of vector-set-represented objects. In order to guarantee efficient query processing, suitable filters are introduced for accelerating similarity queries on complex spatial objects. Based on clustering and the introduced similarity models, we present an industrial prototype which helps the user to navigate through massive data sets.
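    To illustrate the filter-and-refine idea behind such collision queries, here is a hedged sketch in which decomposed parts are first compared via axis-aligned bounding boxes before any exact geometric test. The names and the quadratic candidate loop are simplifications; the thesis integrates the filter step into object-relational database index structures.

```cpp
// Filter step of a collision query: complex parts are decomposed into
// axis-aligned bounding boxes, and only overlapping box pairs are passed
// on to the expensive exact refinement step. Illustrative sketch only.
#include <cstdio>
#include <utility>
#include <vector>

struct AABB { float lo[3], hi[3]; };

bool overlaps(const AABB& a, const AABB& b) {
    for (int d = 0; d < 3; ++d)
        if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d]) return false;
    return true;
}

// Report candidate box pairs; a real system hands each pair to an exact
// triangle/triangle test and uses a spatial index (e.g. an R-tree inside
// the database) instead of this quadratic loop.
std::vector<std::pair<int, int>> filterCandidates(const std::vector<AABB>& partA,
                                                  const std::vector<AABB>& partB) {
    std::vector<std::pair<int, int>> candidates;
    for (size_t i = 0; i < partA.size(); ++i)
        for (size_t j = 0; j < partB.size(); ++j)
            if (overlaps(partA[i], partB[j])) candidates.emplace_back(i, j);
    return candidates;
}

int main() {
    std::vector<AABB> a = {{{0, 0, 0}, {1, 1, 1}}, {{5, 5, 5}, {6, 6, 6}}};
    std::vector<AABB> b = {{{0.5f, 0.5f, 0.5f}, {2, 2, 2}}};
    std::printf("%zu candidate pair(s)\n", filterCandidates(a, b).size());  // 1
}
```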

    Appearance Preserving Rendering of Out-of-Core Polygon and NURBS Models

    In Computer Aided Design (CAD), trimmed NURBS surfaces are widely used due to their flexibility. For rendering and simulation, however, piecewise linear representations of these objects are required. A relatively new field in CAD is the analysis of long-term strain tests. After such a test the object is scanned with a 3D laser scanner for further processing on a PC. In all these areas of CAD, the number of primitives as well as their complexity has grown constantly in recent years. This growth far exceeds the increase in processor speed and memory size, creating the need for fast out-of-core algorithms. This thesis describes a processing pipeline from the input data, in the form of triangular or trimmed NURBS models, to the interactive rendering of these models at high visual quality. After discussing the motivation for this work and introducing basic concepts of complex polygon and NURBS models, the second part of this thesis starts with a review of existing simplification and tessellation algorithms. Additionally, an improved stitching algorithm to generate a consistent model after tessellation of a trimmed NURBS model is presented. Since surfaces need to be modified interactively during the design phase, a novel trimmed NURBS rendering algorithm is presented. This algorithm removes the bottleneck of generating and transmitting a new tessellation to the graphics card after each modification of a surface by evaluating and trimming the surface on the GPU. To achieve high visual quality, the appearance of a surface can be preserved using texture mapping. Therefore, a texture mapping algorithm for trimmed NURBS surfaces is presented. To reduce the memory requirements of the textures, the algorithm is modified to generate compressed normal maps that preserve the shading of the original surface. Since texturing is only possible when a parametric mapping of the surface, which requires additional memory, is available, a new simplification and tessellation error measure is introduced that preserves the appearance of the original surface by controlling the deviation of normal vectors. The preservation of normals and possibly other surface attributes allows interactive visualization for quality control applications (e.g. isophotes and reflection lines). In the last part, out-of-core techniques for processing and rendering of gigabyte-sized polygonal and trimmed NURBS models are presented. Then the modifications necessary to support streaming of simplified geometry from a central server are discussed, and finally an LOD selection algorithm to support interactive rendering of hard and soft shadows is described.
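    A tessellator for such models repeatedly evaluates points on the spline. As a small self-contained illustration (not the thesis' GPU-based algorithm), the sketch below runs the standard de Boor recurrence on a B-spline curve; a full NURBS adds rational weights via homogeneous coordinates, and trimming happens separately in the parameter domain.

```cpp
// Minimal de Boor evaluation of a B-spline curve (illustrative sketch).
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Evaluate at parameter u a curve of degree p with knot vector t and control
// points c; the index k must satisfy t[k] <= u < t[k+1].
Pt deBoor(int k, double u, int p, const std::vector<double>& t, const std::vector<Pt>& c) {
    std::vector<Pt> d(p + 1);
    for (int j = 0; j <= p; ++j) d[j] = c[j + k - p];
    for (int r = 1; r <= p; ++r)
        for (int j = p; j >= r; --j) {
            double a = (u - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p]);
            d[j] = {(1 - a) * d[j - 1].x + a * d[j].x,
                    (1 - a) * d[j - 1].y + a * d[j].y};
        }
    return d[p];
}

int main() {
    // Quadratic curve with a clamped knot vector; u = 0.5 lies in knot span k = 2.
    std::vector<double> knots = {0, 0, 0, 1, 1, 1};
    std::vector<Pt> ctrl = {{0, 0}, {1, 2}, {2, 0}};
    Pt q = deBoor(2, 0.5, 2, knots, ctrl);
    std::printf("C(0.5) = (%g, %g)\n", q.x, q.y);  // midpoint of the parabola: (1, 1)
}
```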

    Management and Visualisation of Non-linear History of Polygonal 3D Models

    The research presented in this thesis concerns the problems of maintenance and revision control of large-scale three-dimensional (3D) models over the Internet. As the models grow in size and the authoring tools grow in complexity, standard approaches to collaborative asset development become impractical. The prevalent paradigm of sharing files on a file system poses serious risks regarding, among other things, the consistency and concurrency of multi-user 3D editing. Although modifications might be tracked manually using naming conventions or automatically in a version control system (VCS), understanding the provenance of a large 3D dataset is hard because revision metadata is not associated with the underlying scene structures. Some tools and protocols enable seamless synchronisation of file and directory changes in remote locations. However, the existing web-based technologies do not yet fully exploit modern design patterns for access to and management of shared resources online. Therefore, four distinct but highly interconnected conceptual tools are explored. The first is the organisation of 3D assets within recent document-oriented No Structured Query Language (NoSQL) databases. These "schemaless" databases, unlike their relational counterparts, do not represent data in rigid table structures. Instead, they rely on polymorphic documents composed of key-value pairs that are much better suited to the diverse nature of 3D assets. Hence, a domain-specific non-linear revision control system, 3D Repo, is built around a NoSQL database to enable asynchronous editing similar to traditional VCSs. The second concept is that of visual 3D differencing and merging. The accompanying 3D Diff tool supports interactive conflict resolution at the level of scene graph nodes, which are de facto the delta changes stored in the repository. The third is the utilisation of the HyperText Transfer Protocol (HTTP) for the purposes of 3D data management. The XML3DRepo daemon application exposes the contents of the repository and the version control logic in a Representational State Transfer (REST) style of architecture. At the same time, it demonstrates the effects of various 3D encoding strategies on file sizes and download times in modern web browsers. The fourth and final concept is the reverse-engineering of an editing history. Even if the models are being version controlled, the extracted provenance is limited to additions, deletions and modifications. The 3D Timeline tool, therefore, infers a plausible history of common modelling operations such as duplications, transformations, etc. Given a collection of 3D models, it estimates a part-based correspondence and visualises it in a temporal flow. The prototype tools developed as part of the research were evaluated in pilot user studies that suggest they are usable by the end users and well suited to their respective tasks. Together, the results constitute a novel framework that demonstrates the feasibility of domain-specific 3D version control.
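    To make the node-level delta idea concrete, the following hedged sketch treats each revision as a map from a stable node ID to a key-value document, mirroring the schemaless storage, and computes the added, deleted and modified nodes. The field names and ID scheme are hypothetical, not 3D Repo's actual schema.

```cpp
// Node-level diff of two scene revisions stored as key-value documents.
#include <cstdio>
#include <map>
#include <string>

using Node = std::map<std::string, std::string>;   // key-value document
using Revision = std::map<std::string, Node>;      // node ID -> document

void diff(const Revision& oldRev, const Revision& newRev) {
    for (const auto& [id, node] : newRev) {
        auto it = oldRev.find(id);
        if (it == oldRev.end())      std::printf("added    %s\n", id.c_str());
        else if (it->second != node) std::printf("modified %s\n", id.c_str());
    }
    for (const auto& [id, node] : oldRev)
        if (!newRev.count(id))       std::printf("deleted  %s\n", id.c_str());
}

int main() {
    Revision r1 = {{"mesh01", {{"type", "mesh"}, {"vertices", "1024"}}},
                   {"xform07", {{"type", "transform"}, {"matrix", "identity"}}}};
    Revision r2 = {{"mesh01", {{"type", "mesh"}, {"vertices", "2048"}}},  // edited
                   {"light02", {{"type", "light"}}}};                     // new node
    diff(r1, r2);  // modified mesh01, added light02, deleted xform07
}
```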

    New Concepts for Virtual Testbeds: Data Mining Algorithms for Blackbox Optimization based on Wait-Free Concurrency and Generative Simulation

    Virtual testbeds have emerged as a key technology for improving and streamlining complex engineering processes by delivering long-term simulation and assessment of complex designs in virtual environments. In contrast to existing simulation technology, virtual testbeds focus on long-term, physically-based simulation of the overall design in its (virtual) environment, instead of focusing only on isolated, specific parts for short periods of time. This technology has the major advantage that costly testing, prototyping, and assessment in real-life environments are replaced by cost-efficient simulation in virtual worlds, enabling comprehensive and long-term analysis of designs. For this purpose, engineering models and their requirements are abstracted into software simulation models and objectives which are executed in virtual assessments. Simulation models are used to predict complex, real systems which can further be subject to random influences. These predictions are used to examine the effects of individual configuration alternatives without actually realizing them and causing possible negative effects on the real system. Virtual testbeds further offer engineers the opportunity to immersively and naturally interact with their simulation model in these virtual assessments. This gives engineers a greater and more comprehensive understanding of possible design flaws early on in the design process, because they can directly assess their design in the virtual environment, based on the simulation objectives. The fact that virtual testbeds enable these realtime interactive virtual assessments makes their underlying software infrastructure very complex. One major challenge is to minimize the development time of virtual testbeds in order to integrate them efficiently into the overall engineering process. Usually, this can be achieved by minimizing the underlying concurrency of the testbed and by simplifying its software architecture. However, this may degrade the highly concurrent and asynchronous behavior that is usually required for immersive and natural virtual interaction. A major goal of virtual testbeds in the engineering process is to find a set of optimal configurations of the simulation model which maximizes all simulation objectives for the specified virtual assessments. Once such a set has been computed, engineers can interactively explore it in the virtual environment. The main challenge is that sophisticated simulation models and their configuration are subject to a multiobjective optimization problem, which usually cannot be solved manually by engineers or simulation analysts in feasible time. This is further aggravated because the relationships between simulation model configurations and simulation objectives are mostly unknown, leading to what are known as blackbox simulations. In this thesis, I propose novel data mining algorithms for computing Pareto-optimal simulation model configurations, based on an approximation of the feasible design space, for deterministic and stochastic blackbox simulations in virtual testbeds, in order to achieve the goal stated above. These novel data mining algorithms lead to an automatic knowledge discovery process that does not need any supervision of its data analysis and assessment for multiobjective optimization problems of simulation model configurations. This achieves the previously stated goal of computing optimal configurations of simulation models for long-term simulations and assessments.
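    The Pareto-dominance filter at the core of such a multiobjective search can be sketched as follows (maximization assumed; the thesis' data mining, design-space approximation and stochastic handling go well beyond this):

```cpp
// Keep only simulation model configurations not dominated by any other.
#include <cstdio>
#include <vector>

using Objectives = std::vector<double>;  // one score per simulation objective

// a dominates b if it is at least as good everywhere and strictly better once.
bool dominates(const Objectives& a, const Objectives& b) {
    bool strictlyBetter = false;
    for (size_t i = 0; i < a.size(); ++i) {
        if (a[i] < b[i]) return false;
        if (a[i] > b[i]) strictlyBetter = true;
    }
    return strictlyBetter;
}

std::vector<size_t> paretoFront(const std::vector<Objectives>& evals) {
    std::vector<size_t> front;
    for (size_t i = 0; i < evals.size(); ++i) {
        bool dominated = false;
        for (size_t j = 0; j < evals.size() && !dominated; ++j)
            dominated = (j != i) && dominates(evals[j], evals[i]);
        if (!dominated) front.push_back(i);
    }
    return front;
}

int main() {
    // Three configurations scored on two objectives (higher is better).
    std::vector<Objectives> evals = {{1.0, 0.9}, {0.5, 0.4}, {0.3, 1.2}};
    for (size_t i : paretoFront(evals))
        std::printf("config %zu is Pareto optimal\n", i);  // configs 0 and 2
}
```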
Furthermore, I propose two complementary solutions for efficiently integrating massively-parallel virtual testbeds into engineering processes. First, I propose a novel multiversion wait-free data and concurrency management based on hash maps. These wait-free hash maps do not require any standard locking mechanisms and enable low-latency data generation, management and distribution for massively-parallel applications. Second, I propose novel concepts for efficiently generating the code of the above wait-free data and concurrency management for arbitrary massively-parallel simulation applications of virtual testbeds. My generative simulation concept combines a state-of-the-art realtime interactive system design pattern for high maintainability with template code generation based on domain-specific modelling. This concept is able to generate massively-parallel simulations and, at the same time, model-check their internal dataflow for possible interface errors. This generative concept overcomes the challenge of efficiently integrating virtual testbeds into engineering processes. These contributions enable for the first time a powerful collaboration between simulation, optimization, visualization and data analysis for novel virtual testbed applications, while also overcoming the presented challenges and achieving the stated goals.
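    To give a flavor of lock-free hash map writes, the sketch below claims slots with a single atomic compare-and-swap instead of a mutex. A genuinely wait-free, multiversion design as proposed in the thesis needs considerably more machinery (versioned values, bounded retries); this only illustrates the underlying idea.

```cpp
// Open-addressing hash map where slots are claimed atomically (lock-free sketch).
#include <atomic>
#include <cstdint>
#include <cstdio>

constexpr size_t CAP = 1024;      // power of two, fixed size for the sketch
constexpr uint64_t EMPTY = 0;     // reserved key meaning "free slot"

struct Slot { std::atomic<uint64_t> key{EMPTY}; std::atomic<uint64_t> value{0}; };
Slot table[CAP];

bool insert(uint64_t key, uint64_t value) {   // key must be nonzero
    for (size_t i = 0, h = key; i < CAP; ++i, ++h) {
        Slot& s = table[h & (CAP - 1)];
        uint64_t expected = EMPTY;
        // Claim a free slot with one CAS, or land on a slot holding our key.
        if (s.key.compare_exchange_strong(expected, key) || expected == key) {
            s.value.store(value, std::memory_order_release);
            return true;
        }                                      // else: collision, probe onward
    }
    return false;                              // table full
}

int main() {
    insert(42, 7);                             // safe to call from many threads
    for (size_t i = 0; i < CAP; ++i)
        if (table[i].key.load() == 42)
            std::printf("42 -> %llu\n", (unsigned long long)table[i].value.load());
}
```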

    Mobile three-dimensional city maps

    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources, supporting observation and decision-making processes. Map design, or the art-science of cartography, has led to a simplification of the environment: the naturally three-dimensional environment has been abstracted into a two-dimensional representation, populated with simple geometrical shapes and symbols. This abstract representation, however, requires a map-reading ability. Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and instead be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry true virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions and providing answers by creating a functional system. The case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management and out-of-core rendering. Then, the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can run on current mobile phones, and that rendering rates are excellent on devices with 3D hardware support. Such 3D maps can also be transferred and rendered on the fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on the interaction methods: the potentially intuitive representation does not imply, for example, faster navigation than with a professional 2D street map. In addition, the physical interface limits the usability.
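    A toy sketch of two of the combined optimizations, per-object culling against the far plane and distance-based level-of-detail selection, is given below; the thresholds and the bounding-sphere test are hypothetical stand-ins for the thesis' actual visibility and LOD machinery.

```cpp
// Distance-based culling and LOD selection for city-scale objects (sketch).
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dist(const Vec3& a, const Vec3& b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

// Pick a discrete LOD from the distance to the viewer; -1 culls the object.
int selectLod(const Vec3& eye, const Vec3& center, float radius, float farPlane) {
    float d = dist(eye, center) - radius;   // distance to the bounding sphere
    if (d > farPlane) return -1;            // beyond the far plane: cull entirely
    if (d < 50.0f)  return 0;               // full-detail building model
    if (d < 200.0f) return 1;               // simplified mesh
    return 2;                               // coarse block / textured impostor
}

int main() {
    Vec3 eye{0, 1.7f, 0};
    std::printf("near building -> LOD %d\n", selectLod(eye, {30, 0, 10}, 5, 1000));  // 0
    std::printf("far building  -> LOD %d\n", selectLod(eye, {800, 0, 300}, 5, 1000)); // 2
}
```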

    PolyVR - A Virtual Reality Authoring Framework for Engineering Applications

    Virtual reality is a fantastic place, free of constraints and rich in possibilities. For engineers it is the perfect place to experience science and technology, yet the infrastructure to make virtual reality accessible, especially for engineering applications, has been missing. This thesis describes the creation of a software environment that enables simpler development of virtual reality applications and their deployment on immersive hardware setups. Virtual engineering, the use of virtual environments for design reviews during the product development process, is still used extremely rarely, especially by small and medium-sized enterprises. The main reasons are no longer the high costs of professional virtual reality hardware, but the lack of automated virtualization workflows and the high maintenance and software development costs. An important aspect of automating virtualization is the integration of intelligence into artificial environments. Ontologies are the foundation of human understanding and intelligence. Categorizing our universe into concepts, properties and rules is a fundamental step in processes such as observation, learning and knowledge. This thesis aims to take a step towards a broader use of virtual reality applications in all areas of science and engineering. The approach is to build a virtual reality authoring tool, a software package that simplifies the creation of virtual worlds and the deployment of these worlds on advanced immersive hardware environments such as distributed visualization systems. A further goal of this work is to enable the intuitive authoring of semantic elements in virtual worlds. This should revolutionize the creation of virtual content and the possibilities for interaction. Intelligent immersive environments are the key to fostering learning and training in virtual worlds, to planning and monitoring processes, and to paving the way for entirely new interaction paradigms.
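    As a purely hypothetical illustration of such semantic authoring (not PolyVR's actual API), the sketch below tags scene entities with ontology concepts and typed properties, so that a virtual world can be queried by meaning rather than by geometry:

```cpp
// Scene entities annotated with ontology concepts (hypothetical structures).
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Concept {                          // an ontology class, e.g. "Machine"
    std::string name;
    std::vector<std::string> properties;  // allowed property names
};

struct Entity {                           // a scene object tagged with a concept
    std::string sceneNode;
    std::string concept;
    std::map<std::string, std::string> values;
};

int main() {
    Concept machine{"Machine", {"manufacturer", "powerKW"}};
    std::vector<Entity> world = {
        {"node_017", "Machine", {{"manufacturer", "ACME"}, {"powerKW", "15"}}},
        {"node_042", "Conveyor", {{"lengthM", "12"}}}};
    // Semantic query: find every entity of concept "Machine".
    for (const auto& e : world)
        if (e.concept == machine.name)
            std::printf("%s is a Machine made by %s\n",
                        e.sceneNode.c_str(), e.values.at("manufacturer").c_str());
}
```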