7 research outputs found

    Glossy Probe Reprojection for Interactive Global Illumination

    Recent rendering advances dramatically reduce the cost of global illumination. But even with hardware acceleration, complex light paths with multiple glossy interactions are still expensive; our new algorithm stores these paths in precomputed light probes and reprojects them at runtime to provide interactivity. Combined with traditional light maps for diffuse lighting, our approach interactively renders all light paths in static scenes with opaque objects. Naively reprojecting probes with glossy lighting is memory-intensive, requires efficient access to the correctly reflected radiance, and exhibits problems at occlusion boundaries in glossy reflections. Our solution addresses all these issues. To minimize memory, we introduce an adaptive light probe parameterization that allocates increased resolution for shinier surfaces and regions of higher geometric complexity. To efficiently sample glossy paths, our novel gathering algorithm reprojects probe texels in a view-dependent manner using efficient reflection estimation and a fast rasterization-based search. Naive probe reprojection often sharpens glossy reflections at occlusion boundaries due to changes in parallax. To avoid this, we split the convolution induced by the BRDF into two steps: we precompute probes using a lower material roughness and apply an adaptive bilateral filter at runtime to reproduce the original surface roughness. Combining these elements, our algorithm interactively renders complex scenes while fitting within the memory, bandwidth, and computation constraints of current hardware.
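    The two-step convolution described above can be made concrete with a small sketch: the probe is prefiltered at a reduced roughness, and a runtime bilateral filter widens the reflection lobe up to the target roughness while a depth range term keeps occlusion boundaries sharp. This is a minimal illustration under assumed names (probe, depth) and an assumed roughness-to-sigma mapping, not the paper's implementation.

```python
import numpy as np

def adaptive_bilateral(probe, depth, target_rough, prefilter_rough, radius=3):
    """Widen glossy reflections from prefilter_rough up to target_rough.

    probe: (H, W, 3) prefiltered radiance, depth: (H, W).
    """
    # Only the residual roughness not baked into the probe is filtered here;
    # the roughness-to-sigma mapping is an illustrative assumption.
    sigma_s = max(1e-3, (target_rough - prefilter_rough) * radius)
    sigma_d = 0.1  # depth range term: suppresses blur across occlusion edges
    h, w = probe.shape[:2]
    out = np.zeros_like(probe)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_d = np.exp(-((depth[y0:y1, x0:x1] - depth[y, x]) ** 2)
                         / (2 * sigma_d ** 2))
            wgt = w_s * w_d
            out[y, x] = ((wgt[..., None] * probe[y0:y1, x0:x1]).sum(axis=(0, 1))
                         / wgt.sum())
    return out
```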

    Design and Development of a Realistic 3D Gameworld in order to Develop a Horror Video Game in Unreal Engine 4

    Final degree project (Treball final de Grau) in Video Game Design and Development. Code: VJ1241. Academic year: 2020/2021. This document is the final report for a bachelor's thesis in Video Game Design and Development. The work consists of the design and development of a realistic 3D gameworld for a horror video game in Unreal Engine 4. The project centers on an artificial intelligence designed to behave differently depending on the player's decisions and experiences throughout the game, operating with automatic learning. The game will be developed in Unreal Engine 4 for PC. The player is placed inside a closed house and must rely on their skills to keep the artificial intelligence alive. Personally, I focus on explaining the part that I worked on: the development of the environment, 3D art, materials, textures, and lighting.

    Material aging for game environments

    We propose software and a workflow to create 3D model imperfections and aging effects while adding very little overhead to the rendering time of the models. Specifically, we implement a tool that bakes all these effects into the physically based rendering (PBR) textures of the input model.
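    As a hedged sketch of the idea, the aging operators can be applied directly to the PBR texture maps so the renderer's per-frame cost is unchanged. The mask source (e.g. ambient occlusion or curvature) and the blend weights below are illustrative assumptions, not the tool's exact operators.

```python
import numpy as np

def age_pbr_textures(albedo, roughness, mask,
                     dirt_color=(0.23, 0.18, 0.12),
                     dirt_strength=0.6, wear_strength=0.35):
    """Blend dirt into the albedo and roughen worn areas.

    albedo: (H, W, 3), roughness: (H, W), mask: (H, W) in [0, 1].
    """
    m = np.clip(mask, 0.0, 1.0)[..., None]
    # Darken/tint the albedo toward a dirt color where the mask is strong.
    aged_albedo = (albedo * (1.0 - dirt_strength * m)
                   + np.asarray(dirt_color) * dirt_strength * m)
    # Worn or dirty regions scatter more; raise roughness there.
    aged_rough = np.clip(roughness + wear_strength * m[..., 0], 0.0, 1.0)
    return aged_albedo, aged_rough
```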

    Faster data structures and graphics hardware techniques for high performance rendering

    Computer generated imagery is used in a wide range of disciplines, each with different requirements. As an example, real-time applications such as computer games have completely different restrictions and demands than offline rendering of feature films. A game has to render quickly using only limited resources, yet present visually adequate images. Film and visual effects rendering may not have strict time requirements but is still required to render efficiently, utilizing huge render systems with hundreds or even thousands of CPU cores. In real-time rendering, with limited time and hardware resources, it is always important to produce the highest rendering quality possible within the given constraints. The first paper in this thesis presents an analytical hardware model together with a feedback system that guarantees the highest level of image quality subject to a limited time budget. As graphics processing units grow more powerful, power consumption becomes a critical issue. Smaller handheld devices have only a limited source of energy, their battery, and both small devices and high-end hardware must minimize energy consumption to avoid overheating. The second paper presents experiments and analysis that consider power usage across a range of real-time rendering algorithms and shadow algorithms executed on high-end, integrated, and handheld hardware. Computing accurate reflection and refraction effects has long been considered feasible only in offline rendering, where time is not a constraint. The third paper presents a hybrid approach that combines the speed of real-time rendering algorithms and hardware with the quality of offline methods to render high-quality reflections and refractions in real time. The fourth and fifth papers present improvements in the construction time and quality of bounding volume hierarchies (BVHs). Building BVHs faster reduces rendering time in offline rendering and brings ray tracing a step closer to a feasible real-time approach. Bonsai, presented in the fourth paper, constructs BVHs on CPUs faster than contemporary competing algorithms and produces BVHs of very high quality. Following Bonsai, the fifth paper presents an algorithm that refines BVH construction by allowing triangles to be split. Although splitting triangles increases construction time, it generally allows for higher-quality BVHs. The fifth paper introduces a triangle-splitting BVH construction approach that builds BVHs with quality on a par with an earlier high-quality splitting algorithm, yet is several times faster in construction time.
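    To make the BVH construction-time/quality trade-off behind the last two papers concrete, here is a minimal top-down builder sketch: each node stores an axis-aligned bounding box, leaves store triangle indices, and interior nodes split at the median of the widest axis. Bonsai and the triangle-splitting builder are far more elaborate; this is only an assumed baseline, not either algorithm.

```python
import numpy as np

def build_bvh(centroids, indices=None, leaf_size=4):
    """Top-down median-split BVH over triangle centroids (N, 3)."""
    if indices is None:
        indices = np.arange(len(centroids))
    lo = centroids[indices].min(axis=0)
    hi = centroids[indices].max(axis=0)
    if len(indices) <= leaf_size:
        return {"aabb": (lo, hi), "tris": indices}   # leaf node
    axis = int(np.argmax(hi - lo))                   # split the widest axis
    order = indices[np.argsort(centroids[indices, axis])]
    mid = len(order) // 2                            # object-median split
    return {"aabb": (lo, hi),
            "left": build_bvh(centroids, order[:mid], leaf_size),
            "right": build_bvh(centroids, order[mid:], leaf_size)}

# Example: build over 100 random triangle centroids.
root = build_bvh(np.random.rand(100, 3))
```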

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore convey an impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces in an ad-hoc fashion. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: shading virtual surfaces under real natural illumination, relighting real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result not only presents a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which contemporary publications have not demonstrated until now.
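    For context, the Differential Rendering baseline that the dissertation replaces can be sketched in a few lines: the augmented frame is the camera image plus the difference between two synthetic renders of the reconstructed scene, with and without the virtual object. Needing two light-transport solutions per frame is exactly the computational downside mentioned above. The array names below are assumptions.

```python
import numpy as np

def differential_render(camera_img, render_with_obj, render_without_obj, mask):
    """Classic differential-rendering composite.

    All images are (H, W, 3); mask is (H, W), 1 where the virtual object
    directly covers the pixel.
    """
    # The render difference captures shadows and indirect light the virtual
    # object adds to (or removes from) the real surfaces.
    delta = render_with_obj - render_without_obj
    composite = camera_img + delta            # relight the real surfaces
    # Pixels covered by the object take the synthetic result directly.
    return np.where(mask[..., None] > 0, render_with_obj, composite)
```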

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU

    Immersive Automotive Stereo Vision

    Recently, the first in-car augmented reality (AR) system was introduced to the market. It features various virtual 3D objects drawn on top of a 2D live video feed, which is displayed on a central display inside the vehicle. Our goal in this thesis is to develop an approach that allows us not only to augment a 2D video, but to reconstruct a 3D scene of the vehicle's surrounding driving environment. This opens up various possibilities, including displaying this 3D scan on a head-mounted display (HMD) as part of a Mixed Reality (MR) application, which requires a convincing and immersive visualization of the surroundings at high rendering speed. To accomplish this task, we limit ourselves to a single front-mounted stereo camera on a vehicle and fuse stereo measurements temporally. First, we analyze the effects of temporal stereo fusion thoroughly. We estimate the theoretically achievable accuracy and highlight the limitations of temporal fusion and of our assumptions. We also derive a 1D extended information filter and a 3D extended Kalman filter to fuse measurements temporally, which substantially improves the depth error in our simulations. We integrate these results into a novel dense 3D reconstruction framework, which models each point as a probabilistic filter. Projecting 3D points into the newest image allows us to fuse measurements temporally after a clustering stage, which also gives us the ability to handle multiple depth layers per pixel. The 3D reconstruction framework is point-based, but it also has a mesh-based extension. For that, we leverage a novel depth-image triangulation method to render the scene on the GPU using only RGB and depth images as input. We exploit the nature of urban scenery and the vehicle's movement by first identifying and then rendering pixels of the previous stereo camera frame that are no longer seen in the current frame. These pixels at the previous image border form a tube over multiple frames, which we call a tube mesh, and have the highest possible observable resolution. We are also able to offload intensive filter propagation computations completely to the GPU. Furthermore, we demonstrate a way to create a dense, dynamic virtual sky background from the same camera to accompany our reconstructed 3D scene. We evaluate our method against other approaches in an extensive benchmark on the popular KITTI visual odometry dataset and on the synthetic SYNTHIA dataset. Besides stereo error metrics in image space, we also compare how the approaches perform with respect to the available depth structure in the reference depth image and in their ability to predict the appearance of the scene from different viewing angles on SYNTHIA. Our method shows significant improvements in terms of disparity and view prediction errors. We also achieve a rendering speed high enough to fulfill the framerate requirements of modern HMDs. Finally, we highlight challenges in the evaluation, perform ablation studies of our framework, and conclude with a qualitative showcase on different datasets, including a discussion of failure cases.
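    In the spirit of the 1D filter derived in the thesis, a scalar Kalman-style update that fuses per-pixel inverse-depth measurements over time can be sketched as follows; the actual derivation, state parameterization, and noise models in the thesis differ, and the numbers below are illustrative.

```python
def fuse_inverse_depth(mu, var, z, z_var):
    """One measurement update; state and measurement are inverse depths."""
    k = var / (var + z_var)          # Kalman gain
    mu_new = mu + k * (z - mu)       # fused inverse-depth estimate
    var_new = (1.0 - k) * var        # variance shrinks with each fusion
    return mu_new, var_new

# Example: fusing three noisy stereo measurements of the same point.
mu, var = 0.5, 1.0                   # weak prior on inverse depth
for z in (0.48, 0.52, 0.50):
    mu, var = fuse_inverse_depth(mu, var, z, z_var=0.05)
print(mu, var)                       # estimate converges, variance drops
```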