26 research outputs found

    RTSDF: Generating Signed Distance Fields in Real Time for Soft Shadow Rendering

    Full text link
    Signed Distance Fields (SDFs) for surface representation are commonly generated offline and then loaded into interactive applications such as games. Because they are not updated every frame, they provide only a rigid surface representation. Methods exist to generate them quickly on the GPU, but their efficiency is limited at high resolutions. This paper presents a novel technique that combines jump flooding and ray tracing to generate approximate SDFs in real time for soft shadow approximation, achieving prominent shadow penumbras while maintaining interactive frame rates.
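The jump flooding step the abstract refers to can be illustrated with a minimal CPU sketch (the paper's method runs on the GPU and additionally uses ray tracing; the function names here are illustrative, and only an unsigned distance field is computed):

```python
import math

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def jump_flood_distance(width, height, seeds):
    """Approximate nearest-seed field via the Jump Flooding Algorithm:
    passes with strides N/2, N/4, ..., 1, where each cell inspects nine
    neighbours at the current stride and keeps the closest seed so far."""
    # nearest[y][x] holds the best seed coordinates found so far
    nearest = [[None] * width for _ in range(height)]
    for (sx, sy) in seeds:
        nearest[sy][sx] = (sx, sy)

    step = max(width, height) // 2
    while step >= 1:
        for y in range(height):
            for x in range(width):
                best = nearest[y][x]
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < width and 0 <= ny < height:
                            cand = nearest[ny][nx]
                            if cand is not None and (
                                best is None
                                or dist2((x, y), cand) < dist2((x, y), best)
                            ):
                                best = cand
                nearest[y][x] = best
        step //= 2

    # convert nearest-seed pointers into an (unsigned) distance field
    return [[math.sqrt(dist2((x, y), nearest[y][x]))
             for x in range(width)] for y in range(height)]
```

Each pass halves the stride, so the whole field converges in O(log N) passes, which is what makes the approach attractive for per-frame regeneration.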

    Frameless Rendering

    Get PDF
    The aim of this work is to create a simple ray tracer using the IPP library that employs the frameless rendering technique. The first part of the work focuses on the ray tracing method. The next part analyzes the frameless rendering technique and its adaptive variant, with a focus on adaptive sampling. The third part describes the IPP library and the implementation of a simple ray tracer using it. The last part evaluates the speed and rendering quality of the implemented system.
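The core idea of frameless rendering described above can be sketched in a few lines: instead of redrawing whole frames, individual pixels are re-shaded in random order so the image continuously tracks scene changes. This is a minimal illustrative sketch, not the thesis's IPP implementation:

```python
import random

def frameless_update(framebuffer, shade, samples_per_tick,
                     rng=random.Random(0)):
    """One tick of basic frameless rendering: re-shade a random subset
    of pixels rather than the whole frame. The adaptive variant would
    bias the pixel choice toward regions of recent change."""
    h = len(framebuffer)
    w = len(framebuffer[0])
    for _ in range(samples_per_tick):
        x, y = rng.randrange(w), rng.randrange(h)
        framebuffer[y][x] = shade(x, y)  # trace one primary ray
    return framebuffer
```

Because updates are spread over time, moving objects appear with noise-like artifacts rather than whole-frame latency, which is the trade-off the thesis evaluates.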

    Doctor of Philosophy

    Get PDF
    Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable; the spatial quality parameter is selected by the designer based on the anticipated graphics workload. Interactive ray tracing is one example: the algorithm is often selected for its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm to perform fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed toward spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem that adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
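The surplus-allocation idea can be made concrete with a small sketch. All names and the power-law split are illustrative assumptions, not the dissertation's actual controller:

```python
def allocate_samples(capacity_sps, base_quality_spp, pixels, min_fps, bias):
    """Split a renderer's sampling capacity between spatial quality
    (samples per pixel) and temporal quality (frames per second).

    capacity_sps     -- samples the system can shade per second
    base_quality_spp -- minimum samples/pixel for acceptable fidelity
    pixels           -- framebuffer size in pixels
    min_fps          -- lowest acceptable frame rate
    bias             -- 0.0 = spend surplus on fps, 1.0 = on spp
    """
    # cost of the minimum-fidelity image at the minimum frame rate
    required_sps = base_quality_spp * pixels * min_fps
    if capacity_sps <= required_sps:
        # no surplus: hold the spatial floor and accept a lower rate
        return base_quality_spp, capacity_sps / (base_quality_spp * pixels)
    surplus = capacity_sps / required_sps        # > 1.0
    spp = base_quality_spp * surplus ** bias          # spatial share
    fps = min_fps * surplus ** (1.0 - bias)           # temporal share
    return spp, fps
```

By construction `spp * pixels * fps == capacity_sps`, so the full sampling budget is always spent; `bias` is the fine-grained knob the dissertation's control problem would tune per frame.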

    Hybrid image-/model-based gaze-contingent rendering

    Full text link

    Foveated real-time ray tracing for head-mounted displays

    Get PDF
    Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly with more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable. As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182 × 1464 per eye within the VSync limits without perceived visual differences.
    We would like to thank NVIDIA for providing us with two Quadro K6000 graphics cards, as well as the Intel Visual Computing Institute, the European Union (EU) for co-funding as part of the Dreamspace and FIWARE projects, and the German Federal Ministry for Economic Affairs and Energy (BMWi) for funding the MATEDIS ZIM project (grant no. KF2644109).
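The gaze-contingent sample scheduling described above amounts to weighting pixels by eccentricity from the tracked gaze point. This is a minimal illustrative sketch of such a priority function, with made-up parameter names, not the paper's actual scheduler:

```python
import math

def foveal_sample_weight(px, py, gaze, inner_radius, falloff):
    """Per-pixel resampling priority for foveated rendering: full
    density inside the foveal region around the tracked gaze point,
    smoothly decaying with eccentricity outside it."""
    ecc = math.hypot(px - gaze[0], py - gaze[1])
    if ecc <= inner_radius:
        return 1.0
    return math.exp(-falloff * (ecc - inner_radius))
```

Pixels flagged by reprojection errors or perceptually critical features would have their weight raised before the per-frame sample budget is distributed.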

    Automatische Erstellung von Objekthierarchien zum Ray Tracing von dynamischen Szenen

    Get PDF
    Ray tracing acceleration techniques most often consider only static scenes, neglecting the processing time needed to build the acceleration data structure. With the development of interactive ray tracing systems, this reconstruction time becomes a serious bottleneck for dynamic scenes. In this paper, we describe two strategies for efficient updating of bounding volume hierarchies (BVHs) in scenarios with arbitrarily moving objects. The first exploits spatial locality in the object distribution for faster reinsertion of moved objects. The second allows insertion and deletion of objects in almost constant time by using a hybrid system that combines benefits of both spatial subdivision and BVHs. Depending on the number of moving objects, our algorithms adjust a dynamic BVH six to one hundred times faster than a complete rebuild of the hierarchy would take, while rendering times with the resulting hierarchy remain almost untouched.
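The baseline the paper improves on is a simple bottom-up refit of the hierarchy's bounds after objects move. A minimal 2D sketch (illustrative only; the paper's strategies go further with local reinsertion and a hybrid grid/BVH system):

```python
class BVHNode:
    def __init__(self, aabb, left=None, right=None):
        # aabb = (min_x, min_y, max_x, max_y)
        self.aabb = aabb
        self.left, self.right = left, right

def refit(node):
    """Recompute bounds bottom-up after objects moved: much cheaper
    than a full rebuild, at the cost of gradually loosening the
    hierarchy as objects drift from their original positions."""
    if node.left is None:            # leaf: aabb already tracks its object
        return node.aabb
    la, ra = refit(node.left), refit(node.right)
    node.aabb = (min(la[0], ra[0]), min(la[1], ra[1]),
                 max(la[2], ra[2]), max(la[3], ra[3]))
    return node.aabb
```

Refitting keeps the tree topology fixed, which is why reinsertion-based updates like those described above can yield tighter hierarchies for the same update budget.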

    High Performance Stereoscopic Ray Tracing on the GPU

    Get PDF