
    Advanced 3D Rendering: Adaptive Caustic Maps with GPGPU

    Graphics researchers have long studied real-time caustic rendering. The state-of-the-art technique Adaptive Caustic Maps avoids densely sampling photons during a rasterization pass and instead emits photons adaptively in a deferred shading pass. In this project, we present a variation of adaptive caustic maps for real-time rendering of caustics. Our algorithm is conceptually similar to Adaptive Caustic Maps but has a different implementation based on the general-purpose computing pipeline provided by OpenGL 4.3. Our approach accelerates the photon splitting process using compute shaders and bypasses various other performance overheads, ultimately speeding up photon generation considerably.
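    As a rough illustration of the adaptive photon-splitting idea described above, the sketch below refines a coarse grid of candidate photons, splitting only those whose footprint still covers the refractive object, until a minimum footprint is reached. It is a serial Python sketch of the concept only; the project performs this refinement in OpenGL 4.3 compute shaders, and the names, thresholds, and toy hit test here are illustrative assumptions rather than the thesis' actual code.

```python
# Hypothetical CPU sketch of adaptive photon refinement; the project itself
# runs this kind of loop on the GPU with OpenGL 4.3 compute shaders.

def hits_refractive_object(u, v):
    """Stub importance test: does the photon at parametric position (u, v)
    strike the refractive caustic caster?  A real implementation would
    rasterize or ray trace the object; here it is a toy circular region."""
    return (u - 0.5) ** 2 + (v - 0.5) ** 2 < 0.25 ** 2

def refine(photons, min_footprint=1.0 / 256.0):
    """One refinement pass: photons that miss the object are culled, photons
    at the minimum footprint are emitted, and the rest are split into four
    children with half the footprint."""
    children, emitted = [], []
    for (u, v, size) in photons:
        if not hits_refractive_object(u, v):
            continue
        if size <= min_footprint:
            emitted.append((u, v, size))
            continue
        half = size / 2.0
        for du in (-0.25, 0.25):
            for dv in (-0.25, 0.25):
                children.append((u + du * size, v + dv * size, half))
    return children, emitted

# Start from a coarse 8x8 grid of candidate photons and refine until done.
photons = [((i + 0.5) / 8.0, (j + 0.5) / 8.0, 1.0 / 8.0)
           for i in range(8) for j in range(8)]
emitted = []
while photons:
    photons, done = refine(photons)
    emitted.extend(done)
print(len(emitted), "photons emitted after adaptive refinement")
```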

    Adaptive Spectral Mapping for Real-Time Dispersive Refraction

    Spectral rendering, or the synthesis of images by taking into account the wavelengths of light, allows effects that are impossible with other methods. One of these effects is dispersion, the phenomenon that creates a rainbow when white light shines through a prism. Spectral rendering has previously remained in the realm of off-line rendering (with a few exceptions) due to the extensive computation required to keep track of individual light wavelengths. Caustics, the focusing and de-focusing of light through a refractive medium, can be interpreted as a special case of dispersion in which all the wavelengths travel together. This thesis extends Adaptive Caustic Mapping (ACM), a previously proposed caustics mapping algorithm, to handle spectral dispersion. Because ACM can display caustics in real time, it is well suited to being extended to the more general case of dispersion. A method is presented that runs in screen space and is fast enough to display plausible dispersion phenomena at interactive frame rates.
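    The core of any dispersive extension like this is wavelength-dependent refraction: the spectrum is sampled at a handful of wavelengths and each sample is refracted with its own index of refraction, so the bands spread apart. The sketch below applies Snell's law with a Cauchy-style index model; the coefficients are only roughly BK7-glass-like, and the whole snippet is an illustrative aside rather than the thesis' algorithm or code.

```python
import math

def cauchy_ior(wavelength_nm, A=1.5046, B=4.2e3):
    """Cauchy approximation n(lambda) = A + B / lambda^2 with lambda in nm.
    The coefficients roughly mimic BK7 glass and are purely illustrative."""
    return A + B / (wavelength_nm ** 2)

def refracted_angle(theta_in, n):
    """Snell's law for a ray entering the medium from air (n_air = 1.0)."""
    return math.asin(math.sin(theta_in) / n)

theta_in = math.radians(30.0)
for wavelength in (450.0, 550.0, 650.0):  # blue, green, red spectral samples
    n = cauchy_ior(wavelength)
    theta_out = refracted_angle(theta_in, n)
    print(f"{wavelength:5.0f} nm: n = {n:.4f}, "
          f"refracted angle = {math.degrees(theta_out):.3f} deg")
```

    Shorter wavelengths see a higher index and bend more, which is exactly the spread that separates a caustic into a visible spectrum.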

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system renders complex scenes at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects such as optimisations, adapting existing offline methods to real-time constraints, and adding effects that were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real time.

    Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation that trades off some quality for a 2–3× improvement in rendering time. Tracing all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.

    Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast but introduce bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently.

    We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant to future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
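    The light-selection idea above can be illustrated with a flat (non-hierarchical) version of importance-proportional sampling: estimate an importance for every light, pick one with probability proportional to that importance, and divide its contribution by the selection probability so the estimate stays unbiased. The thesis organises this in a two-level hierarchy over the lights; the sketch below is only a flat baseline, and the field names and toy importance heuristic are assumptions for illustration.

```python
import random

def light_importance(light, shading_point):
    """Toy importance: intensity over squared distance.  A real light
    hierarchy would also account for orientation, bounding cones, etc."""
    dx = [l - p for l, p in zip(light["pos"], shading_point)]
    dist2 = sum(d * d for d in dx) + 1e-6
    return light["intensity"] / dist2

def sample_light(lights, shading_point):
    """Pick one light with probability proportional to its importance and
    return it together with that probability; dividing the light's
    contribution by this probability keeps the estimator unbiased."""
    weights = [light_importance(l, shading_point) for l in lights]
    total = sum(weights)
    u = random.random() * total
    acc = 0.0
    for light, w in zip(lights, weights):
        acc += w
        if u <= acc:
            return light, w / total
    return lights[-1], weights[-1] / total

lights = [{"pos": (0.0, 2.0, 0.0), "intensity": 10.0},
          {"pos": (5.0, 2.0, 0.0), "intensity": 100.0},
          {"pos": (0.1, 1.0, 0.1), "intensity": 1.0}]
light, prob = sample_light(lights, shading_point=(0.0, 0.0, 0.0))
print("picked", light, "with probability", round(prob, 3))
```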

    Visually pleasing real-time global illumination rendering for fully-dynamic scenes

    Global illumination (GI) rendering plays a crucial role in the photo-realistic rendering of virtual scenes. With the rapid development of graphics hardware, GI has become increasingly attractive even for real-time applications nowadays. However, the computation of physically-correct global illumination is time-consuming and cannot achieve real-time, or even interactive, performance. Although real-time GI is possible using solutions based on precomputation, such solutions cannot deal with fully-dynamic scenes. This dissertation focuses on solving these problems by introducing visually pleasing real-time global illumination rendering for fully-dynamic scenes. To this end, we develop a set of novel algorithms and techniques for rendering global illumination effects using the graphics hardware. All these algorithms not only achieve real-time or interactive performance but also produce quality comparable to previous work in off-line rendering. First, we present a novel implicit visibility technique to circumvent expensive visibility queries in hierarchical radiosity by evaluating the visibility implicitly. Thereafter, we focus on rendering visually plausible soft shadows, the most important GI effect caused by visibility determination. Based on pre-filtered shadow mapping theory, we successively propose two real-time soft shadow mapping methods: "convolution soft shadow mapping" (CSSM) and "variance soft shadow mapping" (VSSM). Furthermore, we successfully apply our CSSM method to computing the shadow effects of indirect lighting. Finally, to explore GI rendering in participating media, we investigate a novel technique to interactively render volume caustics in single-scattering participating media.
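    The variance-based soft shadow test that VSSM builds on can be sketched in a few lines: the shadow map stores the first two depth moments (pre-filtered over a kernel), and Chebyshev's one-sided inequality turns them into an upper bound on the lit fraction. The snippet below is the generic variance-shadow-map form of that test with made-up numbers, not the dissertation's exact CSSM/VSSM formulation.

```python
def chebyshev_visibility(mean_depth, mean_sq_depth, receiver_depth,
                         min_variance=1e-4):
    """Upper bound on the lit fraction used by variance-based (pre-filtered)
    shadow mapping.  mean_depth and mean_sq_depth are the first two depth
    moments averaged over the filter kernel."""
    if receiver_depth <= mean_depth:
        return 1.0  # receiver is closer than the average occluder: fully lit
    variance = max(mean_sq_depth - mean_depth * mean_depth, min_variance)
    d = receiver_depth - mean_depth
    return variance / (variance + d * d)  # Chebyshev's one-sided inequality

# A receiver well behind the average occluder depth gets a soft penumbra value.
print(chebyshev_visibility(mean_depth=0.40, mean_sq_depth=0.17,
                           receiver_depth=0.55))
```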

    Gravitationally Lensed HI with MeerKAT

    The SKA era is set to revolutionize our understanding of neutral hydrogen (HI) in individual galaxies out to redshifts of z~0.8, and in the z > 6 intergalactic medium through the detection and imaging of cosmic reionization. Direct HI number density constraints will, nonetheless, remain relatively weak out to cosmic noon (z~2) - the epoch of peak star formation and black hole accretion - and beyond. However, as was demonstrated from the 1990s with molecular line observations, this can be overcome by utilising the natural amplification afforded by strong gravitational lensing, which results in an effective increase in integration time by the square of the total magnification (μ²) for an unresolved source. Here we outline how a dedicated lensed HI survey will leverage MeerKAT's high sensitivity, frequency coverage, large instantaneous bandwidth, and high dynamic range imaging to enable a lasting legacy of high-redshift HI emission detections well into the SKA era. This survey will not only provide high-impact, rapid-turnaround MeerKAT science commissioning results, but also unveil Milky Way-like systems towards cosmic noon, which is not possible with any other SKA precursor or pathfinder. An ambitious lensed HI survey will therefore make a significant impact from MeerKAT commissioning all the way through to the full SKA era, and provide a more complete picture of the HI history of the Universe.
    Comment: 15 pages, 3 figures; accepted for publication in Proceedings of Science, workshop on "MeerKAT Science: On the Pathway to the SKA", held in Stellenbosch, 25-27 May 2016. Comments welcome.
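    The magnification argument in the abstract is simple arithmetic: for an unresolved source, point-source sensitivity scales with the square root of integration time, so a flux boost by a magnification μ is equivalent to integrating μ² times longer. The numbers below are purely illustrative, not survey parameters.

```python
# Illustrative arithmetic only; these are not actual survey values.
mu = 10.0                  # total lensing magnification of an unresolved source
t_unlensed_hours = 1000.0  # integration needed for a detection without lensing
t_lensed_hours = t_unlensed_hours / mu ** 2
print(f"magnification {mu:g} -> effective integration boost {mu**2:g}x; "
      f"{t_unlensed_hours:g} h of unlensed time becomes {t_lensed_hours:g} h")
```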

    A fast framework construction and visualization method for particle-based fluid

    Fast and vivid fluid simulation and visualization has been a challenging topic of study in recent years. Particle-based simulation methods have been widely used in art animation modeling and in the multimedia field. However, the requirements of huge numerical calculations and high-quality visualization usually result in poor computing efficiency. In this work, to address these issues, we present a fast framework for constructing and visualizing 3D fluids that parallelizes the fluid algorithm on a GPU computing framework and designs a direct surface visualization method for particle-based fluid data such as WCSPH, IISPH, and PCISPH. Considering that conventional polygonization or adaptive mesh methods may incur high computing costs and loss of detail, an improved particle-based method is provided for real-time fluid surface rendering that uses screen-space techniques and modern graphics hardware to achieve high-performance rendering while effectively preserving fluid details. Furthermore, to realize the fast construction of scenes, an optimized design of the parallel framework and interface is also discussed in our paper. Our method is easy to implement, and the results demonstrate a significant improvement in performance and efficiency when compared with several examples.
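    A representative building block of the particle-based solvers named above (WCSPH, IISPH, PCISPH) is per-particle density estimation with a smoothing kernel, which is exactly the kind of per-particle loop such a GPU framework parallelizes. The sketch below is a serial reference version using the standard poly6 kernel; the particle positions, mass, and smoothing radius are arbitrary illustrative values, not the paper's configuration.

```python
import math

def poly6(r, h):
    """Standard SPH poly6 smoothing kernel, commonly used by WCSPH/PCISPH-style
    solvers for density estimation."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def densities(positions, mass, h):
    """Per-particle density: sum kernel-weighted contributions of neighbours.
    A GPU solver evaluates this loop in parallel over all particles."""
    out = []
    for xi in positions:
        rho = 0.0
        for xj in positions:
            rho += mass * poly6(math.dist(xi, xj), h)
        out.append(rho)
    return out

# Tiny illustrative particle cloud (units and constants are arbitrary).
pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0), (0.0, 0.0, 0.1)]
print(densities(pts, mass=0.02, h=0.1))
```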

    Developing serious games for cultural heritage: a state-of-the-art review

    Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning or enhancing museum visits, has been less well considered. The state of the art in serious games technology is identical to the state of the art in entertainment games technology. As a result, the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality, and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as lying in the areas of communication, visual expression of information, collaboration mechanisms, interactivity, and entertainment. In this report, we focus on the state of the art with respect to the theories, methods, and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods, and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.

    Faster data structures and graphics hardware techniques for high performance rendering

    Computer-generated imagery is used in a wide range of disciplines, each with different requirements. As an example, real-time applications such as computer games have completely different restrictions and demands than offline rendering of feature films. A game has to render quickly using only limited resources, yet present visually adequate images. Film and visual effects rendering may not have strict time requirements but is still required to render efficiently on huge render systems with hundreds or even thousands of CPU cores. In real-time rendering, with limited time and hardware resources, it is always important to produce the highest rendering quality possible given the constraints. The first paper in this thesis presents an analytical hardware model together with a feedback system that guarantees the highest level of image quality subject to a limited time budget. As graphics processing units grow more powerful, power consumption becomes a critical issue. Smaller handheld devices have only a limited source of energy, their battery, and both small devices and high-end hardware are required to minimize energy consumption so as not to overheat. The second paper presents experiments and analysis of power usage across a range of real-time rendering algorithms and shadow algorithms executed on high-end, integrated, and handheld hardware. Computing accurate reflection and refraction effects has long been considered possible only in offline rendering, where time is not a constraint. The third paper presents a hybrid approach that combines the speed of real-time rendering algorithms and hardware with the quality of offline methods to render high-quality reflections and refractions in real time. The fourth and fifth papers present improvements in the construction time and quality of bounding volume hierarchies (BVHs). Building BVHs faster reduces rendering time in offline rendering and brings ray tracing a step closer to being a feasible real-time approach. Bonsai, presented in the fourth paper, constructs BVHs on CPUs faster than contemporary competing algorithms and produces BVHs of very high quality. Following Bonsai, the fifth paper presents an algorithm that refines BVH construction by allowing triangles to be split. Although splitting triangles increases construction time, it generally allows for higher-quality BVHs. The fifth paper introduces a triangle-splitting BVH construction approach that builds BVHs with quality on a par with an earlier high-quality splitting algorithm, while being several times faster to construct.
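    To make the BVH work concrete, the sketch below builds a minimal top-down BVH by splitting primitives at the median centroid along the longest axis. This is only a baseline illustration of what a BVH builder does; Bonsai and the triangle-splitting builder from the fourth and fifth papers use considerably more sophisticated construction, so the data structures and names here are assumptions for illustration, not those of the thesis.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AABB:
    lo: tuple
    hi: tuple

@dataclass
class Node:
    box: AABB
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    prims: Optional[List[int]] = None  # leaf nodes store primitive indices

def union(a: AABB, b: AABB) -> AABB:
    """Axis-aligned bounding box enclosing both inputs."""
    return AABB(tuple(map(min, a.lo, b.lo)), tuple(map(max, a.hi, b.hi)))

def centroid(b: AABB):
    return tuple((l + h) * 0.5 for l, h in zip(b.lo, b.hi))

def build(indices: List[int], boxes: List[AABB], leaf_size: int = 2) -> Node:
    """Top-down median-split construction: bound the primitives, stop at small
    leaves, otherwise sort by centroid along the widest axis and recurse."""
    box = boxes[indices[0]]
    for i in indices[1:]:
        box = union(box, boxes[i])
    if len(indices) <= leaf_size:
        return Node(box, prims=list(indices))
    extents = [h - l for l, h in zip(box.lo, box.hi)]
    axis = extents.index(max(extents))
    ordered = sorted(indices, key=lambda i: centroid(boxes[i])[axis])
    mid = len(ordered) // 2
    return Node(box, build(ordered[:mid], boxes), build(ordered[mid:], boxes))

# Eight unit boxes laid out along the x axis, purely for illustration.
boxes = [AABB((float(x), 0.0, 0.0), (float(x) + 1.0, 1.0, 1.0)) for x in range(8)]
root = build(list(range(len(boxes))), boxes)
print("root bounds:", root.box)
```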