
    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim at capturing the light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
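
    As background for the many-light formulation above, the sketch below shows the classic virtual point light (VPL) gathering step that such methods build on: summing direct lighting from a set of VPLs with a clamped geometry term. It is a minimal illustration in Python under assumed data layout and clamping constant, not the paper's reformulated estimator.

    import math

    def gather_vpls(shading_point, normal, albedo, vpls, clamp=0.05):
        """Sum clamped diffuse contributions from a set of virtual point lights.

        Each VPL is a dict with 'position', 'normal' (3-tuples) and 'flux'
        (a scalar). This is the standard instant-radiosity style gather with
        a clamped geometry term, not the paper's estimator.
        """
        radiance = 0.0
        for vpl in vpls:
            d = [vpl['position'][i] - shading_point[i] for i in range(3)]
            dist2 = sum(c * c for c in d)
            if dist2 == 0.0:
                continue
            dist = math.sqrt(dist2)
            w = [c / dist for c in d]                     # direction towards the VPL
            cos_r = max(0.0, sum(normal[i] * w[i] for i in range(3)))
            cos_l = max(0.0, sum(-vpl['normal'][i] * w[i] for i in range(3)))
            # Clamping the geometry term avoids the spiky artifacts VPLs are known for.
            g = min(cos_r * cos_l / dist2, 1.0 / clamp)
            radiance += (albedo / math.pi) * g * vpl['flux']
        return radiance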

    Towards interactive global illumination effects via sequential Monte Carlo adaptation

    This paper presents a novel method that effectively combines control variates and importance sampling in a sequential Monte Carlo context while handling general single-bounce global illumination effects. The radiance estimates computed during the rendering process are cached in an adaptive per-pixel structure that defines dynamic predicate functions for both variance reduction techniques and guarantees well-behaved PDFs, yielding continually increasing efficiency at only a marginal computational overhead.
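
    The combination of control variates and importance sampling that the paper builds on can be illustrated with a one-dimensional Monte Carlo estimator: given an approximation g with a known integral, one importance-samples the residual f - g and adds the known integral back. The Python sketch below is a toy version of that idea under these assumptions, not the paper's adaptive per-pixel scheme.

    import random

    def cv_is_estimate(f, g, g_integral, sample, pdf, n=1024):
        """Monte Carlo estimate of the integral of f combining a control variate g
        (whose integral g_integral is known) with importance sampling under pdf:

            I  ~  g_integral + (1/N) * sum_i (f(x_i) - g(x_i)) / pdf(x_i)
        """
        total = 0.0
        for _ in range(n):
            x = sample()
            total += (f(x) - g(x)) / pdf(x)
        return g_integral + total / n

    # Toy usage on [0, 1): integrate f(x) = x^2 with control variate g(x) = x
    # (integral 0.5) and a uniform pdf; the result is close to 1/3.
    print(cv_is_estimate(lambda x: x * x, lambda x: x, 0.5,
                         random.random, lambda x: 1.0, n=20000))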

    Ray Tracing Gems

    This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
    What you'll learn:
    - The latest ray tracing techniques for developing real-time applications in multiple domains
    - Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
    - How to implement high-performance graphics for interactive visualizations, games, simulations, and more
    Who this book is for:
    - Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
    - Students looking to learn about best practices in these areas
    - Enthusiasts who want to understand and experiment with their new GPU

    Image based analysis of visibility in smoke laden environments

    This study investigates visibility in smoke-laden environments. For many years, researchers and engineers in fire safety have criticized the inadequacy of existing theory in describing the effects of factors such as colour, viewing angle and environmental lighting on the visibility of an emergency sign. In the current study, the author has raised the fundamental question of how visibility should be defined and measured in fire safety engineering, and has addressed it by redefining visibility based on the perceived image of a target sign. New algorithms have been created during this study to utilise modern hardware and software technology in simulating the human-perceived image of an object, in both experiment and computer modelling. Unlike the traditional threshold of visual distance, visibility in the current study has been defined as a continuous function ranging from clearly discernible to completely invisible. This allows visibility to be compared under various conditions, not just at the threshold. The experiments have revealed that different conditions may result in the same visual threshold but follow very different paths on the way to that threshold. The new definition of visibility has made it possible to quantify visibility in pre-threshold conditions. Such quantification can help to improve the performance of fire evacuation, since most evacuees will experience pre-threshold conditions. With the current measurement of visibility, all the influential factors, such as colour and viewing angle, can be tested in experiment and simulated in a numerical model.
    Based on the newly introduced definition of visibility, a set of experiments has been carried out in a purpose-built smoke tunnel. Digital camera images of various illuminated signs were taken under different illumination, colour and smoke conditions. Using an algorithm developed by the author in this study, the digital camera images were converted into simulated human-perceived images, and the visibility of a target sign was measured against the quality of its acquired image. Conclusions have been drawn by comparing visibility under different conditions; one of them is that signs illuminated with red and green light have similar visibility that is far better than with blue light. This is the first time this seemingly obvious conclusion has been quantified.
    In the simulation of visibility in participating media, the author has introduced an algorithm that combines irradiance caching in 3D space with Monte Carlo ray tracing. It can calculate the distribution of scattered radiation with good accuracy, without the high cost typically associated with the zonal method or the limitations of the discrete ordinates method. The algorithm has been combined with a two-pass solution method to produce high-resolution images without introducing an excessive number of rays from the light source. The convergence of the iterative solution procedure has been proven theoretically. The accuracy of the model is demonstrated by comparison with the analytical solution for a point radiant source in 3D space, and further validation has been carried out by comparing the model predictions with data from the smoke tunnel experiments.
    The output of the simulation model is presented in the form of an innovative floor map of visibility (FMV). It helps the fire safety designer identify regions of poor visibility at a glance and will prove to be a very useful tool in performance-based fire safety design.
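
    The pre-threshold notion of visibility described above ultimately depends on how much of a sign's radiance survives the smoke along the line of sight. The Python sketch below ray-marches Beer-Lambert transmittance through a heterogeneous extinction field; it is a minimal illustration with an assumed extinction callable and step size, not the author's irradiance-caching model.

    import math

    def transmittance(extinction, origin, target, step=0.05):
        """Ray-march Beer-Lambert transmittance T = exp(-integral of sigma_t ds)
        from origin to target through a heterogeneous smoke field.

        extinction(p) returns the extinction coefficient sigma_t (1/m) at point p.
        """
        d = [target[i] - origin[i] for i in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if length == 0.0:
            return 1.0
        n_steps = max(1, int(length / step))
        ds = length / n_steps
        optical_depth = 0.0
        for k in range(n_steps):
            t = (k + 0.5) * ds                       # midpoint of the k-th segment
            p = [origin[i] + d[i] * (t / length) for i in range(3)]
            optical_depth += extinction(p) * ds
        return math.exp(-optical_depth)

    # Uniform smoke with sigma_t = 0.4/m over a 5 m line of sight: T ~ exp(-2).
    print(transmittance(lambda p: 0.4, (0.0, 0.0, 0.0), (5.0, 0.0, 0.0)))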

    High-Level GPU Programming: Domain-Specific Optimization and Inference

    When writing computer software one is often forced to balance the need for high run-time performance against programmer productivity. Using a high-level language can cut development times, but typically at the cost of reduced run-time performance; using a lower-level language, programs can be made very efficient, but at the cost of increased development time. Real-time computer graphics is an area with very high demands on both performance and visual quality. Typically, large portions of such applications are written in lower-level languages and also rely on dedicated hardware, in the form of programmable graphics processing units (GPUs), for handling computationally demanding rendering algorithms. These GPUs are parallel stream processors, specialized towards computer graphics, whose computational performance is more than an order of magnitude higher than that of corresponding CPUs. This has revolutionized computer graphics and also led to GPUs being used to solve more general numerical problems, such as fluid and physics simulation, protein folding, image processing, and databases. Unfortunately, the highly specialized nature of GPUs has also made them difficult to program. In this dissertation we show that GPUs can be programmed at a higher level than with current lower-level languages while maintaining performance. By constructing a domain-specific language (DSL), which provides appropriate domain-specific abstractions and user annotations, it is possible to write programs in a more abstract and modular manner, and knowledge of the domain allows the DSL compiler to generate very efficient code. We show by experiment that the performance of our DSLs is equal to that of GPU programs written by hand in current low-level languages, and that control over the trade-offs between visual quality and performance is retained. In the papers included in this dissertation, we present domain-specific languages targeted at numerical processing and computer graphics, respectively. These DSLs have been implemented as embedded languages in Python, a dynamic programming language that provides a rich set of high-level features, and we show how these features can be used to facilitate the construction of embedded languages.
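
    The embedded-DSL approach described above can be illustrated with a few lines of Python: operator overloading records ordinary-looking expressions as a tree that a compiler backend could later optimize and lower to GPU code. The class names below are hypothetical and the example stops at emitting an expression string; it is a sketch of the general mechanism, not the dissertation's actual languages.

    class Expr:
        """Node in a tiny expression tree built through operator overloading."""
        def __add__(self, other):
            return BinOp('+', self, wrap(other))
        def __mul__(self, other):
            return BinOp('*', self, wrap(other))

    class Var(Expr):
        def __init__(self, name):
            self.name = name
        def emit(self):
            return self.name

    class Const(Expr):
        def __init__(self, value):
            self.value = value
        def emit(self):
            return repr(self.value)

    class BinOp(Expr):
        def __init__(self, op, lhs, rhs):
            self.op, self.lhs, self.rhs = op, lhs, rhs
        def emit(self):
            return f"({self.lhs.emit()} {self.op} {self.rhs.emit()})"

    def wrap(x):
        return x if isinstance(x, Expr) else Const(x)

    # The user writes ordinary-looking Python; the embedded DSL records it as a
    # tree that a backend could optimize and translate into shader or CUDA source.
    x, y = Var('x'), Var('y')
    print((x * y + 2.0).emit())   # ((x * y) + 2.0)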

    Efficient Methods for Computational Light Transport

    In this thesis we present contributions to different challenges of computational light transport. Light transport algorithms are present in many modern applications, from image generation for visual effects to real-time object detection. Light is a rich source of information that allows us to understand and represent our surroundings, but obtaining and processing this information presents many challenges due to its complex interactions with matter. This thesis provides advances in this subject from two different perspectives: steady-state algorithms, where the speed of light is assumed infinite, and transient-state algorithms, which deal with light as it travels not only through space but also through time. Our steady-state contributions address problems in both offline and real-time rendering. We target variance reduction in offline rendering by proposing a new efficient method for participating-media rendering. In real-time rendering, we target the energy constraints of mobile devices by proposing a power-efficient rendering framework for real-time graphics applications. In the transient state we first formalize light transport simulation under this domain, and present new efficient sampling methods and algorithms for transient rendering. We finally demonstrate the potential of simulated data to correct multipath interference in Time-of-Flight cameras, one of the pathological problems in transient imaging.
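
    In the transient setting, light is resolved in time as well as space, which in practice means histogramming path contributions by their optical path length (time of flight). The Python sketch below shows that binning step under assumed bin parameters; it illustrates the general idea only, not the thesis's sampling methods.

    SPEED_OF_LIGHT = 299792458.0  # m/s

    def bin_transient(samples, bin_width_ns=0.1, n_bins=256):
        """Accumulate path contributions into a transient histogram.

        samples is an iterable of (path_length_m, radiance) pairs, e.g. produced
        by a path tracer that also tracks the total optical path length of each
        sample. Returns radiance per time bin of width bin_width_ns.
        """
        histogram = [0.0] * n_bins
        for path_length_m, radiance in samples:
            t_ns = path_length_m / SPEED_OF_LIGHT * 1e9   # time of flight in ns
            b = int(t_ns / bin_width_ns)
            if 0 <= b < n_bins:
                histogram[b] += radiance
        return histogram

    # Two paths of 3 m and 6 m arrive roughly 10 ns and 20 ns after emission.
    h = bin_transient([(3.0, 1.0), (6.0, 0.5)])
    print([i for i, v in enumerate(h) if v > 0.0])   # bins ~100 and ~200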

    A graphics processing unit based method for dynamic real-time global illumination

    Real-time realistic image synthesis for virtual environments has been one of the most actively researched areas in computer graphics for over a decade. Images that display physically correct illumination of an environment can be simulated by evaluating a multi-dimensional integral equation, called the rendering equation, over the surfaces of the environment. Many global illumination algorithms, such as path tracing, photon mapping and distributed ray tracing, can produce realistic images but are generally unable to cope with dynamic lighting and objects at interactive rates. Simulating physically correctly illuminated dynamic environments without a substantial preprocessing step remains one of the most challenging problems. In this thesis we present a rendering system for dynamic environments, implemented as a customized rasterizer for global illumination that runs entirely on the graphics processing unit (GPU). Our research focuses on a parameterization of a discrete visibility field for efficient indirect illumination computation. To generate the visibility field, we propose a CUDA-based (Compute Unified Device Architecture) rasterizer that builds Layered Hit Buffers (LHB) by rasterizing polygons into multi-layered structural buffers in parallel. The LHB provides a fast visibility function for any direction at any point. We propose a cone-approximation solution to resolve the aliasing problem caused by limited directional discretization, and we demonstrate how to remove structured noise by adapting an interleaved sampling scheme and a discontinuity buffer. We show that a gathering method amortized with a multi-level quasi-Monte Carlo method can evaluate the rendering equation in real time. The method can realize real-time walk-throughs of a complex virtual environment with a mixture of diffuse and glossy reflection, computing multiple indirect bounces on the fly. We show that our method is capable of simulating fully dynamic environments, including changes of view, materials, lighting and objects, at interactive rates on commodity-level graphics hardware.
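
    The final gathering step described above amounts to integrating incoming radiance over the hemisphere with low-discrepancy (quasi-Monte Carlo) directions. The sketch below uses Halton-sequence, cosine-weighted samples to estimate irradiance; the radiance lookup is a plain callable standing in for the layered hit buffers, so this is an illustration of the estimator only, not the thesis's GPU implementation.

    import math

    def halton(index, base):
        """Radical-inverse (Halton) low-discrepancy sequence value in [0, 1)."""
        result, f, i = 0.0, 1.0 / base, index
        while i > 0:
            result += f * (i % base)
            i //= base
            f /= base
        return result

    def indirect_irradiance(incoming_radiance, n_samples=64):
        """Estimate irradiance over the hemisphere with cosine-weighted QMC samples.

        incoming_radiance(theta, phi) returns radiance from direction (theta, phi)
        in the local frame of the shading normal; in the thesis this lookup would
        be answered by the layered hit buffers, here it is just a callable.
        """
        total = 0.0
        for i in range(n_samples):
            u1, u2 = halton(i + 1, 2), halton(i + 1, 3)
            theta = math.asin(math.sqrt(u1))        # cosine-weighted elevation
            phi = 2.0 * math.pi * u2
            # With pdf = cos(theta)/pi, the cosine term and pdf cancel up to pi.
            total += incoming_radiance(theta, phi)
        return math.pi * total / n_samples

    # A uniform "sky" of unit radiance gives irradiance pi.
    print(indirect_irradiance(lambda theta, phi: 1.0))   # ~3.14159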

    Imposters for particle-based datasets

    Many particle-based datasets produced by molecular dynamics simulations consist of millions of particles. Current visualization techniques are incapable of representing those large-scale datasets consistently across all scales: they either produce overly smooth representations or are prone to aliasing due to under-sampling. This work introduces a technique, based on normal distribution functions and impostors, that captures the micro-scale surface features accurately and is able to represent the complex local illumination behavior of hundreds of particles in a single pixel footprint. This scale-consistent technique allows for an overview that is resistant to aliasing and true to the micro-scale surface, and it produces visualizations of shock waves, for example in aluminium lattices, that could not be seen before without dedicated visualization methods. It can be applied to any opaque particle glyph and BRDF model.
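
    The normal-distribution-function (NDF) aggregation mentioned above can be illustrated as follows: the normals of all particles falling into one pixel footprint are collapsed into a coarse orientation histogram, and the pixel is then shaded from that distribution instead of from individual particles. The binning resolution and the simple diffuse shading in the Python sketch below are assumptions, not the thesis's implementation.

    import math
    from collections import Counter

    def build_ndf(normals, bins=8):
        """Aggregate the unit normals of all particles falling into one pixel
        footprint into a coarse normal distribution function (an orientation
        histogram over elevation/azimuth bins)."""
        counts = Counter()
        for nx, ny, nz in normals:
            theta = math.acos(max(-1.0, min(1.0, nz)))
            phi = math.atan2(ny, nx) % (2.0 * math.pi)
            counts[(int(theta / math.pi * bins), int(phi / (2.0 * math.pi) * bins))] += 1
        total = sum(counts.values())
        return {b: c / total for b, c in counts.items()}

    def shade_pixel(ndf, light_dir, bins=8):
        """Diffuse-shade one pixel from its NDF instead of from individual particles."""
        lx, ly, lz = light_dir
        value = 0.0
        for (bt, bp), weight in ndf.items():
            theta = (bt + 0.5) * math.pi / bins
            phi = (bp + 0.5) * 2.0 * math.pi / bins
            n = (math.sin(theta) * math.cos(phi),
                 math.sin(theta) * math.sin(phi),
                 math.cos(theta))
            value += weight * max(0.0, n[0] * lx + n[1] * ly + n[2] * lz)
        return value

    # 300 particles with three distinct orientations collapse into one tiny NDF.
    ndf = build_ndf([(0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)] * 100)
    print(shade_pixel(ndf, (0.0, 0.0, 1.0)))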