
    Técnicas de altas prestaciones para métodos de iluminación global

    The great interest in global illumination methods is due to their multiple applications and the realism of the resulting images. The research presented in this thesis focuses on computationally improving the radiosity algorithm, proposing strategies for both deterministic and stochastic approaches. For the deterministic approaches, we present our implementations of the progressive radiosity algorithm on a distributed system, using the message-passing paradigm. These implementations are based on partitioning the scene in either a uniform or a non-uniform manner. Furthermore, the technique of visibility masks is employed to compute the visibility between elements belonging to different subscenes. It is also shown that these methods are capable of reducing the sequential execution time. With regard to the stochastic solutions, we present two implementations of the stochastic relaxation method for Monte Carlo radiosity: one on a distributed system and one on a Graphics Processing Unit (GPU). The former is based on three techniques: scene partitioning, ray packing, and distributed detection of the end of each iteration. The GPU implementation combines scene partitioning with a simplified element mesh and an efficient thread scheduling.
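
    For orientation, the shooting step that progressive radiosity repeats can be sketched in a few lines. This is a generic, single-machine illustration under standard assumptions, not the distributed, message-passing implementation described above; the form factors F[i][j] (with visibility already folded in), patch areas and reflectivities are assumed precomputed, and all names are hypothetical.

```python
# Minimal sketch of progressive radiosity shooting passes (illustrative only).
def progressive_radiosity(emission, area, rho, F, iterations=100):
    n = len(emission)
    radiosity = list(emission)   # B_i: current radiosity estimate per patch
    unshot = list(emission)      # dB_i: energy not yet distributed
    for _ in range(iterations):
        # Shoot from the patch holding the most unshot power (B_i * A_i).
        i = max(range(n), key=lambda k: unshot[k] * area[k])
        for j in range(n):
            if j == i:
                continue
            # Reciprocity: F_ji = F_ij * A_i / A_j, so the energy received by
            # patch j from patch i's unshot radiosity is:
            delta = rho[j] * unshot[i] * F[i][j] * area[i] / area[j]
            radiosity[j] += delta
            unshot[j] += delta
        unshot[i] = 0.0          # patch i has now shot all of its energy
    return radiosity
```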

    Generating Radiosity Maps on the GPU

    Global illumination algorithms are used to render photorealistic images of 3D scenes, taking into account both direct lighting from the light source and light reflected from other surfaces in the scene. Algorithms based on computing radiosity were among the first to be used to calculate indirect lighting, although they make assumptions that work only for diffusely reflecting surfaces. The classic radiosity approach divides a scene into multiple patches and generates a linear system of equations which, when solved, gives the values for the radiosity leaving each patch. This process can require extensive calculations and is therefore very slow. An alternative to solving a large system of equations is to use a Monte Carlo method of random sampling. In this approach, a large number of rays are shot from each patch into its surroundings and the irradiance values obtained from these rays are averaged to obtain a close approximation to the real value. This thesis proposes the use of a Monte Carlo method to generate radiosity texture maps on graphics hardware. By storing the radiosity values in textures, they are immediately available for rendering, making this algorithm useful for interactive implementations. We have built a framework to run this algorithm; using current graphics cards (NV6800 or higher), it is possible to execute it almost interactively for simple scenes and within relatively low times for more complex ones.
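
    The two formulations contrasted in the abstract can be written compactly. The following is the standard textbook form of the radiosity system and of its Monte Carlo estimator with cosine-distributed rays; the exact estimator used in the thesis may differ.

```latex
% Classic radiosity: a linear system over patches i = 1..n
B_i = E_i + \rho_i \sum_{j=1}^{n} F_{ij} B_j
% Monte Carlo alternative: average the radiosity of the patches hit by
% N cosine-distributed rays shot from patch i (the sampling cancels F_{ij})
B_i \approx E_i + \rho_i \frac{1}{N} \sum_{k=1}^{N} B(y_k)
```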

    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim at capturing the light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
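
    As background for the virtual-light formulation, the standard many-light estimator for outgoing radiance at a surface point x sums the contributions of virtual lights y_k. The thesis reformulates this integration scheme, and the variant for participating media differs, so the expression below is only the generic surface form.

```latex
% Generic many-light (virtual point light) estimator on surfaces
L(x \to \omega_o) \approx \sum_{k} f_r(x, y_k \to x, \omega_o)\,
    \frac{\cos\theta_x \cos\theta_{y_k}}{\lVert x - y_k \rVert^2}\,
    V(x, y_k)\, \Phi_k
% V is binary visibility and \Phi_k the power carried by virtual light k
```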

    Lichttransportsimulation auf Spezialhardware

    It cannot be denied that the developments in computer hardware and in computer algorithms strongly influence each other, with new instructions added to help with video processing, encryption, and in many other areas. At the same time, the current cap on single-threaded performance and the wide availability of multi-threaded processors have increased the focus on parallel algorithms. Both influences are extremely prominent in computer graphics, where the gaming and movie industries always strive for the best possible performance on current, as well as future, hardware. In this thesis we examine the hardware-algorithm synergies in the context of ray tracing and Monte Carlo algorithms. First, we focus on the very basic element of all such algorithms, the casting of rays through a scene, and propose a dedicated hardware unit to accelerate this common operation. Then, we examine existing and novel implementations of many Monte Carlo rendering algorithms on massively parallel hardware, as full hardware utilization is essential for peak performance. Lastly, we present an algorithm for tackling complex interreflections of glossy materials, which is designed to utilize both powerful processing units present in almost all current computers: the Central Processing Unit (CPU) and the Graphics Processing Unit (GPU). These three pieces combined show that it is always important to look at hardware-algorithm mapping on all levels of abstraction: instruction, processor, and machine.
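
    As a point of reference for the kind of operation such a dedicated unit accelerates, the sketch below shows the common slab test for intersecting a ray with an axis-aligned bounding box, the inner loop of hierarchy traversal; it is illustrative only and unrelated to the actual hardware design proposed in the thesis.

```python
# Ray/axis-aligned-box slab test (illustrative; assumes non-zero direction
# components so that inv_dir is finite).
def ray_hits_aabb(origin, inv_dir, box_min, box_max, t_max=float("inf")):
    t_near, t_far = 0.0, t_max
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:
            t0, t1 = t1, t0           # order the slab entry/exit distances
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False              # the ray misses this box
    return True
```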

    The use of primitives in the calculation of radiative view factors

    Compilations of radiative view factors (often in closed analytical form) are readily available in the open literature for commonly encountered geometries. For more complex three-dimensional (3D) scenarios, however, the effort required to solve the requisite multi-dimensional integrations needed to estimate a required view factor can be daunting, to say the least. In such cases, a combination of finite element methods (where the geometry in question is sub-divided into a large number of uniform, often triangular, elements) and Monte Carlo Ray Tracing (MC-RT) has been developed, although frequently the software implementation is suitable only for a limited set of geometrical scenarios. Driven initially by a need to calculate the radiative heat transfer occurring within an operational fibre-drawing furnace, this research set out to examine options whereby MC-RT could be used to cost-effectively calculate any generic 3D radiative view factor using current vectorisation technologies.
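
    The core of such a Monte Carlo approach is simple to state: the view factor from surface 1 to surface 2 is the fraction of cosine-distributed rays leaving surface 1 that land on surface 2. A minimal, self-contained sketch for two parallel, directly opposed unit squares (no blockers, so tracing reduces to a plane intersection) is given below; it is not the thesis code, and all names are hypothetical.

```python
import math, random

def view_factor_parallel_squares(d, n_rays=200_000):
    """Monte Carlo estimate of F_{1->2} for two parallel unit squares a
    distance d apart (emitter in the plane z = 0, receiver in z = d)."""
    hits = 0
    for _ in range(n_rays):
        # Uniform point on the emitting square.
        x, y = random.random(), random.random()
        # Cosine-distributed direction about the +z surface normal.
        r, phi = math.sqrt(random.random()), 2.0 * math.pi * random.random()
        dx, dy = r * math.cos(phi), r * math.sin(phi)
        dz = math.sqrt(max(0.0, 1.0 - r * r))
        if dz <= 0.0:
            continue
        # Intersect with the receiver plane z = d and test the bounds.
        t = d / dz
        px, py = x + t * dx, y + t * dy
        if 0.0 <= px <= 1.0 and 0.0 <= py <= 1.0:
            hits += 1
    return hits / n_rays

print(view_factor_parallel_squares(d=1.0))  # close to the analytical ~0.1998
```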

    Efficient Rendering of Scenes with Dynamic Lighting Using a Photons Queue and Incremental Update Algorithm

    Photon mapping is a popular extension to the classic ray tracing algorithm in the field of realistic image synthesis. Moreover, it benefits from the massively parallel computational power brought by recent developments in graphics processor hardware and programming models. However, rendering scenes with dynamic lights still greatly limits performance, due to the reconstruction at each rendered frame of a kd-tree for the photons. We developed a novel approach based on storing the photon data along with the kd-tree leaf-node data, and implemented a new incremental update scheme to improve performance for dynamic lighting. The implementation is GPU-based and fully parallelized. A series of benchmarks against the prevalent existing GPU photon mapping technique is carried out to evaluate our approach. Our new technique is shown to be faster than the existing technique when handling scenes with dynamic lights, while delivering the same image quality.
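
    The abstract does not spell out the data layout, but the gist of keeping photon data with the kd-tree leaves and updating it incrementally can be illustrated with a toy sketch: each leaf holds a bounded queue of photons, so when lights move, freshly traced photons are appended and stale ones fall out, without rebuilding the tree. Everything below is a loose illustration with hypothetical names, not the GPU implementation evaluated in the thesis.

```python
from collections import deque

class LeafNode:
    """kd-tree leaf that stores its own photons in a bounded queue."""
    def __init__(self, bounds, capacity=256):
        self.bounds = bounds                    # (min_xyz, max_xyz), fixed
        self.photons = deque(maxlen=capacity)   # oldest photons evicted first

    def contains(self, position):
        lo, hi = self.bounds
        return all(lo[k] <= position[k] <= hi[k] for k in range(3))

def incremental_update(leaves, new_photons):
    # Insert this frame's photons; a real kd-tree would descend from the root
    # instead of scanning the leaves linearly.
    for position, power, direction in new_photons:
        for leaf in leaves:
            if leaf.contains(position):
                leaf.photons.append((position, power, direction))
                break
```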

    High-fidelity graphics using unconventional distributed rendering approaches

    High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, with the aid of modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, real-time rendering of moderately complex scenes is still unachievable on the majority of desktop machines and the vast plethora of mobile computing devices that have recently become commonplace. This work provides a wide range of computing devices with high-fidelity rendering capabilities via oft-unused distributed computing paradigms. It speeds up the rendering process on formerly capable devices and provides full functionality to incapable devices. Novel scheduling and rendering algorithms have been designed to best take advantage of the characteristics of these systems and demonstrate the efficacy of such distributed methods. The first is a novel system that provides multiple clients with parallel resources for rendering a single task, and adapts in real-time to the number of concurrent requests. The second is a distributed algorithm for the remote asynchronous computation of the indirect diffuse component, which is merged with locally-computed direct lighting for a full global illumination solution. The third is a method for precomputing indirect lighting information for dynamically-generated multi-user environments by using the aggregated resources of the clients themselves. The fourth is a novel peer-to-peer system for improving the rendering performance in multi-user environments through the sharing of computation results, propagated via a mechanism based on epidemiology. The results demonstrate that the boundaries of the distributed computing typically used for computer graphics can be significantly and successfully expanded by adapting alternative distributed methods.
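
    As a flavour of the fourth, peer-to-peer system, an epidemic (gossip-style) exchange can be sketched in a few lines: each client periodically pushes some of its cached lighting results to a few random neighbours, so a value computed once gradually spreads through the group. The real protocol, data format and network layer are not described in the abstract, so everything below is a hypothetical toy.

```python
import random

class Peer:
    def __init__(self, name):
        self.name = name
        self.cache = {}                          # key -> cached lighting result

    def gossip(self, neighbours, fanout=2):
        # Push cached entries to a few random neighbours (epidemic spread).
        if not self.cache or not neighbours:
            return
        for other in random.sample(neighbours, min(fanout, len(neighbours))):
            for key, value in self.cache.items():
                other.cache.setdefault(key, value)

peers = [Peer(f"client-{i}") for i in range(8)]
peers[0].cache["probe:lobby:17"] = 0.42          # one client computes a result
for _ in range(5):                               # a few rounds spread it around
    for p in peers:
        p.gossip([q for q in peers if q is not p])
```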

    Virtual light fields for global illumination in computer graphics

    This thesis presents novel techniques for the generation and real-time rendering of globally illuminated environments with surfaces described by arbitrary materials. Real-time rendering of globally illuminated virtual environments has long been an elusive goal. Many techniques have been developed which can compute still images with full global illumination, and this is still an area of active and flourishing research. Other techniques have only dealt with certain aspects of global illumination in order to speed up computation and thus rendering. These include radiosity, ray tracing and hybrid methods. Radiosity, due to its view-independent nature, can easily be rendered in real-time after pre-computing and storing the energy equilibrium. Ray tracing, however, is view-dependent and requires substantial computational resources in order to run in real-time. Attempts at providing full global illumination at interactive rates include caching methods, fast rendering from photon maps, light fields, brute-force ray tracing and GPU-accelerated methods. Currently, these methods either only apply to special cases, are incomplete, exhibiting poor image quality, and/or scale badly such that only modest scenes can be rendered in real-time with current hardware. The techniques developed in this thesis extend upon earlier research and provide a novel, comprehensive framework for storing global illumination in a data structure, the Virtual Light Field (VLF), that is suitable for real-time rendering. The techniques trade memory usage and precompute time for rapid rendering. The main weaknesses of the VLF method are targeted in this thesis. It is the expensive pre-compute stage, with best-case O(N^2) performance where N is the number of faces, which makes the light propagation impractical for all but simple scenes. This is analysed, and greatly superior alternatives are presented and evaluated in terms of efficiency and error. An improvement of several orders of magnitude in computational efficiency is achieved over the original VLF method. A novel propagation algorithm running entirely on the Graphics Processing Unit (GPU) is presented. It is incremental in that it can resolve visibility along a set of parallel rays in O(N) time, and can produce a virtual light field for a moderately complex scene (tens of thousands of faces), with complex illumination stored in millions of elements, in minutes, and for simple scenes in seconds. It is approximate but gracefully converges to a correct solution; a linear increase in resolution results in a linear increase in computation time. Finally, a GPU rendering technique is presented which can render from Virtual Light Fields at real-time frame rates in high-resolution VR presentation devices such as the CAVE.
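
    The propagation idea, stripped of all GPU detail, amounts to resolving visibility along bundles of parallel rays: for a given global direction, the surface elements hit along each ray are sorted by depth, and energy is exchanged only between neighbouring, mutually visible elements. The sketch below is purely illustrative, with hypothetical names, and omits the incremental GPU machinery the thesis actually introduces.

```python
def propagate_direction(rays):
    """rays: one bundle of parallel rays; each ray is a list of
    (depth, element) intersections found along that ray."""
    transfers = []
    for hits in rays:
        ordered = sorted(hits, key=lambda h: h[0])   # resolve visibility by depth
        # Only adjacent intersections see each other; anything further along
        # the ray is occluded by the element in between.
        for (_, src), (_, dst) in zip(ordered, ordered[1:]):
            transfers.append((src, dst))
    return transfers
```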

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects such as optimisations, adapting existing offline methods to real-time constraints, and adding effects which were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real-time.

    Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for a good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation which trades off some quality for a 2–3× improvement in rendering time. The tracing of all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.

    Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimations. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and to do so more efficiently.

    We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help in reducing some artistic constraints while designing new virtual scenes for real-time applications.
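
    For the light-selection part, the principle behind such an importance-driven acceleration structure can be shown with a tiny sketch: each node stores an importance, a light is chosen by descending the tree and picking children with probability proportional to their importance, and dividing the light's contribution by the returned probability keeps the estimate unbiased. This is a generic sketch with hypothetical names, not the two-level structure implemented in the thesis.

```python
import random

class Node:
    def __init__(self, importance, light=None, left=None, right=None):
        self.importance = importance     # e.g. emitted power, possibly weighted
        self.light, self.left, self.right = light, left, right

def pick_light(root):
    """Descend the light tree stochastically; returns (light, pdf)."""
    node, pdf = root, 1.0
    while node.light is None:
        w_left, w_right = node.left.importance, node.right.importance
        p_left = w_left / (w_left + w_right)
        if random.random() < p_left:
            node, pdf = node.left, pdf * p_left
        else:
            node, pdf = node.right, pdf * (1.0 - p_left)
    return node.light, pdf

# Usage: dividing the sampled light's contribution by pdf keeps the
# direct-lighting estimate unbiased.
```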