
    GPU-Based Global Illumination Using Lightcuts

    Global illumination aims to generate high-quality images, but due to its high computational requirements it is usually quite slow. The research documented in this thesis offers a combined hardware and software acceleration solution for global illumination. The GPU (programmed with CUDA) is the hardware part of the method, applying parallelism to increase performance; the "Lightcuts" algorithm proposed by Walter et al. at SIGGRAPH 2005 is the software part. As the results in this thesis demonstrate, the combined method delivers a satisfactory performance boost for relatively complex scenes.
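The core idea of Lightcuts (clustering many point lights into a tree and shading from a coarse "cut" through that tree rather than from every individual light) can be illustrated with a minimal sketch. This toy version uses scalar intensities, a greedy pairing construction, and a refinement test based on a cluster's estimated contribution instead of the paper's conservative error bounds; it is not the thesis's CUDA implementation:

```python
class LightNode:
    """Node in a binary light tree: a single light or a cluster."""
    def __init__(self, pos, intensity, left=None, right=None):
        self.pos = pos              # representative position (x, y, z)
        self.intensity = intensity  # total intensity of the cluster
        self.left, self.right = left, right

def build_light_tree(lights):
    """Greedily pair adjacent lights into clusters (toy construction)."""
    nodes = [LightNode(p, i) for p, i in lights]
    while len(nodes) > 1:
        merged = []
        for a, b in zip(nodes[::2], nodes[1::2]):
            rep = tuple((x + y) / 2 for x, y in zip(a.pos, b.pos))
            merged.append(LightNode(rep, a.intensity + b.intensity, a, b))
        if len(nodes) % 2:
            merged.append(nodes[-1])
        nodes = merged
    return nodes[0]

def shade(node, point, eps=0.02):
    """Traverse a cut: refine a cluster only if its estimated
    contribution exceeds eps, otherwise use its representative."""
    d2 = sum((a - b) ** 2 for a, b in zip(node.pos, point)) or 1e-6
    estimate = node.intensity / d2
    if node.left is None or estimate < eps:
        return estimate
    return shade(node.left, point, eps) + shade(node.right, point, eps)

# Usage: 16 unit lights along the x-axis, shaded from one point.
root = build_light_tree([((float(i), 0.0, 0.0), 1.0) for i in range(16)])
radiance = shade(root, (20.0, 0.0, 0.0))
```

Lowering `eps` refines the cut toward per-light evaluation; raising it shades from fewer, coarser clusters, which is where the speedup comes from.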

    Description of the VIP program for computing incident and absorbed energy at the surface of an orbiting satellite

    The VIP program computes incident and absorbed energy at the surface of an orbiting satellite.

    Path tracing using lower level of detail for secondary rays

    Path tracing is a computationally expensive method of three-dimensional rendering that aims to accurately simulate the propagation of light. A large amount of time is typically spent calculating intersections between rays and the scene, which is composed of triangular meshes stored in some form of bounding volume. This time can be reduced by lowering the overall number of triangles in the scene. Path tracing works by casting rays from the camera into the scene, which reflect until they hit a light source. Secondary rays, i.e. rays cast after the first intersection, usually contribute less to the overall image, yet require much more time to calculate than primary rays. This thesis found that significant performance gains can be made by using lower level-of-detail (LOD) triangular meshes for secondary rays. While the lower-LOD models are less accurate, they still provide a good approximation of the mesh for secondary rays. Scenes with 1.4 million faces could be rendered up to 10% faster using a 1/32-ratio level of detail for secondary rays. A study in which 14 subjects ranked images by quality showed they were unable to differentiate between low-LOD and full-LOD images.
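The thesis's central trick, switching to a decimated mesh once a ray is secondary, amounts to a LOD lookup keyed by ray depth. A minimal sketch, assuming the scene stores each mesh at several decimation ratios (the names and the 1/32 default are illustrative, not the thesis's code):

```python
def select_lod(mesh_lods, ray_depth, secondary_ratio=1 / 32):
    """Pick the mesh variant used for intersection tests:
    full detail for primary rays (depth 0), a decimated version
    for all secondary bounces (depth >= 1). mesh_lods maps a
    decimation ratio to a mesh and is assumed to contain
    1.0 and secondary_ratio as keys."""
    return mesh_lods[1.0] if ray_depth == 0 else mesh_lods[secondary_ratio]

# Hypothetical scene entry with a full mesh and a 1/32-ratio decimation.
scene = {1.0: "bunny_full", 1 / 32: "bunny_low"}
```

Primary rays still intersect the full mesh, so silhouettes stay exact; only reflections, shadows, and indirect bounces see the cheaper geometry.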

    Average luminosity distance in inhomogeneous universes

    The paper studies the correction to the distance modulus induced by inhomogeneities and averaged over all directions from a given observer. The inhomogeneities are modeled as mass-compensated voids in random or regular lattices within Swiss-cheese universes. Void radii below 300 Mpc are considered, which are supported by current redshift surveys and limited by the recently observed imprint such voids leave on the CMB. The averaging over all directions, performed by numerical ray tracing, is non-perturbative and includes the supernovae inside the voids. Voids aligned along a certain direction produce a cumulative gravitational lensing correction that increases with their number. Such corrections are destroyed by the averaging over all directions, even in non-randomized simple cubic void lattices. At low redshifts, the average correction is not zero but decays with the peculiar velocities and redshift. Its upper bound is provided by the maximal average correction, which assumes no random cancellations between different voids. It is described well by a linear perturbation formula and, for the voids considered, is 20% of the correction corresponding to the maximal peculiar velocity. The average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximal one after a single void diameter. That is traced to cancellations between the corrections from the fronts and backs of different voids. All this implies that voids cannot imitate the effect of dark energy unless they have radii and peculiar velocities much larger than those currently observed. The results obtained allow one to readily predict the redshift above which the direction-averaged fluctuation in the Hubble diagram falls below a required precision, and they suggest a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities. Comment: 34 pages, 21 figures, matches the version accepted in JCA

    Approximating Signed Distance Field to a Mesh by Artificial Neural Networks

    Previous research has resulted in many representations of surfaces for rendering. However, for some approaches, an accurate representation comes at the expense of large data storage. Considering that Artificial Neural Networks (ANNs) have been shown to achieve good performance in approximating non-linear functions in recent years, the potential to apply them to the problem of surface representation needs to be investigated. The goal of this research is to explore how ANNs can efficiently learn the Signed Distance Field (SDF) representation of shapes. Specifically, we investigate how well different architectures of ANNs can learn 2D SDFs, 3D SDFs, and SDFs approximating a complex triangle mesh. In this research, we performed three main experiments to determine which ANN architectures and configurations are suitable for learning SDFs by analyzing the errors in training and testing as well as the rendering results. In addition, three different pipelines for rendering general SDFs, grid-based SDFs, and ANN-based SDFs were implemented to show the resulting images on screen. The following data are measured in this research project: the errors in training different architectures of ANNs; the errors in rendering SDFs; and a comparison between grid-based SDFs and ANN-based SDFs. This work demonstrates the use of ANNs to approximate the SDF to a mesh by training on data sampled near the mesh surface, which could be a useful technique in 3D reconstruction and rendering. We have found that the size of the trained neural network is also much smaller than either the triangle mesh or grid-based SDFs, which could be useful for compression applications, and in software or hardware that has a strict requirement on memory size.
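The setup described above, training a network on distance samples taken near the surface, can be sketched for the simplest case: a small fully connected network fitting the 2D SDF of a circle. The architecture (one hidden layer of 32 tanh units) and the training parameters are illustrative choices, not the configurations evaluated in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def sdf_circle(p):
    """Ground-truth SDF of a circle of radius 0.5 at the origin."""
    return np.linalg.norm(p, axis=1) - 0.5

# Sample training points around the shape and label them with the SDF.
pts = rng.uniform(-1, 1, size=(2000, 2))
targets = sdf_circle(pts)[:, None]

# One-hidden-layer MLP: 2 -> 32 -> 1 with tanh activation.
W1 = rng.normal(0, 0.3, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

# Full-batch gradient descent on the mean squared error.
lr = 0.01
_, pred0 = forward(pts)
loss_before = mse(pred0, targets)
for _ in range(1000):
    h, pred = forward(pts)
    g = 2 * (pred - targets) / len(pts)   # dL/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)          # backprop through tanh
    gW1 = pts.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
_, pred1 = forward(pts)
loss_after = mse(pred1, targets)
```

The trained weights (2x32 + 32x1 plus biases, roughly a hundred floats here) are the entire surface representation, which is the storage advantage the thesis measures against triangle meshes and SDF grids.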

    Parallel hierarchical global illumination

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving it on serial computers. Rather than using an approximation method such as backward ray tracing or radiosity, we have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy and can be run in parallel on distributed-memory or shared-memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recently published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene that converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments, from a shared-memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.
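Direct simulation of light transport from the sources, the approach Photon takes, follows each emitted photon until it is absorbed. A toy sketch of that forward process, with a single albedo and Russian-roulette absorption standing in for real scene geometry (all parameter names are illustrative, not Photon's):

```python
import random

def simulate_photons(n_photons, albedo=0.5, max_bounces=16, seed=42):
    """Follow each photon from the source until it is absorbed,
    tallying absorbed photons per bounce depth. Survival at each
    surface interaction is decided by Russian roulette with
    probability equal to the albedo."""
    rng = random.Random(seed)
    absorbed = [0] * (max_bounces + 1)
    for _ in range(n_photons):
        for bounce in range(max_bounces + 1):
            if rng.random() >= albedo:  # photon deposits its energy here
                absorbed[bounce] += 1
                break
    return absorbed

# With albedo 0.5, roughly half of the surviving photons are
# absorbed at each successive bounce depth.
deposits = simulate_photons(100_000)
```

Because every photon is independent, this loop parallelizes trivially by splitting `n_photons` across processors and summing the tallies, which is the property the paper exploits on distributed-memory and shared-memory machines alike.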

    Generating renderers

    Most production renderers developed for the film industry are huge pieces of software that are able to render extremely complex scenes. Unfortunately, they are implemented using the currently available programming models, which are not well suited to modern computing hardware such as CPUs with vector units or GPUs. Thus, they have to deal with the added complexity of expressing parallelism and using hardware features within those models. Since compilers alone cannot optimize and generate efficient programs for every type of hardware, because of the large optimization spaces and the complexity of the underlying compiler problems, programmers have to rely on compiler-specific hardware intrinsics or write non-portable code. The consequence of these limitations is that programmers resort to writing the same code twice when they need to port their algorithm to a different architecture, and the code itself becomes difficult to maintain, as algorithmic details are buried under hardware details. Thankfully, there are solutions to this problem, taking the form of Domain-Specific Languages. As their name suggests, these languages are tailored for one domain, and compilers can therefore use domain-specific knowledge to optimize algorithms and choose the best execution policy for a given target hardware. In this thesis, we opt for another way of encoding domain-specific knowledge: we implement a generic, high-level, and declarative rendering and traversal library in a functional language, and later refine it for a target machine by providing partial evaluation annotations. The partial evaluator then specializes the entire renderer according to the available knowledge of the scene: shaders are specialized when their inputs are known, and in general, all redundant computations are eliminated.
    Our results show that the generated renderers are faster and more portable than renderers written with state-of-the-art competing libraries, and that, in comparison, our rendering library requires less implementation effort.
    This work was supported by the Federal Ministry of Education and Research (BMBF) as part of the Metacca and ProThOS projects, as well as by the Intel Visual Computing Institute (IVCI) and the Cluster of Excellence on Multimodal Computing and Interaction (MMCI) at Saarland University. Parts of it were also co-funded by the European Union (EU) as part of the Dreamspace project.
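The specialization step described above, folding known scene inputs into the shader code itself, can be imitated in miniature: given the scene parameters, emit a shader in which every computation that depends only on them is already evaluated. This sketch generates specialized Python source rather than using the thesis's functional language and partial evaluator, and the one-term shading model is a hypothetical example:

```python
def specialize_shader(light_intensity, roughness):
    """Emit Python source for a diffuse shader in which the known
    scene parameters have been folded into a single constant, the
    way a partial evaluator folds known inputs into the renderer."""
    k = light_intensity * (1.0 - roughness)  # evaluated at specialization time
    src = f"def shader(n_dot_l):\n    return {k!r} * max(n_dot_l, 0.0)\n"
    namespace = {}
    exec(src, namespace)
    return namespace["shader"]

# Once the scene is known, the generic shader collapses to a
# single constant multiply per shading sample.
shader = specialize_shader(light_intensity=2.0, roughness=0.25)
```

The generic version would reread `light_intensity` and `roughness` and redo the multiply on every sample; the specialized version has eliminated that redundant work before rendering starts, which is the effect the thesis achieves renderer-wide.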