170 research outputs found

    A constructive theory of sampling for image synthesis using reproducing kernel bases

    Sampling a scene by tracing rays and reconstructing an image from such pointwise samples is fundamental to computer graphics. To improve the efficacy of these computations, we propose an alternative theory of sampling. In contrast to traditional formulations for image synthesis, which appeal to nonconstructive Dirac deltas, our theory employs constructive reproducing kernels for the correspondence between continuous functions and pointwise samples. Conceptually, this allows us to obtain a common mathematical formulation of almost all existing numerical techniques for image synthesis. Practically, it enables novel sampling-based numerical techniques designed for light transport that provide considerably improved performance per sample. We exemplify the practical benefits of our formulation with three applications: pointwise transport of color spectra, projection of the light energy density into spherical harmonics, and approximation of the shading equation from a photon map. Experimental results verify the utility of our sampling formulation, with lower numerical error and enhanced visual quality compared to existing techniques.
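
    As a rough illustration of the idea of replacing Dirac deltas with a constructive kernel correspondence, the sketch below reconstructs a 1D function from pointwise samples using a Gaussian reproducing kernel basis. The kernel choice, test function, and parameters are hypothetical and far simpler than the paper's light-transport setting.

```python
# Minimal sketch: reconstructing a continuous 1D signal from pointwise samples
# using a Gaussian reproducing kernel basis (illustrative only; the paper's
# actual kernels and light-transport setting are more involved).
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    """Reproducing kernel k(x, y) of a Gaussian RKHS."""
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * sigma ** 2))

def fit_kernel_coefficients(samples_x, samples_f, sigma=0.1, reg=1e-8):
    """Solve (K + reg*I) c = f so that f(x) ~ sum_i c_i k(x, x_i)."""
    K = gaussian_kernel(samples_x, samples_x, sigma)
    return np.linalg.solve(K + reg * np.eye(len(samples_x)), samples_f)

def reconstruct(query_x, samples_x, coeffs, sigma=0.1):
    """Evaluate the kernel expansion at arbitrary query points."""
    return gaussian_kernel(query_x, samples_x, sigma) @ coeffs

# Usage: sample a test function at a few points and reconstruct it everywhere.
xs = np.linspace(0.0, 1.0, 16)            # pointwise sample locations
fs = np.sin(2.0 * np.pi * xs)             # sampled function values
c = fit_kernel_coefficients(xs, fs)
dense_x = np.linspace(0.0, 1.0, 200)
approx = reconstruct(dense_x, xs, c)
print(np.max(np.abs(approx - np.sin(2.0 * np.pi * dense_x))))  # reconstruction error
```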

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science is constantly increasing. Technical advances allow for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, beginning at its smallest parts and reaching up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard and novel visualization approaches are available to neuroscientific researchers as well. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I conclude the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify, for the neuroscientific community, the advantages and disadvantages of the visualization techniques used. We exemplified these using clinically relevant scenarios. Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualization -- on improving this interface. Unfortunately, visual improvements that use computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize local details as well as global, spatial relations in dense line and point data.
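
    As a rough, hypothetical illustration of screen-space post-processing applied to a finished rendering (not OpenWalnut's or the thesis's actual shading method), the sketch below darkens pixels that lie behind their blurred depth neighborhood, a simple way to emphasize depth discontinuities in dense line or point renderings.

```python
# Rough illustration of a screen-space, depth-based darkening pass
# (in the spirit of depth unsharp masking; not the thesis's actual shader).
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_darkening(color, depth, sigma=4.0, strength=2.0):
    """Darken pixels that lie behind their blurred depth neighborhood,
    which visually separates overlapping lines/points in dense renderings."""
    blurred = gaussian_filter(depth, sigma)
    # Positive where the pixel is farther away than its surroundings.
    delta = np.clip(depth - blurred, 0.0, None)
    shade = np.clip(1.0 - strength * delta, 0.0, 1.0)
    return color * shade[..., None]

# Usage with a synthetic 64x64 frame: color buffer and linear depth in [0, 1].
color = np.ones((64, 64, 3))
depth = np.tile(np.linspace(0.2, 0.8, 64), (64, 1))
shaded = depth_darkening(color, depth)
```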

    Ray Tracing Complex Scenes on a Multiple-Instruction Stream Multiple-Data Stream Concurrent Computer

    The ray tracing technique generates perhaps the most realistic-looking computer-generated images. It does so at the cost of a great deal of computer time. Many algorithms have been developed to speed up the ray tracing procedure, but it still remains the most CPU-intensive realistic image synthesis method. To date, ray tracing has remained largely in the realm of serial computers. The research in this thesis takes ray tracing strongly into the parallel computing domain and deals effectively with all of the central issues surrounding the parallelization of this procedure. Results from the "Hypercube Ray Tracer" are collected and compared against other ray tracing systems. A new technique for ray tracing Constructive Solid Geometry objects is also developed and implemented.
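
    The abstract describes a MIMD parallelization of ray tracing. As a toy, hypothetical sketch of the general task decomposition (not the hypercube implementation), the code below splits the image into scanline blocks and traces each block in a separate worker process against a single hard-coded sphere.

```python
# Toy sketch of image-space parallel ray tracing: the frame is split into
# scanline blocks and each worker traces its block independently. This only
# illustrates the task decomposition, not the hypercube implementation.
import numpy as np
from multiprocessing import Pool

WIDTH, HEIGHT = 128, 128
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0

def trace_pixel(px, py):
    """Shade one pixel with a single ray against one sphere."""
    # Camera at origin looking down -z; simple pinhole mapping to [-1, 1].
    d = np.array([2.0 * px / WIDTH - 1.0, 1.0 - 2.0 * py / HEIGHT, -1.0])
    d /= np.linalg.norm(d)
    oc = -SPHERE_CENTER                      # ray origin is (0, 0, 0)
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    return 1.0 if disc >= 0.0 else 0.0       # hit -> white, miss -> black

def render_rows(row_range):
    y0, y1 = row_range
    return [[trace_pixel(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

if __name__ == "__main__":
    blocks = [(y, min(y + 16, HEIGHT)) for y in range(0, HEIGHT, 16)]
    with Pool() as pool:
        rows = [r for block in pool.map(render_rows, blocks) for r in block]
    image = np.array(rows)                   # HEIGHT x WIDTH intensity image
    print(image.shape, image.sum())
```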

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
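
    As a one-dimensional toy illustration of the two variance-reduction ideas mentioned above (importance sampling, and a control variate with a known integral), the sketch below estimates a simple integral; the integrand, sampling density, and control variate are hypothetical stand-ins, not the dissertation's hierarchical product-sampling machinery.

```python
# Generic 1D illustration of importance sampling and control variates.
# Toy setup only, not the dissertation's hierarchical product-sampling method.
import numpy as np

rng = np.random.default_rng(0)

def integrand(x):
    """Stand-in for lighting * BRDF on [0, 1]."""
    return np.exp(-4.0 * x) * (1.0 + 0.5 * np.sin(8.0 * x))

def importance_sampling(n):
    """Sample from p(x) ~ exp(-4x) on [0, 1] by inverting its CDF."""
    norm = (1.0 - np.exp(-4.0)) / 4.0           # integral of exp(-4x) on [0, 1]
    u = rng.random(n)
    x = -np.log(1.0 - u * (1.0 - np.exp(-4.0))) / 4.0
    pdf = np.exp(-4.0 * x) / norm
    return np.mean(integrand(x) / pdf)

def control_variates(n):
    """Uniform sampling with g(x) = exp(-4x) as control variate."""
    g_integral = (1.0 - np.exp(-4.0)) / 4.0     # known in closed form
    x = rng.random(n)
    f, g = integrand(x), np.exp(-4.0 * x)
    beta = np.cov(f, g)[0, 1] / np.var(g)       # estimated optimal scaling
    return np.mean(f - beta * g) + beta * g_integral

print(importance_sampling(1024), control_variates(1024))
```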

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of the complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions for measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space. We visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test if subsurface scattering impacts gloss perception and examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media for future research in gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study to compare our prototype with the traditional material design system with gray-scale background and synthetic lighting. The results demonstrate that with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method can provide the fabrication recipe with higher reconstruction accuracy for a large fitting gamut. We demonstrate the efficacy of our solution by comparing the reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
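
    The perceptual-space construction relies on multi-dimensional scaling of pairwise dissimilarities. The sketch below shows that general technique only (classical MDS via double centering, with a made-up dissimilarity matrix), not the paper's crowd-sourced data or exact model.

```python
# Minimal classical MDS sketch: embed materials into a low-dimensional
# perceptual space from a precomputed pairwise dissimilarity matrix D.
import numpy as np

def classical_mds(D, dims=2):
    """Return an embedding X (n x dims) whose distances approximate D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:dims]     # largest eigenvalues first
    scale = np.sqrt(np.clip(eigvals[order], 0.0, None))
    return eigvecs[:, order] * scale

# Usage: dissimilarities between 4 hypothetical materials (symmetric, zero diagonal).
D = np.array([[0.0, 1.0, 2.0, 2.2],
              [1.0, 0.0, 1.5, 2.0],
              [2.0, 1.5, 0.0, 0.8],
              [2.2, 2.0, 0.8, 0.0]])
coords = classical_mds(D, dims=2)
print(coords)
```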

    Computer Graphics Learning Materials

    This thesis provides an overview of the learning material and a custom learning environment created for the Computer Graphics (MTAT.03.015) course at the University of Tartu. It describes the modular layout in which the course was organized, which mixes top-down and bottom-up approaches. The created material also includes interactive examples that correspond to level 4 of the engagement taxonomy. The specification and implementation details of the custom learning environment, called CGLearn, are given. The thesis concludes with an analysis of the feedback questionnaire answered by the students participating in the course and using the material. Due to server problems, the appendix file is available at: http://comserv.cs.ut.ee/forms/ati_report/files/ComputerGraphicsLearningMaterialsAppendix.zip

    NaRPA: Navigation and Rendering Pipeline for Astronautics

    This paper presents the Navigation and Rendering Pipeline for Astronautics (NaRPA) - a novel ray-tracing-based computer graphics engine to model and simulate light transport for space-borne imaging. NaRPA incorporates lighting models with attention to atmospheric and shading effects for the synthesis of space-to-space and ground-to-space virtual observations. In addition to image rendering, the engine also possesses point cloud, depth, and contour map generation capabilities to simulate passive and active vision-based sensors and to facilitate the design, testing, or verification of visual navigation algorithms. The physically based rendering capabilities of NaRPA and the efficacy of the proposed rendering algorithm are demonstrated using applications in representative space-based environments. A key demonstration includes NaRPA as a tool for generating stereo imagery and its application to 3D coordinate estimation using triangulation. Another prominent application of NaRPA is a novel differentiable rendering approach for image-based attitude estimation, proposed to highlight the efficacy of the NaRPA engine for simulating vision-based navigation and guidance operations. Comment: 49 pages, 22 figures
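
    As a minimal sketch of the stereo triangulation step mentioned above, the code below recovers a 3D point from its projections in two views using the linear (DLT) method; the camera matrices and the test point are hypothetical and not tied to NaRPA's API or conventions.

```python
# Minimal linear (DLT) triangulation sketch: recover a 3D point from its
# projections in two views, given hypothetical 3x4 camera matrices P1, P2.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Solve A X = 0 for the homogeneous 3D point seen at pixels x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Usage: two pinhole cameras with a 1 m baseline along x, focal length 500 px.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.3, -0.2, 5.0, 1.0])                # ground-truth 3D point
x1 = (P1 @ point)[:2] / (P1 @ point)[2]                # projection in view 1
x2 = (P2 @ point)[:2] / (P2 @ point)[2]                # projection in view 2
print(triangulate(P1, P2, x1, x2))                     # ~ [0.3, -0.2, 5.0]
```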

    Appearance Preserving Rendering of Out-of-Core Polygon and NURBS Models

    In Computer Aided Design (CAD), trimmed NURBS surfaces are widely used due to their flexibility. For rendering and simulation, however, piecewise linear representations of these objects are required. A relatively new field in CAD is the analysis of long-term strain tests. After such a test the object is scanned with a 3D laser scanner for further processing on a PC. In all these areas of CAD, the number of primitives as well as their complexity has grown constantly in recent years. This growth far exceeds the increase in processor speed and memory size, creating the need for fast out-of-core algorithms. This thesis describes a processing pipeline from the input data, in the form of triangular or trimmed NURBS models, to the interactive rendering of these models at high visual quality. After discussing the motivation for this work and introducing basic concepts of complex polygon and NURBS models, the second part of this thesis starts with a review of existing simplification and tessellation algorithms. Additionally, an improved stitching algorithm to generate a consistent model after tessellation of a trimmed NURBS model is presented. Since surfaces need to be modified interactively during the design phase, a novel trimmed NURBS rendering algorithm is presented. This algorithm removes the bottleneck of generating and transmitting a new tessellation to the graphics card after each modification of a surface by evaluating and trimming the surface on the GPU. To achieve high visual quality, the appearance of a surface can be preserved using texture mapping. Therefore, a texture mapping algorithm for trimmed NURBS surfaces is presented. To reduce the memory requirements for the textures, the algorithm is modified to generate compressed normal maps that preserve the shading of the original surface. Since texturing is only possible when a parametric mapping of the surface - requiring additional memory - is available, a new simplification and tessellation error measure is introduced that preserves the appearance of the original surface by controlling the deviation of normal vectors. The preservation of normals and possibly other surface attributes allows interactive visualization for quality control applications (e.g. isophotes and reflection lines). In the last part, out-of-core techniques for processing and rendering of gigabyte-sized polygonal and trimmed NURBS models are presented. Then the modifications necessary to support streaming of simplified geometry from a central server are discussed, and finally an LOD selection algorithm to support interactive rendering of hard and soft shadows is described.
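
    The rendering algorithm evaluates and trims NURBS surfaces on the GPU. As a CPU-side sketch of the underlying evaluation machinery only (Cox-de Boor basis functions and rational curve evaluation, with hypothetical control points and weights, not the thesis's GPU kernels), consider the following.

```python
# CPU-side sketch of basic NURBS machinery: Cox-de Boor basis functions and
# rational curve evaluation (the thesis performs the corresponding surface
# evaluation and trimming on the GPU).
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th degree-p basis function at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_curve_point(u, degree, knots, control_points, weights):
    """Evaluate a rational (NURBS) curve point as a weighted basis combination."""
    num = np.zeros(3)
    den = 0.0
    for i in range(len(control_points)):
        b = bspline_basis(i, degree, u, knots) * weights[i]
        num += b * control_points[i]
        den += b
    return num / den

# Usage: a quadratic NURBS arc with hypothetical control points and weights.
knots = [0, 0, 0, 1, 1, 1]
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 0.0, 0.0]])
weights = [1.0, 0.7071, 1.0]
print(nurbs_curve_point(0.5, 2, knots, ctrl, weights))
```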

    Real Time Graphics Rendering Capabilities of Modern Game Engines

    The subject of this thesis pertains to the most popular modern game engines and their capabilities for rendering 3D graphics in real time. Despite the advances in 3D graphics rendering, modern game engines still need to face the challenge of providing high-fidelity visual results in as little time as possible. In this thesis, I investigate the current real-time graphics rendering capabilities of popular game engines, along with ways to extend the graphics pipeline of an engine to incorporate custom techniques that can enhance the visual fidelity of a scene. In recent years, there has been rapid development in offline rendering and visualization methods and, along with the development of computer systems, those methods manage to represent the real world with great accuracy. However, when it comes to real-time rendering, the results can vary, since the process of graphics rendering needs to occur in significantly less time. This thesis examines how modern game engines face this challenge by analyzing their different aspects, including their graphics rendering capabilities, as well as by investigating the ability to implement custom graphics rendering algorithms, in particular screen space ambient occlusion and screen space directional occlusion.
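
    As a simplified, CPU-side sketch of screen space ambient occlusion (the first of the two techniques tested), the code below darkens each pixel according to how many nearby depth samples lie in front of it. Real SSAO/SSDO implementations run as GPU fragment shaders on view-space positions and normals, so this only conveys the general idea.

```python
# Simplified CPU-side sketch of screen space ambient occlusion on a linear
# depth buffer (illustrative only; not a production fragment shader).
import numpy as np

rng = np.random.default_rng(1)

def ssao(depth, radius=4, n_samples=16, bias=0.02):
    """Return an ambient-occlusion factor per pixel (1 = unoccluded)."""
    h, w = depth.shape
    occlusion = np.zeros((h, w))
    offsets = rng.integers(-radius, radius + 1, size=(n_samples, 2))
    for dy, dx in offsets:
        # Depth of the sampled neighbor, with clamped image coordinates.
        ys = np.clip(np.arange(h) + dy, 0, h - 1)
        xs = np.clip(np.arange(w) + dx, 0, w - 1)
        neighbor = depth[ys][:, xs]
        # A neighbor noticeably closer to the camera occludes this pixel.
        occlusion += (neighbor < depth - bias).astype(float)
    return 1.0 - occlusion / n_samples

# Usage: a step in depth produces darkening along the depth discontinuity.
depth = np.full((64, 64), 0.8)
depth[:, 32:] = 0.3
ao = ssao(depth)
```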