
    A comparison of real-time anti-aliasing methods on virtual reality headsets

    Virtual reality and head-mounted displays have gained popularity in the past few years. Their increased field of view, combined with a display that sits near the eyes, has increased the importance of anti-aliasing, i.e. the softening of the visible jagged edges that result from insufficient rendering resolution. This thesis studies the elementary theory of real-time rendering, anti-aliasing and virtual reality. Based on the theory and a review of recent studies, multisample anti-aliasing (MSAA), fast approximate anti-aliasing (FXAA) and temporal anti-aliasing (TAA) were implemented in a real-time deferred rendering engine, and the techniques were compared on both subjective image quality and objective performance measures. Within the scope of this thesis, only each method's ability to prevent or lessen jagged edges and the flickering of small, detailed geometry is examined. Performance was measured on two different machines: the FXAA implementation was found to be the fastest, with a 3% performance impact, and required the least memory; the TAA performance impact was 10-11%; and the MSAA impact ranged from 22% to 62%, depending on the sample count. Each technique's ability to prevent or reduce aliasing was examined by measuring the visual quality and fatigue reported by participants. Each anti-aliasing method was presented in a 3D scene viewed through an Oculus Rift CV1. The results indicate that 4xMSAA and 2xMSAA had clearly the best visual quality and made participants the least fatigued. FXAA did not appear as good visually, but did not cause significant fatigue. TAA appeared slightly blurry and exhibited ghosting for most of the participants, which caused them to experience more fatigue. This study emphasizes the need to understand the human visual system when developing real-time graphics for virtual reality applications.
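As a rough illustration of the post-process idea behind FXAA mentioned above, the following sketch (an illustrative simplification with assumed names, not the thesis's implementation or the actual FXAA shader) detects edges from local luma contrast and blends the offending pixels with their 3x3 neighbourhood:

```python
import numpy as np

def luma(rgb):
    """Perceptual luma from RGB, as used by FXAA-style edge filters."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def soften_edges(img, contrast_threshold=0.125):
    """img: (H, W, 3) float array in [0, 1]. Returns a smoothed copy.

    Pixels whose 3x3 luma neighbourhood exceeds the contrast threshold
    are treated as edge pixels and blended with their neighbours.
    """
    l = luma(img)
    out = img.copy()
    h, w = l.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbours = l[y - 1:y + 2, x - 1:x + 2]
            contrast = neighbours.max() - neighbours.min()
            if contrast > contrast_threshold:
                # Blend the edge pixel with the 3x3 neighbourhood average.
                out[y, x] = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3).mean(axis=0)
    return out
```

Real FXAA estimates an edge direction and blends along it with a handful of taps per pixel, which is what keeps its measured overhead as low as the ~3% reported above.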

    A comparative evaluation of glossy-surface shading results between object-space and screen-space shading

    The field of computer graphics places a premium on striking an optimal balance between the fidelity of the visual representation and rendering performance. For traditional shading techniques that operate in screen space, the level of fidelity is generally tied to the screen resolution and thus the number of pixels we render. Special application areas, such as stereo rendering for virtual reality head-mounted displays, demand high output update rates and screen resolutions, which can lead to significant performance penalties. It would therefore be beneficial to use a rendering technique that can be decoupled from the output update rate and resolution without too severely affecting the achieved rendering quality. One technique capable of meeting this goal is performing a 3D model's surface shading in an object-specific space. In this thesis we have implemented such a shading method, with the lighting computations over a model's surface done on a model-specific, uniquely parameterized texture map we call a light map. As the shading is computed per light-map texel, its cost does not depend on the output resolution or update rate. Additionally, we use the texture-sampling hardware built into the graphics processing units ubiquitous in modern computing systems to obtain high-quality anti-aliasing of the shading results. The end result is a surface appearance that is theoretically expected to be close to that of highly supersampled screen-space shading techniques. In addition to the object-space lighting technique, we also implemented a traditional screen-space version of our shading algorithm. Both techniques were used in a user study we organized to test this theoretical expectation. The results from the study indicated that the object-space shaded images are perceptually close to identical to heavily supersampled screen-space images.
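The object-space idea above can be sketched in two stages (function and parameter names here are illustrative assumptions, not the thesis's actual code): shading is evaluated once per light-map texel, so its cost is fixed by the light-map resolution, and the screen pass reduces to a texture lookup.

```python
import numpy as np

def shade_light_map(normals, light_dir, albedo=0.8):
    """Lambertian shading per texel of a uniquely parameterized light map.

    normals: (H, W, 3) array mapping each texel to its surface normal;
    light_dir: unit light direction. Returns an (H, W) texel radiance map.
    """
    ndotl = np.clip(normals @ np.asarray(light_dir), 0.0, None)
    return albedo * ndotl

def sample_screen(light_map, uv):
    """Screen-space pass: fetch precomputed shading by UV coordinate
    (nearest neighbour here, standing in for the GPU's filtered fetch)."""
    h, w = light_map.shape
    x = np.clip((uv[..., 0] * w).astype(int), 0, w - 1)
    y = np.clip((uv[..., 1] * h).astype(int), 0, h - 1)
    return light_map[y, x]
```

The decoupling is visible in the shapes: `shade_light_map` touches `H * W` texels regardless of how many screen pixels later call `sample_screen`.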

    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting have been described, covering both solid surfaces and participating media, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study the rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray-marching algorithm, we will suggest and implement different optimizations that allow the computation to be performed at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. The first is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated along rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the heaviest burdens on any rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.
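The classic ray-marching integrator that the thesis takes as its starting point can be sketched as follows (a minimal absorption-only sketch using Beer-Lambert attenuation; the function names and the omission of scattering are assumptions for illustration):

```python
import numpy as np

def ray_march(density_at, emission_at, t0, t1, steps=64, sigma_a=1.0):
    """Integrate radiance along a ray parameterized by t in [t0, t1].

    density_at(t), emission_at(t): medium properties sampled along the ray.
    Returns (accumulated radiance, remaining transmittance).
    """
    dt = (t1 - t0) / steps
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * dt            # midpoint of the current step
        d = density_at(t)
        # Emission at this step, attenuated by everything in front of it.
        radiance += transmittance * emission_at(t) * d * dt
        # Beer-Lambert absorption over the step.
        transmittance *= np.exp(-sigma_a * d * dt)
    return radiance, transmittance
```

The cost is linear in `steps` per ray, which is why the optimizations discussed above (adaptive stepping, empty-space skipping, early termination once transmittance is negligible) matter for interactive frame rates.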

    Hardware-accelerated interactive data visualization for neuroscience in Python.

    Large datasets are becoming more and more common in science, particularly in neuroscience where experimental techniques are rapidly evolving. Obtaining interpretable results from raw data can sometimes be done automatically; however, there are numerous situations where there is a need, at all processing stages, to visualize the data in an interactive way. This enables the scientist to gain intuition, discover unexpected patterns, and find guidance about subsequent analysis steps. Existing visualization tools mostly focus on static publication-quality figures and do not support interactive visualization of large datasets. While working on Python software for visualization of neurophysiological data, we developed techniques to leverage the computational power of modern graphics cards for high-performance interactive data visualization. We were able to achieve very high performance despite the interpreted and dynamic nature of Python, by using state-of-the-art, fast libraries such as NumPy, PyOpenGL, and PyTables. We present applications of these methods to visualization of neurophysiological data. We believe our tools will be useful in a broad range of domains, in neuroscience and beyond, where there is an increasing need for scalable and fast interactive visualization.
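The performance pattern described above rests on keeping the hot path in NumPy rather than Python loops. A hedged sketch of the data-preparation side (the function name and interleaved layout are assumptions, not the authors' code): pack many signal traces into one contiguous float32 array so a single buffer upload can hand them to the GPU.

```python
import numpy as np

def pack_signals(signals, dtype=np.float32):
    """signals: (n_channels, n_samples) array of traces.

    Returns an interleaved (x, y) float32 vertex array, x0 y0 x1 y1 ...,
    ready to be uploaded in one call to a GPU vertex buffer
    (e.g. via PyOpenGL's glBufferData).
    """
    n_channels, n_samples = signals.shape
    # Shared normalized time axis in [-1, 1], repeated per channel.
    x = np.tile(np.linspace(-1.0, 1.0, n_samples, dtype=dtype), n_channels)
    y = signals.astype(dtype).ravel()          # one trace after another
    return np.column_stack([x, y]).ravel()     # interleave x and y
```

Because every step is a vectorized NumPy operation, packing millions of samples takes milliseconds, and the interpreter never touches individual values.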

    Design Guidelines for Agent Based Model Visualization

    In the field of agent-based modeling (ABM), visualizations play an important role in identifying, communicating and understanding important behavior of the modeled phenomenon. However, many modelers tend to create ineffective visualizations of agent-based models due to a lack of experience with visual design. This paper provides ABM visualization design guidelines in order to improve visual design with ABM toolkits. These guidelines will assist the modeler in creating clear and understandable ABM visualizations. We begin by introducing a non-hierarchical categorization of ABM visualizations. This categorization serves as a starting point in the creation of an ABM visualization. We go on to present well-known design techniques in the context of ABM visualization. These techniques are based on Gestalt psychology, the semiology of graphics, and scientific visualization. They improve the visualization design by facilitating specific tasks and by providing a common language for critiquing visualizations through the use of visual variables. Subsequently, we discuss the application of these design techniques to simplify, emphasize and explain an ABM visualization. Finally, we illustrate these guidelines using a simple redesign of a NetLogo ABM visualization. These guidelines can be used to inform the development of design tools that assist users in the creation of ABM visualizations.
    Keywords: visualization, design, graphics, guidelines, communication, agent-based modeling

    Temporal issues of animate response


    Gigavoxels: ray-guided streaming for efficient and detailed voxel rendering

    Figure 1: Images show volume data consisting of billions of voxels rendered with our dynamic sparse-octree approach. Our algorithm achieves real-time to interactive rates on volumes far exceeding GPU memory capacity, thanks to efficient streaming based on a ray-casting solution. Essentially, the volume is only used at the resolution needed to produce the final image. Besides the gains in memory and speed, our rendering is inherently anti-aliased.
    We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation that depends on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly from information extracted during rendering. Our data structure exploits the fact that in CG scenes, details are often concentrated at the interface between free space and clusters of density, and it shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps while respecting the limited GPU memory budget. This is the author's version of the paper; the definitive version has been published in the I3D 2009 conference proceedings.
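The view-dependent resolution idea above ("the volume is only used at the resolution needed to produce the final image") can be sketched as a level-of-detail selection rule (a toy simplification with assumed names, not the paper's GPU data structure): pick the octree/mipmap level whose voxel footprint matches the pixel footprint at the sample's distance.

```python
import numpy as np

def select_lod(distance, pixel_angle, voxel_size, max_level):
    """Return the mip level whose voxel size best covers one screen pixel.

    distance: array of distances from the eye to the samples;
    pixel_angle: angular size of one pixel in radians;
    voxel_size: finest-level voxel size in world units.
    """
    footprint = distance * pixel_angle            # world-space pixel size
    # Each coarser level doubles the voxel size, hence the log2.
    level = np.log2(np.maximum(footprint / voxel_size, 1.0))
    return np.minimum(level.astype(int), max_level)
```

Nearby samples resolve to level 0 (full resolution) while distant ones request only coarse bricks, which is what lets the working set stay within the GPU memory budget.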

    Deep-Learning Realtime Upsampling Techniques in Video Games

    This paper addresses the challenge of keeping up with the ever-increasing graphical complexity of video games and introduces a deep-learning approach to mitigating it. As games become more and more demanding in terms of their graphics, it becomes increasingly difficult to maintain high-quality images while also ensuring good performance. This is where deep learning super sampling (DLSS) comes in. The paper explains how DLSS works, including its use of convolutional autoencoder neural networks and various other techniques and technologies. It also covers how the network is trained and optimized, as well as how it incorporates temporal anti-aliasing and frame-generation techniques to enhance the final image quality. We also discuss the effectiveness of these techniques and compare their performance to rendering at native resolution.
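The temporal component mentioned above can be illustrated with a hedged sketch (plain nearest-neighbour upsampling and exponential blending standing in for DLSS's learned network; names and parameters are assumptions): each new low-resolution, jitter-sampled frame is upsampled to the target resolution and blended into a history buffer.

```python
import numpy as np

def upsample_nearest(frame, scale):
    """Stand-in for the learned upsampler: nearest-neighbour repeat."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def accumulate(history, new_frame, alpha=0.1):
    """Blend a new upsampled frame into the history buffer.

    history, new_frame: (H, W, 3) float arrays at output resolution.
    Lower alpha keeps more history (smoother but more prone to ghosting).
    """
    return (1.0 - alpha) * history + alpha * new_frame
```

DLSS replaces the naive upsampler with a trained network and uses motion vectors to reproject the history before blending, but the accumulate-over-time structure is the same.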