63 research outputs found
Practical morphological antialiasing on the GPU
The subject of antialiasing techniques has been actively explored for the past 40 years. The classical approach computes the average of multiple samples for each final pixel, and graphics hardware vendors implement various refinements of these algorithms. Computing multiple samples (MSAA) can be very costly depending on the complexity of the shading, or in the case of ray tracing. Moreover, image-space techniques such as deferred shading are incompatible with hardware MSAA, since the lighting stage is decorrelated from the geometry stage. A filter-based approach called Morphological Antialiasing (MLAA) was recently introduced [2009]. This technique does not need multiple samples and can be implemented efficiently on the CPU using vector instructions. However, the filter is not linear and requires deep branching and image-wide knowledge, which can be very inefficient on graphics hardware. We introduce an efficient adaptation of the MLAA algorithm that runs flawlessly on medium-range GPUs.
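The first stage of an MLAA-style post-process is detecting discontinuity edges between neighboring pixels. The sketch below illustrates that step only; the luma threshold and the plain-Python loops are illustrative assumptions, not the paper's GPU implementation.

```python
# Hedged sketch: MLAA-style edge detection. Marks a per-pixel boolean
# wherever the luma difference with the left or top neighbor exceeds a
# threshold (threshold value is an assumption for illustration).

def luma(rgb):
    """Rec. 601 luma approximation for an (r, g, b) tuple in [0, 1]."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def detect_edges(image, threshold=0.1):
    """Return (left_edges, top_edges): per-pixel booleans marking a
    discontinuity with the left / top neighbor."""
    h, w = len(image), len(image[0])
    left = [[False] * w for _ in range(h)]
    top = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            l = luma(image[y][x])
            if x > 0 and abs(l - luma(image[y][x - 1])) > threshold:
                left[y][x] = True
            if y > 0 and abs(l - luma(image[y - 1][x])) > threshold:
                top[y][x] = True
    return left, top

# A 2x2 image with a hard vertical black/white boundary: only the
# left-neighbor edges on the second column should fire.
img = [[(0, 0, 0), (1, 1, 1)],
       [(0, 0, 0), (1, 1, 1)]]
left, top = detect_edges(img)
```

A real implementation would then classify the edge patterns and blend along them; this sketch stops at the detection mask.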
Morphological Antialiasing and Topological Reconstruction
Morphological antialiasing is a post-processing approach that does not require computing additional samples. The algorithm acts as a non-linear filter, ill-suited to massively parallel hardware architectures. We redesigned the initial method using multiple passes, with, in particular, a new approach to line-length computation. We also introduce the notion of topological reconstruction into the method to correct the weaknesses of post-processing antialiasing techniques. Our method runs as a pure post-process filter, providing full-image antialiasing at high frame rates and competing with traditional MSAA.
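The line-length computation mentioned above amounts to finding, for each pixel on an edge, how far the contiguous edge run extends in each direction (the blending weight depends on these distances). A minimal sequential sketch of that quantity, assuming a boolean edge mask per row; a GPU version would compute it in iterative passes rather than with a scan:

```python
# Hedged sketch of edge-run ("line length") computation: for each pixel
# on a horizontal edge, the distances to both ends of the contiguous run.

def run_lengths(edge_row):
    """For a row of booleans, return (dist_left, dist_right) per pixel:
    distance to the start/end of the contiguous True run (0 if False)."""
    n = len(edge_row)
    dist_left = [0] * n
    dist_right = [0] * n
    for i in range(n):                      # forward scan
        if edge_row[i]:
            dist_left[i] = dist_left[i - 1] + 1 if i > 0 else 1
    for i in range(n - 1, -1, -1):          # backward scan
        if edge_row[i]:
            dist_right[i] = dist_right[i + 1] + 1 if i < n - 1 else 1
    return dist_left, dist_right

row = [False, True, True, True, False]
dl, dr = run_lengths(row)   # dl = [0, 1, 2, 3, 0], dr = [0, 3, 2, 1, 0]
```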
Optimization techniques for computationally expensive rendering algorithms
Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance, but they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work studies rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray-marching algorithm, we suggest and implement different optimizations that allow performing the computation at interactive frame rates. This thesis also analyzes two different aspects of the generation of anti-aliased images. The first targets the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method takes advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly.
This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of rendering budgets.
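The traditional ray-marching algorithm the thesis starts from integrates emission and absorption along a ray through the medium in fixed steps. A minimal sketch of that loop, under toy assumptions (a user-supplied density and emission function, a fixed step count):

```python
# Hedged sketch of basic ray marching through a participating medium:
# accumulate radiance while attenuating transmittance step by step.
import math

def ray_march(density_at, emission_at, t_max, steps=100):
    """Accumulate radiance and transmittance along the ray [0, t_max]."""
    dt = t_max / steps
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                  # midpoint of the step
        sigma = density_at(t)               # extinction coefficient
        radiance += transmittance * emission_at(t) * sigma * dt
        transmittance *= math.exp(-sigma * dt)
    return radiance, transmittance

# Homogeneous medium (sigma = 1, unit emission) over a unit-length ray:
# transmittance should approach exp(-1) and radiance 1 - exp(-1).
rad, tr = ray_march(lambda t: 1.0, lambda t: 1.0, t_max=1.0)
```

The optimizations the thesis describes (empty-space skipping, adaptive step sizes, precomputation) all build on reducing the work done inside this loop.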
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware.
What you'll learn:
- The latest ray tracing techniques for developing real-time applications in multiple domains
- Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR)
- How to implement high-performance graphics for interactive visualizations, games, simulations, and more
Who this book is for:
- Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing
- Students looking to learn about best practices in these areas
- Enthusiasts who want to understand and experiment with their new GPU
On-the-Fly Power-Aware Rendering
Power saving is a prevailing concern in desktop computers and, especially, in battery-powered devices such as mobile phones. This is generating a growing demand for power-aware graphics applications that can extend battery life while preserving good quality. In this paper, we address this issue by presenting a real-time power-efficient rendering framework, able to dynamically select the rendering configuration with the best quality within a given power budget. Unlike the current state of the art, our method does not require precomputation of the whole camera-view space, nor Pareto curves to explore the vast power-error space; as such, it can also handle dynamic scenes. Our algorithm is based on two key components: our novel power prediction model, and our runtime quality error estimation mechanism. These components allow us to search for the optimal rendering configuration at runtime, while remaining transparent to the user. We demonstrate the performance of our framework on two different platforms: a desktop computer and a mobile device. In both cases, we produce results close to the maximum quality, while achieving significant power savings.
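The selection problem described above can be stated simply: among the rendering configurations whose predicted power fits the budget, pick the one with the smallest estimated quality error. The sketch below shows only that decision step; the configuration names and numbers are invented, and the paper's power prediction model and error estimator are not reproduced here.

```python
# Hedged sketch: pick the best-quality rendering configuration within a
# power budget. Configurations and values are illustrative assumptions.

def best_config(configs, power_budget):
    """configs: list of (name, predicted_watts, quality_error).
    Return the config with the smallest error among those within budget;
    fall back to the cheapest config if none fits."""
    feasible = [c for c in configs if c[1] <= power_budget]
    if not feasible:
        return min(configs, key=lambda c: c[1])
    return min(feasible, key=lambda c: c[2])

configs = [
    ("full_res_4xmsaa", 9.5, 0.00),
    ("full_res_noaa",   7.0, 0.02),
    ("half_res_fxaa",   4.5, 0.06),
]
choice = best_config(configs, power_budget=8.0)  # picks "full_res_noaa"
```

In the paper this choice happens at runtime, which is why cheap power prediction and error estimation, rather than exhaustive precomputation, are the key components.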
All-passive pixel super-resolution of time-stretch imaging
Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (approx. 2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing.
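The core pixel-SR idea above, several low-rate samplings of the same signal offset by known subpixel shifts fused into one denser trace, can be sketched in a toy form. Real time-stretch data would require registration to estimate the shifts; here they are assumed known, and the 1-D signal stands in for an image line.

```python
# Hedged sketch of pixel super-resolution by interleaving: low-rate
# frames with known subsample shifts are fused onto a finer grid.

def interleave(frames, shifts, upscale):
    """frames: equal-length low-res sample lists; shifts: integer offset
    of each frame's samples on the fine grid (0..upscale-1).
    Returns the fused high-resolution trace."""
    n = len(frames[0])
    fused = [0.0] * (n * upscale)
    for frame, s in zip(frames, shifts):
        for i, v in enumerate(frame):
            fused[i * upscale + s] = v
    return fused

# A fine signal 0..7 sampled twice at half rate, with shifts 0 and 1,
# is recovered exactly by interleaving the two frames.
fine = list(range(8))
f0 = fine[0::2]          # [0, 2, 4, 6]
f1 = fine[1::2]          # [1, 3, 5, 7]
fused = interleave([f0, f1], [0, 1], upscale=2)
```

The paper's contribution is that in time-stretch imaging these shifts arise for free from asynchronous sampling, so no active shift control is needed.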
A User-Focused Study of Free Software for Realistic 3D Modeling and Visualization
Computer graphics is a field that has grown considerably in recent years, in areas such as film, video games, and animation; the progress has been so great that the resemblance to reality keeps increasing. Nowadays practically every movie has effects generated through computer graphics, as do even simple television advertisements, not to mention the realism of today's video games.
This study aims to present two alternatives in the world of computer graphics; to that end, two programs are used, Blender and Unreal Engine. The scene in question is modeled entirely from scratch and is the same in both programs. Several renders of the scene are produced in both programs, using different materials and different types of lighting, both in real time and offline, in order to show the various possible alternatives.
The field has been growing very quickly in the last couple of years; areas such as film, video games, and architecture are seeing a big step up in image quality, and the realism achieved nowadays is remarkable. Almost every movie or game is made using computer graphics processes, and even some television ads use computer graphics.
The objective of the study is to show two rendering techniques from two different programs, Blender and Unreal Engine. To do that, some models are going to be modeled in both programs; then different materials and different types of illumination are going to be tested to show some possible alternatives.
A Comparison of Real-Time Anti-Aliasing Methods on a Virtual Reality Headset
Virtual reality and head-mounted devices have gained popularity in the past few years. Their increased field of view, combined with a display that sits near the eyes, has increased the importance of anti-aliasing, i.e. the softening of the visible jagged edges that result from insufficient rendering resolution.
In this thesis, the elementary theory of real-time rendering, anti-aliasing, and virtual reality is studied. Based on the theory and a review of recent studies, multisample anti-aliasing (MSAA), fast-approximate anti-aliasing (FXAA), and temporal anti-aliasing (TAA) were implemented in a real-time deferred rendering engine, and the techniques were compared in both subjective image quality and objective performance measures. Within the scope of this thesis, only each method's ability to prevent or lessen jagged edges and the flickering of small, detailed geometry is examined.
Performance was measured on two different machines. The FXAA implementation was found to be the fastest, with a 3% impact on performance, and required the least memory; the TAA performance impact was 10-11%, and the MSAA impact ranged from 22% to 62% depending on the sample count.
Each technique's ability to prevent or reduce aliasing was examined by measuring the visual quality and fatigue reported by participants. Each anti-aliasing method was presented in a 3D scene using an Oculus Rift CV1.
The results indicate that 4xMSAA and 2xMSAA had clearly the best visual quality and made participants the least fatigued. FXAA appeared visually not as good, but did not cause significant fatigue. TAA appeared slightly blurry to most of the participants, which caused them to experience more fatigue.
This study emphasizes the need for understanding the human visual system when developing real-time graphics for virtual reality applications.
Virtual reality (VR) and VR headsets have become common in recent years. Because of the considerably larger field of view of VR headsets and a display placed close to the eyes, anti-aliasing, i.e. edge-softening techniques, has become important.
This thesis presents a literature review of the fundamentals of real-time rendering, anti-aliasing, and virtual reality. Based on the theory and recent studies, three anti-aliasing methods, fast-approximate (FXAA), temporal (TAA), and multisampling (MSAA), were selected for implementation in a real-time application and for closer study in terms of performance and subjectively tested visual quality. With respect to visual quality, the thesis focuses only on each method's ability to prevent or reduce edge aliasing and, for example, the flickering of small geometric detail.
In the performance measurements, FXAA was the fastest of the methods (a 3% performance cost), TAA incurred a 10-11% performance cost, and MSAA was the slowest, with a 22-62% performance cost.
The subjective quality test measured quality of experience, consisting of ratings of visual quality and fatigue in the different conditions. The stimuli, i.e. the different anti-aliasing methods, were presented in a real-time 3D environment viewed with an Oculus Rift CV1 headset.
According to the results, the four- and two-sample versions of MSAA were clearly the best in visual quality and caused the least fatigue in the participants. FXAA was found to be of lower quality, but did not cause more fatigue than MSAA. TAA caused clearly the most fatigue and was the worst in quality, due to excessive blur and ghosting.
This study emphasizes the importance of understanding the human visual system when developing real-time graphics for VR applications.
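The TAA behavior discussed in this thesis, jagged edges smoothed at the cost of blur and ghosting, comes from blending each new frame into an exponential history buffer. A minimal sketch of that accumulation step, assuming a static scene with no reprojection or history clamping, and an illustrative blend factor:

```python
# Hedged sketch of TAA accumulation: an exponential moving average over
# frames. Alpha, the scene, and the 1-D "framebuffer" are toy assumptions.

def taa_accumulate(history, current, alpha=0.1):
    """Blend the current frame into the history buffer."""
    return [h * (1.0 - alpha) + c * alpha for h, c in zip(history, current)]

# An aliased edge flickering between two jittered samplings converges
# toward their average over many frames (the anti-aliased value):
frame_a = [0.0, 1.0]
frame_b = [1.0, 0.0]
hist = frame_a
for i in range(200):
    hist = taa_accumulate(hist, frame_a if i % 2 else frame_b)
# both pixels settle near 0.5, the average of the two samplings
```

The same averaging that resolves the flicker also smears fine detail, which is consistent with the blur and ghosting the participants reported.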
- …