342 research outputs found

    Frameless Rendering

    Get PDF
    The aim of this work is to create a simple raytracer using the IPP library that employs the frameless rendering technique. The first part of the work focuses on the ray tracing method. The next part analyzes the frameless rendering technique and its adaptive version, with a focus on adaptive sampling. The third part describes the IPP library and the implementation of a simple raytracer using it. The last part evaluates the rendering speed and image quality of the implemented system.
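
    Frameless rendering decouples sampling from display refreshes: instead of redrawing whole frames, randomly chosen pixels are re-traced continuously while the display scans out whatever is currently in the buffer. A minimal sketch of that loop (not the thesis's IPP-based implementation; `trace_ray` is a hypothetical stand-in for a real shading routine):

```python
import random

WIDTH, HEIGHT = 64, 48

def trace_ray(x, y, t):
    # Hypothetical stand-in for a real ray tracer: shade pixel (x, y) at time t.
    return ((x + y + t) % 256, 0, 0)

def frameless_update(framebuffer, t, samples_per_tick=256):
    # Instead of redrawing the whole frame, update a random subset of pixels.
    # Recent samples reflect the newest scene state while old pixels age in
    # place until they happen to be resampled.
    for _ in range(samples_per_tick):
        x = random.randrange(WIDTH)
        y = random.randrange(HEIGHT)
        framebuffer[y][x] = trace_ray(x, y, t)

fb = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
for tick in range(10):
    frameless_update(fb, tick)
```

    Because no frame boundary exists, input changes can influence pixels as soon as the next sample lands, which is the source of the latency benefit discussed in these works.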

    Construction and Evaluation of an Ultra Low Latency Frameless Renderer for VR.

    Get PDF
    © 2016 IEEE. Latency, the delay between a user's action and the response to that action, is known to be detrimental to virtual reality. Latency is typically characterised as a discrete value: a delay constant in time and space. This characterisation is incomplete: latency changes across the display during scan-out, and how it does so depends on the rendering approach used. In this study, we present an ultra-low-latency real-time ray-casting renderer for virtual reality, implemented on an FPGA. Our renderer has a latency of 1 ms from tracker to pixel. Its frameless nature means that the region of the display with the lowest latency immediately follows the scan-beam. This is in contrast to frame-based systems such as those using typical GPUs, for which latency increases as scan-out proceeds. Using a series of high- and low-speed videos of our system in use, we confirm its latency of 1 ms. We examine how the renderer performs when driving a traditional sequential scan-out display on a readily available HMD, the Oculus Rift DK2. We contrast this with an equivalent apparatus built using a GPU. Using captured human head motion and a set of image quality measures, we assess the ability of these systems to faithfully recreate the stimuli of an ideal virtual reality system: one with a zero-latency tracker, renderer and display running at 1 kHz. Finally, we examine the results of these quality measures and how each rendering approach is affected by velocity of movement and display persistence. We find that our system, with a lower average latency, more faithfully draws what the ideal virtual reality system would. Further, we find that low display persistence lowers the velocity sensitivity of both systems, but much more so for ours.

    Frameless Rendering

    Get PDF
    This master's thesis deals with real-time rendering of computer graphics using the frameless rendering method as a counterpart to the traditional approach, which is based on switching between two output buffers. The frameless rendering method is defined and studied in depth, and its adaptive variant, which delivers better output quality without significantly reducing responsiveness, is described in detail. The thesis also describes the implementation of an application developed to demonstrate the principle and functionality of frameless rendering on selected scenes, and evaluates the performed tests with a focus on output quality.
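
    The adaptive variant concentrates samples where the image is changing. One simple way to illustrate the idea is weighted tile selection driven by a per-tile change estimate (the names, the weighting floor, and the tile granularity here are illustrative, not taken from the thesis):

```python
import random

def pick_tile(change, rng):
    # Weighted choice: tiles with larger recent change get proportionally
    # more samples; a small floor keeps static tiles from starving entirely.
    weights = [max(c, 0.05) for c in change]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

rng = random.Random(42)
change = [0.0, 0.0, 1.0, 0.0]   # tile 2 is where motion happened
counts = [0, 0, 0, 0]
for _ in range(10_000):
    counts[pick_tile(change, rng)] += 1
```

    With this weighting, the moving region receives the bulk of the sampling budget while quiet regions are still refreshed occasionally, which is the quality/latency balance these theses examine.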

    Doctor of Philosophy

    Get PDF
    Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable. The spatial quality parameter is selected for the system by the designer based on the anticipated graphics workload. Interactive ray tracing is one example; the algorithm is often selected due to its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm to perform fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed towards spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem that adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
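
    The surplus-capacity idea can be sketched as a tiny allocation routine: reserve the minimum sampling work for the target fidelity, then split what remains between spatial and temporal improvement. This is a hypothetical simplification for illustration, not the dissertation's actual control optimization:

```python
def allocate_samples(budget, min_spatial, surplus_to_spatial=0.5):
    # First reserve the minimum sampling work needed for the target fidelity,
    # then split the surplus capacity between extra spatial samples and
    # extra temporal updates (returned as two sample counts).
    if budget < min_spatial:
        return budget, 0  # cannot even meet the spatial floor; no surplus
    surplus = budget - min_spatial
    extra_spatial = int(surplus * surplus_to_spatial)
    extra_temporal = surplus - extra_spatial
    return min_spatial + extra_spatial, extra_temporal
```

    In the dissertation the split itself is chosen by an optimization over measured image characteristics rather than a fixed ratio as here.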

    Hacia el modelado 3D de tumores cerebrales mediante endoneurosonografía y redes neuronales (Towards 3D modeling of brain tumors using endoneurosonography and neural networks)

    Get PDF
    Minimally invasive surgeries have become popular because they reduce the typical risks of traditional interventions. In neurosurgery, recent trends suggest the combined use of endoscopy and ultrasound (endoneurosonography, or ENS) for 3D virtualization of brain structures in real time. ENS information can be used to generate 3D models of brain tumors during surgery. This paper introduces a methodology for 3D modeling of brain tumors using ENS and unsupervised neural networks. The use of self-organizing maps (SOM) and neural gas networks (NGN) is studied in particular. Compared to other techniques, 3D modeling using neural networks offers advantages: tumor morphology is encoded directly in the synaptic weights of the network, no a priori knowledge is required, and the representation can be developed in two stages, offline training and online adaptation. Experimental tests were performed using medical phantoms of brain tumors. At the end of the paper, the results of 3D modeling from an ENS database are presented.
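
    To illustrate how a SOM encodes morphology directly in its synaptic weights, here is a minimal 1-D self-organizing map fitted to a 3-D point cloud. This is a toy stand-in: the paper's network topologies, parameters, and ENS data differ.

```python
import math
import random

def train_som(points, n_nodes=16, epochs=200, seed=1):
    # Self-organizing map in its simplest form: each node's weight vector is
    # pulled toward input points, with lattice neighbors of the winning node
    # pulled less strongly. The node weights end up tracing the data's shape.
    rng = random.Random(seed)
    nodes = [list(rng.choice(points)) for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)          # decaying learning rate
        radius = max(1.0, n_nodes / 4 * (1 - epoch / epochs))
        for p in points:
            # Winner: the node nearest to the input point.
            win = min(range(n_nodes),
                      key=lambda i: sum((nodes[i][d] - p[d]) ** 2
                                        for d in range(3)))
            for i in range(n_nodes):
                dist = abs(i - win)              # 1-D lattice neighborhood
                if dist <= radius:
                    h = math.exp(-dist * dist / (2 * radius * radius))
                    for d in range(3):
                        nodes[i][d] += lr * h * (p[d] - nodes[i][d])
    return nodes
```

    After training, the node weights themselves are the 3-D model, which is why no explicit surface extraction or a priori shape knowledge is needed.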

    Adaptive frameless raycasting for interactive volume visualization

    Get PDF
    There have been many successful attempts to improve ray casting and ray tracing performance in the last decades. Many of these improvements form important steps towards high-performance interactive visualisation. However, growing challenges keep pace with enhancements: display resolutions skyrocket with modern technology and applications become more and more sophisticated. With the limits of Moore's law in sight, there have been many efforts to speed up well-known algorithms, including a plenitude of publications on frameless rendering. In frameless renderers, sampling is not synchronised with display refreshes, which allows both spatially and temporally varying sample rates. One basic approach simply randomises samples entirely. This increases liveliness and reduces input delay, but also leads to distorted and blurred images during movement. Dayal et al. tackle this problem by focusing samples on complex regions and by applying approximating filters to reconstruct an image from incoherent buffer content. Their frameless ray tracer vastly reduces latency and yet produces outstanding image quality. In this thesis we transfer these concepts to volume ray casting. Volume data often poses different challenges due to its lack of planes and surfaces, and its fine granularity. We experiment with both Dayal's sampling and reconstruction techniques and examine their applicability to volume data. In particular, we examine whether their adaptive sampler performs as well on volume data and which adaptations might be necessary. Further, we develop another reconstruction filter designed to remove artefacts that frequently occur in our frameless renderer. Instead of assuming certain properties based on local sampling rates and colour gradients, our filter detects artefacts by their age signature in the buffer. Our filter appears to be more targeted and yet requires only constant time per pixel.
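
    The age-signature idea can be illustrated with a toy per-pixel blend that down-weights stale samples: each buffer entry carries the time it was produced, and reconstruction weights it by recency. This is a grayscale sketch with hypothetical names, not the thesis's actual filter:

```python
import math

def reconstruct_pixel(colors, ages, now, tau=8.0):
    # Blend a pixel's samples (its own and its neighbors'), weighting each by
    # how recently it was produced: stale samples (large age) contribute less,
    # so ghosting from outdated samples fades. Constant work per pixel.
    wsum = 0.0
    acc = 0.0
    for c, t in zip(colors, ages):
        w = math.exp(-(now - t) / tau)
        acc += w * c
        wsum += w
    return acc / wsum if wsum > 0 else 0.0
```

    Because the weighting depends only on timestamps already stored in the buffer, it needs no estimate of local sampling rate or colour gradients, matching the constant-time-per-pixel claim.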

    Optimization techniques for computationally expensive rendering algorithms

    Get PDF
    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images. One is targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions in rendering budgets.

    Building Integrated Photovoltaics (BIPV): Review, Potentials, Barriers and Myths

    Get PDF
    To date, none of the predictions that have been made about the emerging BIPV industry have really hit the target. The anticipated boom has so far stalled and, despite developing and promoting a number of excellent systems and products, many producers around the world have been forced to quit on purely economic grounds. The authors believe that after this painful cleansing of the market, a massive counter-trend will follow, enlivened and carried forward by more advanced PV technologies and ever-stricter climate policies designed to achieve energy neutrality in a cost-effective way. As a result, the need for BIPV products for use in construction will undergo first a gradual and then a massive increase. The planning of buildings with multifunctional, integrated roof and façade elements capable of fulfilling the technical and legal demands will become an essential, accepted part of the architectonic mainstream and will also contribute to an aesthetic valorisation. Until then, various barriers need to be overcome in order to facilitate and accelerate BIPV. Besides issues related to the mere cost-efficiency ratio, psychological and social factors also play an evident role. The goal of energy change linked to greater use of renewables can be successfully achieved only when all aspects are taken into account and when visual appeal and energy efficiency thus no longer appear to be an oxymoron.

    RTSDF: Generating Signed Distance Fields in Real Time for Soft Shadow Rendering

    Full text link
    Signed Distance Fields (SDFs) for surface representation are commonly generated offline and subsequently loaded into interactive applications such as games. Since they are not updated every frame, they only provide a rigid surface representation. While there are methods to generate them quickly on the GPU, the efficiency of these approaches is limited at high resolutions. This paper showcases a novel technique that combines jump flooding and ray tracing to generate approximate SDFs in real time for soft shadow approximation, achieving prominent shadow penumbras while maintaining interactive frame rates.
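
    Jump flooding, one of the two ingredients the paper combines, propagates nearest-seed information across a grid in a logarithmic number of passes with halving step sizes. A CPU sketch of the unsigned 2-D case follows (illustrative only; the paper's implementation runs on the GPU and combines it with ray tracing):

```python
import math

def jump_flood(n, seeds):
    # Each cell tracks its current best (nearest) seed. Passes with halving
    # step sizes let seed information jump across the grid, so only O(log n)
    # passes are needed to produce an approximate nearest-seed/distance field.
    best = [[None] * n for _ in range(n)]
    for (sx, sy) in seeds:
        best[sy][sx] = (sx, sy)
    step = n // 2
    while step >= 1:
        nxt = [row[:] for row in best]
        for y in range(n):
            for x in range(n):
                for dy in (-step, 0, step):
                    for dx in (-step, 0, step):
                        qx, qy = x + dx, y + dy
                        if 0 <= qx < n and 0 <= qy < n and best[qy][qx]:
                            cand = best[qy][qx]
                            if (nxt[y][x] is None or
                                (cand[0] - x) ** 2 + (cand[1] - y) ** 2 <
                                (nxt[y][x][0] - x) ** 2 + (nxt[y][x][1] - y) ** 2):
                                nxt[y][x] = cand
        best = nxt
        step //= 2
    # Distance to nearest seed per cell (unsigned; a signed field would also
    # need inside/outside classification, e.g. from the ray-tracing pass).
    return [[math.dist((x, y), best[y][x]) for x in range(n)]
            for y in range(n)]
```

    The result is approximate for multiple seeds (a well-known property of jump flooding), which is acceptable here because the field only drives soft shadow estimation.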