
    An asynchronous method for cloud-based rendering

    Interactive high-fidelity rendering is still unachievable on many consumer devices. Cloud gaming services have shown promise in delivering interactive graphics beyond the individual capabilities of user devices. However, these systems suffer from notable shortcomings: high network bandwidth is required for higher resolutions, and input lag caused by network fluctuations heavily disrupts the user experience. In this paper, we present a scalable solution for interactive high-fidelity graphics based on a distributed rendering pipeline in which direct lighting is computed on the client device and indirect lighting in the cloud. The client device keeps a local cache for indirect lighting which is asynchronously updated using an object-space representation; this allows us to achieve interactive rates that are unconstrained by network performance across a wide range of display resolutions, while remaining robust to input lag. Furthermore, in multi-user environments, the computation of indirect lighting is amortised over the participating clients.
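
    A minimal Python sketch of the client-side combination step this abstract describes: direct lighting is shaded locally every frame, while an object-space cache of indirect lighting is refreshed asynchronously as cloud packets arrive. The names IndirectCache and shade_pixel and the per-patch keying are illustrative assumptions, not the authors' implementation.

import threading

class IndirectCache:
    """Object-space cache of indirect lighting: one irradiance value per surface patch."""
    def __init__(self):
        self._values = {}              # patch_id -> (r, g, b) irradiance
        self._lock = threading.Lock()

    def update_from_cloud(self, patch_updates):
        # Invoked by a background network thread whenever a cloud packet
        # arrives; the render loop never blocks on this.
        with self._lock:
            self._values.update(patch_updates)

    def lookup(self, patch_id):
        with self._lock:
            # Fall back to zero indirect light (direct-only shading) for
            # patches the cloud has not delivered yet.
            return self._values.get(patch_id, (0.0, 0.0, 0.0))

def shade_pixel(direct_rgb, albedo_rgb, patch_id, cache):
    """Merge locally computed direct lighting with cached indirect lighting."""
    indirect = cache.lookup(patch_id)
    return tuple(d + a * i for d, a, i in zip(direct_rgb, albedo_rgb, indirect))

    Because the cache is keyed in object space rather than screen space, client camera motion never invalidates it; only lighting changes require fresh data from the cloud, which is what decouples frame rate from network performance.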

    Scalable Remote Rendering using Synthesized Image Quality Assessment

    Depth-image-based rendering (DIBR) is widely used to support 3D interactive graphics on low-end mobile devices. Although it reduces the rendering cost on a mobile device, it essentially turns that cost into depth-image transmission cost, or bandwidth consumption, introducing a performance bottleneck into a remote rendering system. To address this problem, we design a scalable remote rendering framework based on synthesized image quality assessment. Specifically, we design an efficient synthesized-image quality metric based on Just Noticeable Distortion (JND), which properly measures human-perceived geometric distortions in synthesized images. Based on this, we predict quality-aware reference viewpoints, with viewpoint intervals optimized by the JND-based metric. An adaptive transmission scheme is also developed to control depth-image transmission based on perceived quality and network bandwidth availability. Experimental results show that our approach effectively reduces transmission frequency and network bandwidth consumption while maintaining perceived quality on mobile devices. A prototype system is implemented to demonstrate the scalability of the proposed framework to multiple clients.
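
    A sketch of the adaptive transmission idea, under stated assumptions: predict_distortion, the one-dimensional viewpoint distance, and all thresholds below are placeholders standing in for the paper's JND-based metric and optimized viewpoint intervals.

def predict_distortion(viewpoint_distance):
    # Placeholder model: perceived geometric distortion in the DIBR-warped
    # view grows with distance from the last reference viewpoint.
    return 4.0 * viewpoint_distance

def should_transmit(viewpoint, last_reference, bandwidth_kbps,
                    jnd_threshold=1.0, min_interval=0.1):
    """Send a new reference depth image only when the predicted distortion
    becomes noticeable, tolerating more distortion when bandwidth is tight."""
    distance = abs(viewpoint - last_reference)   # 1-D stand-in for viewpoint distance
    distortion = predict_distortion(distance)
    budget_scale = 1.0 if bandwidth_kbps >= 2000 else 2000.0 / max(bandwidth_kbps, 1)
    return distance > min_interval and distortion > jnd_threshold * budget_scale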

    Using per-pixel linked lists for transparency effects in remote-rendering

    Modern graphics cards are highly versatile because they allow the programmer to load custom code onto them for execution. This can be used to construct a structure called a per-pixel linked list, which contains all fragments composing the scene. However, with the need to render ever more complex geometry, even the most powerful hardware quickly reaches its limits. To overcome this problem, the geometry is rendered on multiple systems instead of one, and the intermediate images are finally composited into a single result. This is called remote rendering and works well for opaque scenes. The goal of this thesis is to render transparent objects remotely using per-pixel linked lists. Since rendering such objects requires a step called blending, standard approaches are incapable of displaying them. Three different methods are presented, analyzed, and compared for their usability and performance. First, limiting the number of depth layers is discussed. Second, identifying regions of visual change is used to reduce the amount of data to be sent. Finally, a way of reusing previously sent fragments for the current frame is studied in detail.
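
    The per-pixel linked list is a concrete data structure, so a small sketch may help. On the GPU it lives in structured buffers filled by fragment shaders using atomic head exchanges; the Python below only mirrors the data layout and the back-to-front blend used to resolve transparency.

class PerPixelLists:
    """One head pointer per pixel plus a shared node pool, as in GPU OIT."""
    def __init__(self, width, height):
        self.heads = [[-1] * width for _ in range(height)]  # -1 == empty list
        self.nodes = []  # each node: (rgba, depth, next_index)

    def insert(self, x, y, rgba, depth):
        # Prepend, mirroring the atomic head exchange done in a fragment shader.
        self.nodes.append((rgba, depth, self.heads[y][x]))
        self.heads[y][x] = len(self.nodes) - 1

    def resolve(self, x, y):
        # Gather this pixel's fragments, sort far to near, then alpha-blend.
        frags, i = [], self.heads[y][x]
        while i != -1:
            rgba, depth, i = self.nodes[i]
            frags.append((depth, rgba))
        frags.sort(key=lambda f: f[0], reverse=True)  # back to front
        out = (0.0, 0.0, 0.0)
        for _, (r, g, b, a) in frags:
            out = tuple(a * s + (1.0 - a) * o for s, o in zip((r, g, b), out))
        return out

    The three methods of the thesis then act on this structure: truncating the number of depth layers per list, transmitting only nodes in regions of visual change, and reusing previously transmitted fragments across frames.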

    Scalable Remote Rendering with Depth and Motion-flow Augmented Streaming

    [Figure 1: frames n-1, n-0.5, and n. Remote rendering allows navigating in complex scenes even on weak client hardware. But not only final images are of interest on the client side; auxiliary information like depth or motion becomes increasingly attractive in this context for various purposes. Examples include spatio-temporal upsampling (images 1, 2), 3D stereo rendering (image 3), and frame extrapolation (image 4). Standard encoders (H.264 in image 1) are currently not always well adapted to such streams, and our contribution is a novel method to efficiently encode and decode augmented video streams with high quality (compare the insets in images 1 and 2).]
    In this paper, we focus on efficient compression and streaming of frames rendered from a dynamic 3D model. Remote rendering and on-the-fly streaming are becoming increasingly attractive for interactive applications: data is kept confidential and only images are sent to the client. Even if the client's hardware resources are modest, the user can interact with state-of-the-art rendering applications executed on the server. Our solution focuses on augmented video information, e.g., depth, which is key to increasing robustness with respect to data loss and image reconstruction.
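
    As an illustration of why a motion-flow channel is worth streaming, here is a hedged sketch of client-side frame extrapolation (purpose 4 in the figure); the scatter-based warp and the hole handling are simplifications, not the paper's codec.

def extrapolate(frame, motion, t=0.5):
    """Scatter each pixel of the last decoded frame forward along its
    per-pixel motion vector, scaled by t, to synthesize an in-between
    frame (e.g., frame n-0.5) without waiting for the server.
    frame: 2-D list of colors; motion: matching 2-D list of (dx, dy) pixels."""
    h, w = len(frame), len(frame[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = motion[y][x]
            nx, ny = int(round(x + t * dx)), int(round(y + t * dy))
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    # Remaining None entries are disocclusion holes; in practice these are
    # filled using the streamed depth channel or simple inpainting.
    return out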

    Efficient streaming for high fidelity imaging

    Researchers and practitioners of graphics, visualisation and imaging have an ever-expanding list of technologies to account for, including (but not limited to) HDR, VR, 4K, 360°, light field and wide colour gamut. As these technologies move from theory to practice, the methods of encoding and transmitting this information need to become more advanced and capable year on year, placing greater demands on latency, bandwidth, and encoding performance. High dynamic range (HDR) video is still in its infancy; the tools for capture, transmission and display of true HDR content are still restricted to professional technicians. Meanwhile, computer graphics are nowadays near-ubiquitous, but to achieve the highest fidelity in real or even reasonable time a user must be located at or near a supercomputer or other specialist workstation. These physical requirements mean that it is not always possible to demonstrate these graphics in any given place at any time, and when the graphics in question are intended to provide a virtual reality experience, the constraints on performance and latency are even tighter. This thesis presents an overall framework for adapting upcoming imaging technologies for efficient streaming, constituting novel work across three areas of imaging technology. Over the course of the thesis, high dynamic range capture, transmission and display are considered, before the focus turns specifically to the transmission and display of high-fidelity rendered graphics, including HDR graphics. Finally, the thesis considers the technical challenges posed by emerging head-mounted displays (HMDs). In addition, a full literature review is presented across all three of these areas, detailing state-of-the-art methods for approaching all three problem sets. In the area of high dynamic range capture, transmission and display, a framework is presented and evaluated for efficient processing, streaming and encoding of high dynamic range video using general-purpose graphics processing unit (GPGPU) technologies. For remote rendering, state-of-the-art methods of augmenting a streamed graphical render are adapted to incorporate HDR video and high-fidelity graphics rendering, specifically with regard to path tracing. Finally, a novel method is proposed for streaming graphics to an HMD for virtual reality (VR). This method utilises 360° projections to transmit and reproject stereo imagery to an HMD with minimal latency, with an adaptation for the rapid local production of depth maps.
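
    A minimal sketch of the local-reprojection idea behind the HMD method, assuming an equirectangular mapping for the 360° frame (the abstract does not specify the exact projection): the server streams a full panorama, and the headset resamples it for the current head orientation, so rotational latency depends only on local work, not on the network.

import math

def direction_to_equirect_uv(direction):
    """Map a unit view direction (x, y, z) to equirectangular texture
    coordinates in [0, 1]; sampling the panorama at (u, v) for every
    display pixel reprojects the stream to the current head pose."""
    x, y, z = direction
    u = math.atan2(x, -z) / (2.0 * math.pi) + 0.5            # longitude
    v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi    # latitude
    return u, v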

    Interactive High Performance Volume Rendering

    This thesis is about Direct Volume Rendering on high performance computing systems. As direct rendering methods do not create a lower-dimensional geometric representation, the whole scientific dataset must be kept in memory, so this family of algorithms has tremendous resource demands. Direct Volume Rendering algorithms are in general well suited to implementation on dedicated graphics hardware. Nevertheless, high performance computing systems often do not provide resources for hardware-accelerated rendering, so the visualization algorithm must be implemented for the available general-purpose hardware. Ever-growing datasets, which imply copying large amounts of data from the compute system to the scientist's workstation, and the need to review intermediate simulation results, make porting Direct Volume Rendering to high performance computing systems highly relevant. The contribution of this thesis is twofold. First, after devising a software architecture for general implementations of Direct Volume Rendering on highly parallel platforms, parallelization issues and implementation details for various modern architectures are discussed, resulting in a highly parallel implementation that targets several platforms. The second contribution is concerned with the display phase of the "Distributed Volume Rendering Pipeline": rendering on a high performance computing system typically implies displaying the rendered result at a remote location. This thesis presents a remote rendering technique that is capable of hiding latency and can thus be used in an interactive environment.
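
    Distributed volume rendering of this kind typically has each compute node ray-cast its own brick of the volume and then combines the partial images in visibility order. The following is a generic sketch of that "over" compositing step, not the thesis code; the premultiplied RGBA layout is an assumption.

def over(front, back):
    """Composite premultiplied RGBA 'front' over 'back', per channel."""
    return tuple(f + (1.0 - front[3]) * b for f, b in zip(front, back))

def composite(partials_front_to_back):
    """Fold per-node partial pixels, sorted front to back along the view
    ray, into a single output pixel."""
    result = (0.0, 0.0, 0.0, 0.0)
    for rgba in partials_front_to_back:
        result = over(result, rgba)
    return result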

    High-fidelity graphics using unconventional distributed rendering approaches

    High-fidelity rendering requires a substantial amount of computational resources to accurately simulate lighting in virtual environments. While desktop computing, with the aid of modern graphics hardware, has shown promise in delivering realistic rendering at interactive rates, real-time rendering of moderately complex scenes is still unachievable on the majority of desktop machines and on the plethora of mobile computing devices that have recently become commonplace. This work provides a wide range of computing devices with high-fidelity rendering capabilities via oft-unused distributed computing paradigms: it speeds up the rendering process on devices that are already capable and provides full functionality to those that are not. Novel scheduling and rendering algorithms have been designed to take best advantage of the characteristics of these systems and to demonstrate the efficacy of such distributed methods. The first is a novel system that provides multiple clients with parallel resources for rendering a single task and adapts in real time to the number of concurrent requests. The second is a distributed algorithm for the remote asynchronous computation of the indirect diffuse component, which is merged with locally computed direct lighting for a full global illumination solution. The third is a method for precomputing indirect lighting information for dynamically generated multi-user environments using the aggregated resources of the clients themselves. The fourth is a novel peer-to-peer system for improving rendering performance in multi-user environments through the sharing of computation results, propagated via a mechanism based on epidemiology. The results demonstrate that the boundaries of the distributed computing typically used for computer graphics can be significantly and successfully expanded by adopting alternative distributed methods.
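
    A toy sketch of the epidemiological propagation named in the fourth contribution: in a push-style gossip round, each peer forwards the result keys it holds to a few random neighbours, so a value computed once spreads to all peers in a handful of rounds. The structure and fanout below are assumptions for illustration, not the thesis implementation.

import random

def gossip_round(peers, fanout=2):
    """peers: dict peer_id -> set of computed result keys. One push round."""
    for pid, held in list(peers.items()):
        others = [q for q in peers if q != pid]
        for q in random.sample(others, min(fanout, len(others))):
            peers[q] |= held   # share everything this peer currently holds

# Example: a lighting result computed by peer 0 reaches all eight peers
# within a few rounds instead of being recomputed by each of them.
peers = {i: set() for i in range(8)}
peers[0].add("patch_42")
for _ in range(4):
    gossip_round(peers)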